id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---|
2304.06799 | $^{179}$Ta(n,$γ$) cross-section measurement and the astrophysical
origin of $^{180}$Ta isotope | Tantalum-180m is nature's rarest (quasi) stable isotope and its astrophysical
origin is an open question. A possible production site of this isotope is the
slow neutron capture process in Asymptotic Giant Branch stars, where it can be
produced via neutron capture reactions on unstable $^{179}$Ta. We report a new
measurement of the $^{179}$Ta($n,\gamma$)$^{180}$Ta cross section at thermal
neutron energies via the activation technique. Our results for the thermal and
resonance-integral cross-sections are 952 $\pm$ 57 b and 2013 $\pm$ 148 b,
respectively. The thermal cross section is in good agreement with the only
previous measurement (Phys. Rev C {\bf 60} 025802, 1999), while the resonance
integral is different by a factor of $\approx$1.7. While neutron energies in
this work are smaller than the energies in a stellar environment, our results
may lead to improvements in theoretical predictions of the stellar cross
section. | R. Garg, S. Dellmann, C. Lederer-Woods, C. G. Bruno, K. Eberhardt, C. Geppert, T. Heftrich, I. Kajan, F. Käppeler, B. Phoenix, R. Reifarth, D. Schumann, M. Weigand, C. Wheldon | 2023-04-13T19:51:09Z | http://arxiv.org/abs/2304.06799v1 | 179Ta(n,\(\gamma\)) cross-section measurement and the astrophysical origin of \({}^{180}\)Ta isotope
###### Abstract
\({}^{180\mathrm{m}}\)Ta is nature's rarest (quasi) stable isotope and its astrophysical origin is an open question. A possible production site of this isotope is the slow neutron capture process in Asymptotic Giant Branch stars, where it can be produced via neutron capture reactions on unstable \({}^{179}\)Ta. We report a new measurement of the \({}^{179}\)Ta(n,\(\gamma\))\({}^{180}\)Ta cross section at thermal neutron energies via the activation technique. Our results for the thermal and resonance-integral cross-sections are 952 \(\pm\) 57 b and 2013 \(\pm\) 148 b, respectively. The thermal cross section is in good agreement with the only previous measurement (Phys. Rev C **60** 025802, 1999), while the resonance integral is different by a factor of \(\approx\)1.7. While neutron energies in this work are smaller than the energies in a stellar environment, our results may lead to improvements in theoretical predictions of the stellar cross section.
## I Introduction
Tantalum-180 is one of the most interesting isotopes in nature. In its ground state, this isotope is unstable with a half-life of 8.15 hours; however, it has a high-spin (9\({}^{-}\)) meta-stable state at 77.2 keV that has a half-life of \(>\) 7 \(\times\) 10\({}^{15}\) years. This isomer, \({}^{180\mathrm{m}}\)Ta, is nature's rarest (quasi) stable isotope and its stellar origin remains an open question. At least three nucleosynthesis processes are thought to contribute to the \({}^{180}\)Ta abundance. References [1; 2; 3; 4] suggest that \({}^{180}\)Ta is produced by \({}^{180}\)Hf(\(\nu_{\mathrm{e}}\),e\({}^{-}\))\({}^{180}\)Ta and \({}^{181}\)Ta(\(\nu\),\(\nu^{\prime}\)n)\({}^{180}\)Ta reactions in the \(\nu\)-process in stellar explosions. Another proposed site is the p-process in O/Ne-rich layers in Type-II supernovae (SNII), where (\(\gamma\),n) reactions on \({}^{181}\)Ta lead to \({}^{180}\)Ta production [5; 6; 7; 8]. Finally, in low-mass asymptotic giant branch (AGB) stars, two reaction sequences have been suggested as sources of \({}^{180}\)Ta: (i) neutron capture on \({}^{179}\)Hf resulting in an isomeric state of \({}^{180}\)Hf (J\({}^{\pi}\) = 8\({}^{-}\), 1141 keV), which has a small \(\beta\)-decay branch to \({}^{180\mathrm{m}}\)Ta, and (ii) \(\beta\) decay of thermally excited states in \({}^{179}\)Hf to \({}^{179}\)Ta, and subsequent neutron capture to \({}^{180\mathrm{m}}\)Ta [9]. Figure 1 shows the two reaction paths with red and green arrows respectively. Käppeler _et al._[10] estimate that (ii) can explain 80-86% of the solar \({}^{180}\)Ta abundance, while path (i) seems to only contribute to a small extent [11]. However, a recent study [8] modelled s-process nucleosynthesis in AGB stars using the neutron capture cross-sections derived from statistical models (using experimentally obtained nuclear structure parameters [12]), and found only a negligible contribution to the observed \({}^{180}\)Ta abundance. They also studied the impact of the newly constrained value of the \({}^{179}\)Ta(n,\(\gamma\)) cross-section on the time-reversed reaction, \({}^{180}\)Ta(\(\gamma\),n)\({}^{179}\)Ta, which is the main mode of destruction of \({}^{180\mathrm{m}}\)Ta in the SNII p-process. They found that the new reaction rate reduces the \({}^{180\mathrm{m}}\)Ta overabundance in the p-process models. The variety of different predictions emphasises the need for accurate experimental data on nuclear reactions and stellar half-lives for the isotopes involved.
The destruction reaction \({}^{180\mathrm{m}}\)Ta(n,\(\gamma\)) has been measured by Wisshak _et al._[14]. A direct measurement of the \({}^{179}\)Ta(n,\(\gamma\)) cross-section at neutron energies relevant to s-process temperatures (keV neutron energies) has not been possible yet, due to the lack of availability of a ra
Figure 1: s-process reaction network. Grey and white boxes show the stable and unstable isotopes respectively. Thick black arrows show the main s-process path. The red and green arrows show the weak branching paths suggested by Refs. [13] and [9] respectively. The orange arrows show all the other reactions in the network. |
2305.18856 | A Federated Channel Modeling System using Generative Neural Networks | The paper proposes a data-driven approach to air-to-ground channel estimation
in a millimeter-wave wireless network on an unmanned aerial vehicle. Unlike
traditional centralized learning methods that are specific to certain
geographical areas and inappropriate for others, we propose a generalized model
that uses Federated Learning (FL) for channel estimation and can predict the
air-to-ground path loss between a low-altitude platform and a terrestrial
terminal. To this end, our proposed FL-based Generative Adversarial Network
(FL-GAN) is designed to function as a generative data model that can learn
different types of data distributions and generate realistic patterns from the
same distributions without requiring prior data analysis before the training
phase. To evaluate the effectiveness of the proposed model, we evaluate its
performance using Kullback-Leibler divergence (KL), and Wasserstein distance
between the synthetic data distribution generated by the model and the actual
data distribution. We also compare the proposed technique with other generative
models, such as FL-Variational Autoencoder (FL-VAE) and stand-alone VAE and GAN
models. The results of the study show that the synthetic data generated by
FL-GAN has the highest similarity in distribution with the real data. This
shows the effectiveness of the proposed approach in generating data-driven
channel models that can be used in different regions | Saira Bano, Pietro Cassarà, Nicola Tonellotto, Alberto Gotta | 2023-05-30T08:50:22Z | http://arxiv.org/abs/2305.18856v1 | # A Federated Channel Modeling System using Generative Neural Networks
###### Abstract
The paper proposes a data-driven approach to air-to-ground channel estimation in a millimeter-wave wireless network on an unmanned aerial vehicle. Unlike traditional centralized learning methods that are specific to certain geographical areas and inappropriate for others, we propose a generalized model that uses Federated Learning (FL) for channel estimation and can predict the air-to-ground path loss between a low-altitude platform and a terrestrial terminal. To this end, our proposed FL-based Generative Adversarial Network (FL-GAN) is designed to function as a generative data model that can learn different types of data distributions and generate realistic patterns from the same distributions without requiring prior data analysis before the training phase. To evaluate the effectiveness of the proposed model, we evaluate its performance using Kullback-Leibler divergence (KL), and Wasserstein distance between the synthetic data distribution generated by the model and the actual data distribution. We also compare the proposed technique with other generative models, such as FL-Variational Autoencoder (FL-VAE) and stand-alone VAE and GAN models. The results of the study show that the synthetic data generated by FL-GAN has the highest similarity in distribution with the real data. This shows the effectiveness of the proposed approach in generating data-driven channel models that can be used in different regions.
Federated learning, Unmanned aerial vehicles, Channel modeling, Generative neural networks
## I Introduction
Non-terrestrial networks (NTNs), such as low Earth orbit (LEO) satellite constellations, high-altitude platforms, and unmanned aerial vehicles (UAVs), have traditionally been used for disaster management and remote sensing [1]. However, they are now being seen as promising technologies for providing ubiquitous connectivity in the future generation of the Internet [2]. Such radio access networks (RAN), operating in the millimeter wave (mmWave) range, are very promising, providing global coverage and high capacity for reliable and efficient communications services [3]. The 3rd Generation Partnership Project (3GPP) has also recognized the potential of mmWave technology to support satellite communications.
Accurate statistical channel models are essential to characterize the mmWave link and to determine the underlying channel parameters to improve the transmission performance of wireless communication systems. Extensive research has been conducted to develop effective methods for accurate channel modeling, such as the mathematical propagation model proposed in [4] for estimating ground-to-air path loss between wireless devices and low-altitude platforms using mmWave frequency bands. Furthermore, deterministic channel models, such as ray-tracing techniques, as well as stochastic channel models are commonly used and require extensive technical knowledge and expertise for analyzing measurement data to estimate a comprehensive set of different channel parameters [5]. However, building statistical channel models to determine the underlying channel parameters that accurately capture the delay, direction, and path gains of individual links is difficult, especially in the mmWave domain.
Machine Learning (ML) techniques, such as Neural Networks (NNs), can be used to develop statistical channel models that overcome the limitations of conventional channel modeling systems [6]. However, these models result in channel parameters that are site-specific and may not be generally applicable. In this regard, generative NNs, which have proven to be very successful in modeling images and text, provide a suitable approach to data-driven channel modeling and can accurately represent complex environments. Initial research has explored the use of generative NNs for site-specific wireless channels. For example, in [7], the authors proposed generative networks to model channel parameters and trained five different models for five different cities. In contrast, our main goal is to develop a general model that can be used for all participating cities, considering an acceptable model performance for each of these different locations.
To this end, we propose a location-agnostic statistical channel propagation model based on Federated Learning (FL) that focuses on predicting the path loss component between a UAV and terrestrial nodes in mmWave communication networks. FL is a paradigm developed by Google that aims to build ML models with distributed datasets across multiple devices while maintaining privacy [8]. Participating users communicate parameters or gradients to a central server, which updates and distributes a global model without access to user data [9, 10, 11]. However, in this work, we use the FL framework as a distributed training engine to train our models on different datasets and develop a generalized channel model using Variational Autoencoder (VAE) and Conditional Generative Adversarial Network (CGAN) architectures, i.e., FL-VAE and FL-GAN. In our study, we rely on the statistical characteristics of the urban environment of
the target area collected through ray tracing simulations to train the models. The performance of the proposed approach is determined using various statistical parameters.
The remainder of the paper is organised as follows. Section II discusses the system model, while sections III and IV present the federated VAE and GAN approaches for channel modeling, respectively. Section V shows the experimental evaluation performed. Finally, Section VI draws conclusions.
## II System Model
In this work, we focus on channel parameter modeling, concentrating on the path loss component connecting UAVs to cellular base stations on the ground, i.e., gNBs. We propose a distributed training approach using FL for channel model estimation with two generative NNs. For modelling purposes, we assume that the UAVs act as transmitters and the ground base stations act as receivers, but the roles can be reversed. To model the air-to-ground channel, we assume two types of ground gNBs, one terrestrial and the other aerial, as in [12]. The aerial gNBs serve as dedicated stations (mounted on rooftops and tilted upward), while the terrestrial gNBs are for ground users (mounted at street level), as shown in Figure 1. In addition, we assume three link states between the transmitter and receiver, including Line of Sight (LOS), Non-LOS (NLOS), and no link (i.e., no paths are available). However, when modelling path loss between UAVs and gNBs, we mainly focus on NLOS paths since for LOS the path loss can be calculated using Friis' law [13].
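As a concrete point of reference for the LOS case mentioned above, the following is a minimal sketch of the Friis free-space path-loss calculation; the 200 m link distance is an arbitrary illustrative value, and only the 28 GHz carrier is taken from the dataset description.

```python
import numpy as np

def friis_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space (Friis) path loss in dB for an isotropic link."""
    c = 3e8  # speed of light, m/s
    wavelength = c / freq_hz
    # Friis' law: received/transmitted power ratio (4*pi*d/lambda)^-2,
    # expressed here as a positive loss in dB.
    return 20.0 * np.log10(4.0 * np.pi * distance_m / wavelength)

# Example: a 200 m LOS link at the 28 GHz carrier used in the dataset.
print(f"{friis_path_loss_db(200.0, 28e9):.1f} dB")  # ~107.4 dB
```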
We adopt the channel parameters estimated with the raytracer package by [7] as a benchmark dataset for our investigation. The raytracer simulations estimate the channel parameters, including path losses, azimuth and elevation angles of arrival and departure, and propagation delays. According to the dataset, there is a total of 20 paths per link and six parameters per path, resulting in 120 parameters per link with a maximum path loss of 200 dB [7]. The dataset consists of channel parameters for different cities estimated by using the ray-tracer package. Using this dataset, we train the generative models for each city in a decentralized manner. These standalone models can learn the channel representation of a UAV's local dataset in a given region but may have biases and be applicable only in a limited spatial domain. Therefore, a general model that is not tied to a specific environment is essential. To this end, we use FL to aggregate these standalone models and obtain a global model. We validate the generated model using CDF of path loss.
In the proposed approach, we use two generative NN models, both of which have a two-stage structure, i.e., link and path models [12]. In the first stage, an NN is used as a link model to determine the state of the link - whether it is LOS, NLOS, or no link, according to 3GPP requirements [14]. To determine the link state, the relative position of the UAV to the gNB and the type of gNB are used as inputs. After the link state is determined, a generative model, i.e., a path model, is used in the second stage to generate the path parameters. This generative model is trained to match the distribution of the training dataset. To perform the distributed training using FL, we trained the link-state model for each city and stored it on the corresponding station to use with the path model in FL. We then aggregate these generative models as described in Section III and Section IV, respectively. Once the model is trained, it can be used in simulations to statically determine channel parameters considering the link status.
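The following is an illustrative skeleton of this two-stage structure (a link-state classifier followed by a conditional path generator), not the authors' implementation; the layer widths loosely echo Table I, while the conditioning vector, latent size, and one-hot state encoding are assumptions made only for this sketch.

```python
import torch
import torch.nn as nn

class LinkStateModel(nn.Module):
    """Stage 1: classify a link as LOS, NLOS, or no-link from the link conditions."""
    def __init__(self, n_cond=5, n_states=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_cond, 25), nn.ReLU(),
            nn.Linear(25, 10), nn.ReLU(),
            nn.Linear(10, n_states),
        )

    def forward(self, u):
        return self.net(u)  # logits over the three link states

class PathGenerator(nn.Module):
    """Stage 2: generate path parameters conditioned on link conditions,
    link state, and a latent/noise vector."""
    def __init__(self, n_cond=5, n_states=3, n_latent=20, n_out=120):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_cond + n_states + n_latent, 280), nn.ReLU(),
            nn.Linear(280, n_out),
        )

    def forward(self, u, s_onehot, z):
        return self.net(torch.cat([u, s_onehot, z], dim=-1))

def sample_channel(link_model, path_model, u, n_states=3, n_latent=20):
    """Sample a link state, then path parameters, for condition vectors u."""
    state = torch.distributions.Categorical(logits=link_model(u)).sample()
    s_onehot = torch.nn.functional.one_hot(state, n_states).float()
    z = torch.randn(u.shape[0], n_latent)
    return state, path_model(u, s_onehot, z)
```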
## III Federated VAE
In this section, we describe our FL-VAE for channel modeling of the path loss component. We first introduce the basic concepts of VAE to understand its content (Section III-A) and then describe in detail the FL-VAE approach (Section III-B) used for modeling the channel parameters.
### _Variational Autoencoder (VAE)_
A VAE consists of encoder and decoder modules, where the encoder, defined as \(q_{\theta}(z|x)\), characterizes the distribution of the latent code \(z\) (the encoded representation of the input variables) given the input variables \(x\). On the other hand, the decoder, defined as \(p_{\phi}(x|z)\), characterizes the distribution of the decoded variables given the latent code, where \(\theta\) and \(\phi\) are the parameters of the encoder and decoder NNs, respectively. The loss function of the VAE given in [15] is as follows:
\[\mathcal{L}(\phi,\theta)=-\mathbb{E}_{q_{\theta}(z|x_{i})}\big{[}\log p_{\phi} (x_{i}|z)\big{]}+KL(q_{\theta}(z|x_{i})\|p(z)) \tag{1}\]
The first component of the expression represents the reconstruction loss corresponding to the expected negative log likelihood of each data point. The expected value is calculated based on the encoder's distribution over the representations, and this component is intended to provide an incentive for the decoder to acquire the ability to reconstruct the data. The second term is the KL divergence, which acts as a regularizer and measures the loss of information when we use \(q_{\theta}(z|x_{i})\) to represent \(p(z)\), the prior distribution defined over the latent space, i.e., a Gaussian distribution.
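A minimal PyTorch-style sketch of the loss in equation (1), assuming a diagonal-Gaussian encoder, a standard-normal prior \(p(z)\), and a fixed-variance Gaussian decoder (so the reconstruction term reduces to a squared error); this illustrates the objective and is not the trained architecture of [12].

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO of Eq. (1): reconstruction term + KL(q(z|x) || p(z)).

    Assumes the encoder outputs the mean `mu` and log-variance `log_var`
    of a diagonal Gaussian q(z|x), and p(z) is a standard normal prior.
    """
    # Reconstruction term: a squared error stands in for -E_q[log p(x|z)]
    # under a fixed-variance Gaussian decoder.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Analytic KL divergence between N(mu, sigma^2) and N(0, I).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```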
### _Fl-Vae_
FL-VAE uses the same VAE architecture proposed in [12] and trains the generative (path) model using the FL framework developed in [16]. The goal of FL-VAE is to capture the conditional distribution \(p(x|u)\) of all participating cities such that it tends to encode the local latent spaces of all cities into a single latent space and form a generic global model for generating channel parameters. VAEs can easily be trained in an FL framework since their encoder and decoder components are NNs.
Figure 1: System Model
Let \(\mathcal{V}\coloneqq(\theta^{e},\theta^{d})\) be the VAE parameters, and \(\theta^{e}\) and \(\theta^{d}\) be the weights of the encoder and decoder, respectively. A centralized server initiates the training by communicating the initial VAE weights \(\mathcal{V}^{t}\) to all agents in the participating city stations. Each agent in a city initializes its own VAE model with these weights and uses local training data and a pre-trained link model to obtain a latent representation of its own data. The local update of each city \(k\) is given by:
\[\mathcal{V}^{t+1}_{k}\longleftarrow\mathcal{V}^{t}_{k}-\eta\nabla\mathcal{L}( \mathcal{V}^{t}_{k}) \tag{2}\]
where \(\eta\) is the learning rate. Each city agent uses equation (2) to perform a few local training epochs on its local data and sends the updates \(\mathcal{V}^{t+1}_{k}\) to the central server. The server finally amalgamates the received updates with a weighted averaging approach given by:
\[\mathcal{V}^{t+1}=\sum_{k=1}^{K}\frac{n_{k}}{n}\mathcal{V}^{t}_{k} \tag{3}\]
Here \(n_{k}\) is the number of training examples at agent \(k\) and \(n\) is the total number of training examples across all participating cities. The server continues training until it obtains a global latent representation sufficient to represent all training data.
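A hedged sketch of the server-side aggregation of equation (3), assuming each city agent returns a full PyTorch state_dict after its local epochs of equation (2); function and variable names are placeholders rather than the authors' code.

```python
import copy
import torch

def fed_avg(local_states, n_k):
    """Weighted average of client model weights as in Eq. (3).

    `local_states` is a list of state_dicts (one per city agent) and
    `n_k` the number of training examples held by each agent.
    """
    n_total = float(sum(n_k))
    global_state = copy.deepcopy(local_states[0])
    for key in global_state:
        global_state[key] = sum(
            (n / n_total) * state[key].float()
            for state, n in zip(local_states, n_k)
        )
    return global_state

# One FL round (sketch): the server broadcasts the global weights, each city
# runs a few local epochs of Eq. (2) on its own data, and the server averages:
# global_model.load_state_dict(fed_avg([m.state_dict() for m in city_models], n_k))
```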
## IV Federated CGAN
In this section, we describe our FL-GAN approach to channel modeling. We first describe the Generative Adversarial Network (GAN) (Section IV-A) and then the FL-GAN (Section IV-B) used to model the channel parameters to form the generalised or universal model.
### _Generative Adversarial Network (GAN)_
The GAN is a popular concept first proposed in [17]. Its main purpose is to generate synthetic data that closely resembles real data. GANs use an unsupervised learning approach to detect patterns in the input data and generate new samples with the same distribution as the original data. It consists of two NNs: the generator (G) and the discriminator (D), which compete in a "min-max two-player game." The G generates synthetic (fake) data from the learned latent vector, while the D discriminates the synthetic data from the real data. These models are trained until the G replicates the original data so well that it becomes difficult for the D to distinguish between the fake and the real data.
To generate samples from a given target, the CGAN was introduced in [18]. A CGAN learns the mapping from an observed sample \(x\) and a random noise vector \(z\) to an output sample \(y\), represented as \(G:x,z\to y\), where \(G\) is the generator function. Both networks in CGAN aim to solve a "min-max loss" like GAN given by [18]:
\[\mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D})=\mathbb{E}_{x,y}\big[\log(\mathcal{D}(x,y))\big]+\mathbb{E}_{x,z}\big[\log(1-\mathcal{D}(x,\mathcal{G}(x,z)))\big] \tag{4}\]
G and D compete according to equation (4), where D tries to maximize the probability of assigning correct labels, and G tries to minimize that probability. In the next section, we describe the distributed approach using FL to train the CGAN.
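For illustration, a sketch of the per-batch CGAN objectives implied by equation (4); it assumes a discriminator with a sigmoid output and uses the common non-saturating generator loss rather than the literal minimax form, so it is a practical stand-in and not the authors' exact training code.

```python
import torch
import torch.nn.functional as F

def cgan_losses(D, G, x, y, z):
    """Discriminator and generator objectives for the conditional GAN of Eq. (4).

    `x` is the conditioning input (link conditions), `y` the real path
    parameters, and `z` a noise vector; D outputs a probability in (0, 1).
    """
    y_fake = G(x, z)
    d_real = D(x, y)
    d_fake = D(x, y_fake.detach())
    # Discriminator: maximize log D(x, y) + log(1 - D(x, G(x, z))),
    # written here as a binary cross-entropy minimization.
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    # Generator: the usual non-saturating form, maximize log D(x, G(x, z)).
    g_loss = F.binary_cross_entropy(D(x, y_fake), torch.ones_like(d_real))
    return d_loss, g_loss
```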
### _Fl-Gan_
We use the FL technique to train CGAN in a distributed manner. The training process is initiated by a central server, which communicates the initial parameters of generator and discriminator i.e., \(\theta^{G}\) and \(\theta^{D}\) to the agents in the cities. Each city agent initializes its own CGAN instance with the received parameters and trains it using local data and associated link state models. The updated parameters are then reported back to the server, which aggregates the updates from all cities as follows:
\[\theta^{G}=\sum_{k=1}^{K}\frac{n_{k}}{n}\theta^{G}_{k}\quad;\quad\theta^{D}= \sum_{k=1}^{K}\frac{n_{k}}{n}\theta^{D}_{k} \tag{5}\]
\(\theta^{G}\) and \(\theta^{D}\) in equation (5) are the aggregate parameter estimates of G and D, respectively. The server goes through this process until it develops a global CGAN that can generate synthetic samples from the distribution that captures the local data distributions. After training, each local city unit can generate the path parameters with \(\theta^{G}\).
## V Simulation Results
In this section, we describe the performed experiments to assess the efficiency and effectiveness of the proposed FL approach.
### _Dataset and settings_
In this work, we use raytracer data provided by [7]. The dataset consists of channel parameters from five different cities, each with different landscapes and structures. However, for this work, we use the channel parameters of three cities (Beijing, London, and Boston). In the raytracer simulation, the transmitting UAVs are positioned at different horizontal locations in each environment, with four possible heights: 30, 60, 90 and 120 m, to create the whole city dataset. A total of 36k links were created for Beijing, 25.8k for London, and 23k for Boston. All simulations were performed at a frequency of 28 GHz.
For our learning models, we used two generative NNs and trained them in a distributed manner using FL to build the FL-VAE and FL-GAN models. The main goal is to develop a distributed model using the FL framework that can be used universally for estimating channel parameters. In this context, we compare the generative models trained in a distributed manner and analyse which model is better at capturing the channel characteristics of the different latent spaces. We compare the results of these distributed models with the basic stand-alone models trained for each city using different statistical metrics, i.e., KL divergence and Wasserstein distance. The architecture and hyperparameters used to train these models are shown in Table I and Table II, respectively.
As mentioned earlier, in all cases our generative models consist of two cascaded models, the first of which is the link predictor and the second is the path generator. We first train the link predictor for each city separately and then use these pre-trained link models for simulation.
### _Results_
In this work, we propose a promising solution for extending the channel model to large-scale application scenarios by using a cooperative modeling approach with multiple distributed channel datasets. We first describe the results obtained in both centralized and distributed approaches. To ensure a fair comparison, we train all models with the same number of epochs and hyperparameters. In particular, we train the stand-alone models for 500 epochs and for the FL-VAE and FL-GAN models, we perform 100 rounds of local training, where each city trains its respective model for 5 epochs on its local data within each FL round.
#### V-B1 Stand-alone Models
Our goal is to measure the extent to which the data generated by the generative models (VAE and GAN networks) are comparable to the test data. To this end, we compare the CDF of the path losses of the generated and test data. Both trained generative models are able to capture the dual-slope nature of the CDF, which is a crucial component for the effectiveness of our proposed framework. However, due to space constraints, we only report in Table III the distances from the test data distribution for both the standalone models (VAE and GAN) and the distributed models (FL-VAE and FL-GAN).
#### V-B2 Fl-Vae
To evaluate the performance of our proposed decentralized model, we created CDF plots for the path losses of both the test data and the path losses generated by the FL-VAE model for each city. This allowed us to evaluate the generalizability of our federated global model, particularly in terms of its ability to accurately capture the channel characteristics of all participating cities. The results in Figures 1(a), 1(b), and 1(c) show that our federated model performs better compared to the individual models of each city. In addition, the FL-VAE approach helps address potential privacy and security issues related to data sharing between different cities. These measures ensure that individual city data sets are not shared outside of the city, thus maintaining privacy and security.
#### V-B3 Fl-Gan
Now we use the CGAN instead of the VAE to generate the channel parameters and compare its performance with the results we obtained with the FL-VAE and standalone GAN models. Our results show that the generative network learns the distribution of the channel modelling data very well and generates samples that exactly reflect the same distribution of the training dataset. It is also clear from Figures 2(a), 2(b), and 2(c) that FL-GAN produces better results for the path loss component of the channel parameters compared to FL-VAE. The results show that the channel parameters reconstructed using the FL-GAN approach are closest to the original test data and outperform the VAE-based methods. This can be attributed to the fact that it is difficult for VAEs to encode heterogeneous datasets from different cities into a common latent space, while GANs are better at learning diverse data. The FL-GAN approach is therefore better suited to deal with the challenges of heterogeneous data and produce synthetic data that accurately represents the actual data distribution.
### _Performance Metrics and Evaluation Results_
This section presents the evaluation metrics used to assess the performance of the proposed distributed techniques FL-VAE and FL-GAN in generating synthetic data compared to the standalone models trained separately for each city. Table III summarizes the KL divergence and Wasserstein distance results obtained by comparing the test data distribution with the synthetic data distribution generated by the VAE, GAN, FL-VAE, and FL-GAN networks. These metrics are used to measure the distance between the test data distribution and the synthetic data generated by each model, and provide information about the accuracy and quality of the generated data. These evaluation metrics used in Table III show that the distribution of synthetic data generated by the FL-GAN network is much closer to the true distribution compared to the other methods, i.e., the standalone networks and FL-VAE. This highlights the superiority of the proposed approach FL-GAN in accurately modeling the data and generating synthetic data that is very similar to the real data.
As shown in Table III, the KL divergence between the test data distribution and the synthetic data distribution of the standalone GAN model is much higher than that of the other alternatives. FL-GAN achieves the lowest KL divergence among the alternatives, which is due to the fact that GANs generally require more training time than VAEs but can generate better samples. We also evaluate our method using the Wasserstein distance, which takes the underlying metric space into account. Table III shows that FL-GAN significantly outperforms all other methods and achieves satisfactory performance in developing a global model for channel estimation parameters.
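A small sketch of how such sample-based comparisons can be computed, assuming 1-D path-loss samples and a shared histogram grid for the KL estimate; the binning choice and smoothing constant are arbitrary and not taken from the paper.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance

def distribution_distances(real, synth, n_bins=100):
    """KL divergence and 1-D Wasserstein distance between two samples
    (e.g. real vs. generated path-loss values in dB)."""
    lo, hi = min(real.min(), synth.min()), max(real.max(), synth.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    # Histogram both samples on a common grid; a small epsilon avoids log(0).
    p, _ = np.histogram(real, bins=bins, density=True)
    q, _ = np.histogram(synth, bins=bins, density=True)
    kl = entropy(p + 1e-12, q + 1e-12)
    wd = wasserstein_distance(real, synth)
    return kl, wd
```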
## VI Conclusion
NTNs are anticipated to play a crucial role in future wireless networks due to their cost efficiency and wide coverage area. In this paper, we present a comprehensive study that employs a generative framework based on NNs to model wireless channels in a distributed environment. In order to have a common model for different cities, we train distributed generative models and combine them into a unified and adaptable model. Specifically, we propose a channel model for air-to-ground communication of UAVs in mmWave frequency bands. Our distributed training method does not require any special knowledge or technical expertise, as it learns directly from massive raw channel data to develop a generic channel model. The use of generative NNs,
**Item** | **Link Model** | **Generative Model (VAE)** | **Generative Model (GAN)**
---|---|---|---
Communication Rounds | N/A | 100 | 100
Epochs | 30 | 5 | 5
Batch Size | 100 | 100 | 100
Learning Rate | \(10^{-3}\) | \(10^{-4}\) | \(10^{-4}\)
Optimizer | Adam | Adam | Adam

TABLE II: Hyperparameter settings for the link model and the federated VAE and CGAN models
**Model** | **Number of Inputs** | **Hidden Units** | **Number of Outputs** | **Number of Parameters**
---|---|---|---|---
Link Model | 5 | [25, 10] | 3 | 1,653
VAE (Enc) | 125 | [200, 80] | 40 | 44,520
VAE (Dec) | 25 | [80, 200] | 240 | 40,720
GAN (Disc) | 125 | [1120, 560, 280] | 40 | 1,055,761
GAN (Gen) | 25 | [280, 560, 1120] | 240 | 1,094,360

TABLE I: Model summary of the link model, path model (VAE) and CGAN
especially GANs and VAEs, is a suitable method for statistical channel modeling in complex scenarios. Although both models are capable of capturing data dependencies, our results show that the proposed FL-GAN approach outperforms the FL-VAE and centralized baseline methods in terms of accurately learning the path loss parameters. We validate our results with various statistical metrics, and the resulting model shows effective learning and interesting non-obvious predictions.
|
2310.16415 | Dynamic Fabry-Perot cavity stabilization technique for atom-cavity
experiments | We present a stabilization technique developed to lock and dynamically tune
the resonant frequency of a moderate finesse Fabry-P\'erot (FP) cavity used in
precision atom-cavity quantum electrodynamics (QED) experiments. Most
experimental setups with active stabilization either operate at one fixed
resonant frequency or use transfer cavities to achieve the ability to tune the
resonant frequency of the cavity. In this work, we present a simple and
cost-effective solution to actively stabilize an optical cavity while achieving
a dynamic tuning range of over 100 MHz with a precision under 1 MHz. Our unique
scheme uses a reference laser locked to an electro-optic modulator (EOM)
shifted saturation absorption spectroscopy (SAS) signal. The cavity is locked
to the PDH error signal obtained from the dip in the reflected intensity of
this reference laser. Our setup provides the feature to efficiently tune the
resonant frequency of the cavity by only changing the EOM drive without
unlocking and re-locking either the reference laser or the cavity. We present
measurements of precision control of the resonant cavity frequency and vacuum
Rabi splitting (VRS) to quantify the stability achieved and hence show that
this technique is suitable for a variety of cavity QED experiments. | S. P. Dinesh, V. R. Thakar, V. I. Gokul, Arun Bahuleyan, S. A. Rangwala | 2023-10-25T07:04:41Z | http://arxiv.org/abs/2310.16415v1 | # Dynamic Fabry-Perot cavity stabilization technique for atom-cavity experiments
###### Abstract
We present a stabilization technique developed to lock and dynamically tune the resonant frequency of a moderate finesse Fabry-Perot (FP) cavity used in precision atom-cavity quantum electrodynamics (QED) experiments. Most experimental setups with active stabilization either operate at one fixed resonant frequency or use transfer cavities to achieve the ability to tune the resonant frequency of the cavity. In this work, we present a simple and cost-effective solution to actively stabilize an optical cavity while achieving a dynamic tuning range of over 100 MHz with a precision under 1 MHz. Our unique scheme uses a reference laser locked to an electro-optic modulator (EOM) shifted saturation absorption spectroscopy (SAS) signal. The cavity is locked to the PDH error signal obtained from the dip in the reflected intensity of this reference laser. Our setup provides the feature to efficiently tune the resonant frequency of the cavity by only changing the EOM drive without unlocking and re-locking either the reference laser or the cavity. We present measurements of precision control of the resonant cavity frequency and vacuum Rabi splitting (VRS) to quantify the stability achieved and hence show that this technique is suitable for a variety of cavity QED experiments.
**Keywords: Fabry-Perot cavity, Cavity stabilization, Cavity QED.**
## 1 Introduction
Cavities find applications in quantum electrodynamics (QED) [1, 2, 3, 4], precision spectroscopy [5, 6, 7, 8] all the way to gravitational wave detection [9, 10], to give a few diverse
examples of the versatility of these systems. Several experiments study a multitude of physical phenomena originating from a combined system containing an optical cavity and an intra-cavity single atom or an ensemble of atoms [11, 12, 13, 14, 15, 16, 17]. In these experiments, the optical cavity can be tuned to a desired frequency close to the atomic transition, and an ensemble of atoms can be cooled and confined to have spatial overlap with the mode defined by the geometric properties of the cavity, thus coupling the atoms to the cavity. In our experimental setup, we study the effects of interactions between an ultracold dilute gas of atoms in a magneto-optical trap (MOT) and a resonant, moderate-finesse Fabry-Perot (FP) cavity [18, 19, 20, 21], in which a weak probe beam along the cavity axis is typically scanned over several tens of megahertz within a few milliseconds. For effective measurements at low signal levels, the experiment needs multiple repetitions. To achieve this, the resonant condition of the cavity needs to be held constant with respect to the cavity-mode-coupled atoms.
To perform these measurements efficiently, it is necessary to either actively or passively stabilize the FP cavity to compensate for any drifts in the length of the cavity due to temperature fluctuations, acoustic noise, etc. Passive stabilization is achieved by using ultralow thermal expansion materials and vibration-resistant mounting designs to build the cavity [22, 23, 24, 25, 26, 27, 28] and active stabilization techniques use feedback loops to maintain the resonant frequency of the cavity [29, 30, 31, 32, 33]. The existing active stabilization techniques use transfer cavities to shift the lock point and hence achieve the dynamic tuning property. In this paper, we propose a Pound-Drever-Hall (PDH) [34, 35, 36] based active stabilization scheme of the cavity that uses only two lasers. The FP cavity is locked to a reference laser which is in turn locked to an electro-optic modulator (EOM) shifted, saturation absorption spectroscopy (SAS) signal. A feature of our setup allows us to tune the resonant frequency of the cavity over a range of over a hundred MHz with a sensitivity of 1MHz by introducing a shift in the reference laser frequency without unlocking either the cavity or the reference laser. Thus, a locking technique that can be dynamically tuned over a wide frequency range can be implemented in experiments where we want to study the dependence of light scattered by the coupled atom-cavity system as a function of the detuning between the atomic transition and cavity frequency. Our main objective in presenting this work is to show how one can develop a relatively low investment and robust lock setup that bypasses using the transfer cavity to achieve the dynamic tuning ability. In this paper, we describe, the general experimental setup, the technique implemented to tune and stabilize the cavity, the configuration of the reference laser, the setup used to generate the necessary error signal, and the sensitivity, effectiveness, and tuning range of the implemented technique.
## 2 Experimental setup
The six laser cooling beams and a gradient magnetic field form the \({}^{85}\)Rb MOT that confines atoms at the center of the FP cavity formed by two high reflectivity mirrors, as shown in Fig. 3. The Fabry-Perot cavity consists of two identical spherical mirrors of diameter 12.5 mm, radius of curvature \(\approx\) 50 mm, separation of \(\approx\) 45 mm, a beam
waist of \(\approx\) 78 \(\mu\)m and has a linewidth of \(\approx\) 8.8 MHz at 780 nm. The input mirror is attached to an annular piezoelectric transducer (PZT). The 780 nm resonant probe beam (red) and the 767 nm cavity lock beam (blue) are coupled into the cavity so that they are both simultaneously resonant with the cavity mode. On the input side, a quarter-wave plate (QWP) and half-wave plate (HWP) combination in conjunction with two polarization beam splitting cubes (PBS) are used to separate the back-reflected beams from the input mirror of the cavity, and the 767 nm light is captured by the photodiode (PD) to measure the dip in the reflected intensity when the cavity is resonant with the 767 nm light, as shown in Fig. 3. This allows the creation of a feedback signal to the PZT and locks the cavity length as discussed further in section B.
The 780 nm probe light is transmitted through the appropriately tuned locked cavity for probing the atoms. The transmitted light through the cavity is passed through a premium 780 nm bandpass filter (BPF) (Thorlabs-FBH780-10) and detected by the CCD camera and the photomultiplier tube (PMT). The spatial overlap of the cavity mode and the MOT density profile is measured to obtain the number of atoms coupled to the cavity. In our experiment, prominent collective strong coupling effects such as vacuum Rabi splitting (VRS) [4; 37; 38] can be detected when a few thousand atoms are coupled to the cavity. A 780 nm laser referenced to a Rb saturated absorption spectroscopy (SAS) signal is used to probe VRS. This probe laser is incident along the axis of the cavity. As the atoms of interest are \({}^{85}\)Rb, whose 5S\({}_{1/2}\) - 5P\({}_{3/2}\)(D2) transition wavelength is at 780 nm, a separate wavelength has to be used to lock the cavity.
Figure 1: **Schematic of the experiment.** Six laser cooling beams (light brown) create the MOT between the cavity mirrors. The other element shown is the annular piezoelectric transducer (PZT). The description of the operation is in the text.
This laser has to be far off-resonance from \({}^{85}\)Rb D2 transitions to avoid any interaction with the cavity-coupled atoms. The coatings of the cavity restricted the choice of the reference laser to within 780\(\pm\)100 nm. Keeping these conditions in mind, a 767 nm laser is chosen as the reference laser to lock the cavity. A 40 dB premium bandpass filter is placed in the cavity transmission collection setup to ensure that the contribution of the reference laser in the measured transmitted signal is well below the average noise level. The 767 nm laser needs to be locked to a potassium SAS signal since the drift in its frequency changes the resonant frequency of the cavity and impacts the precision of the experiment. This SAS-locked frequency is noted using a wavemeter.
### Scheme of the lock
Initially, the 780 nm probe laser is locked to an independent rubidium SAS setup while the FP cavity and the 767 nm reference laser are kept unlocked. A CCD camera is used to observe the spatial profile of the transmitted light from the cavity. The unlocked cavity is tuned to transmit the TEM\({}_{00}\) mode of the 780 nm probe laser by adjusting the voltage of the cavity PZT. As the cavity frequency is scanned about this TEM\({}_{00}\) mode over a range of 40 MHz, the characteristic Lorentzian peak is observed in the cavity transmission signal. Any major drifts or fluctuations in the cavity length are countered by manually adjusting the cavity PZT voltage in order to ensure that this transmission peak, corresponding to the resonance condition for the TEM\({}_{00}\) mode of the probe laser, would be seen in each scan cycle. While maintaining this transmission peak within the scan range, the frequency of the 767 nm reference laser is changed by gradually varying the laser controller offset. This is done until a dip in the reflected intensity of the reference laser and the corresponding PDH error signal is observed to coincide with the transmission peak of the probe on an oscilloscope. Thus, a condition is experimentally achieved where the cavity can be stabilized using the PDH error signal produced by the 767 nm reference laser while being simultaneously nearly resonant with the TEM\({}_{00}\) mode of the 780 nm probe. At this point, the cavity length is approximately a common multiple of both the wavelengths corresponding to the probe and the reference laser. Let \(L\) be the cavity length, \(c\) the speed of light, \(\nu_{p}\) and \(n_{p}\), respectively, the frequency and the longitudinal eigenmode number corresponding to the TEM\({}_{00}\) mode of the probe laser sustained in the cavity, and \(\nu_{r}\) and \(n_{r}\) the frequency and longitudinal eigenmode number of the mode of the reference laser. Then, while keeping \(\nu_{p}\) and \(n_{p}\) constant, \(\nu_{r}\) and, in turn, \(n_{r}\) are varied until the condition \(L\approx c\,n_{p}/\nu_{p}\approx c\,n_{r}/\nu_{r}\) is satisfied. The reference laser frequency corresponding to this condition is noted on the wavemeter. This frequency is different from the SAS-locked value. This preliminary measurement allows an appropriate choice of the combination of EOM and acousto-optic modulator (AOM) required to introduce the necessary shift in the reference laser frequency from its SAS-locked value. The reference laser is stabilized at the approximately required frequency and the FP cavity is in turn locked to the corresponding PDH error signal. Finally, the reference laser frequency is fine-tuned, without unlocking either the cavity or the laser itself, by adjusting only the EOM drive to maximize the transmitted intensity of the probe, thus achieving the desired condition \(L=c\,n_{p}/\nu_{p}=c\,n_{r}/\nu_{r}\).
### Configuration of the reference laser
A Toptica DL-100 extended cavity diode laser operated at 767 nm, stabilized to potassium SAS is used as the reference to lock the cavity, see Fig. 2 for the schematic of the experimental setup of the reference laser. The 767 nm reference laser is transmitted through a fiber-based EOM to achieve the relative frequency shift between the probe and reference resonances, modulo the cavity-free spectral range (FSR). In an earlier attempt, the EOM output was fed to a potassium SAS setup to obtain three SAS signals. One for the central frequency and one for each sideband generated by the EOM. In this scheme, the 767 nm laser is directly referenced to the SAS signal corresponding to the appropriate sideband.
To avoid this issue in the present scheme the reference laser output from the Toptica DL-100 is divided into two arms. The first arm is passed through a double pass AOM setup driven at 200 MHz to generate a 400 MHz shift in the reference laser frequency. The second arm is passed through the EOM to achieve the remainder of the shift. At this drive frequency, sufficient power can be transferred to the sideband, and a high-quality error signal for the sideband SAS can be obtained to lock the reference
Figure 2: **Setup of 767 nm reference laser**. Schematic representation of the optical circuit of the reference laser. Labels: M mirror, L lens, K potassium, HWP half-wave plate, QWP quarter wave plate, PBS1, PBS2, PBS3, PBS4 are polarizing beam splitters, AOM acousto-optic modulator, EOM1 fiber electro-optic modulator, EOM2 free space electro-optic modulator. The output of DL 100 is passed through a combination of AOM and fiber-based EOM to obtain the desired shift in the frequency. Before the output is taken to the experimental table, it is passed through a free-space EOM to create sidebands for PDH locking.
laser. After the double pass AOM setup, the first arm is further subdivided into two parts, one of which is fed to a wavemeter to monitor the reference laser frequency, and the other is passed through a free space Qubig EOM and coupled into a polarization maintaining single mode optical fiber to transfer it to the experimental table. When the cavity is tuned in resonance with the reference laser, the intensity of reference laser light reflected from the input mirror of the cavity dips.
The free space EOM is driven by the internal oscillator of the PDH module used to lock the cavity to generate the sidebands required to obtain the error signal from this reflection dip. On stabilizing the 767 nm reference laser to the potassium SAS signal, the wavemeter readings showed a standard deviation of \(\approx\) 100 kHz, which is about the least count of the instrument. Since the cavity is made to follow the frequency of the reference laser, the length of the cavity is stabilized at a comparable frequency linewidth.
### Cavity referencing setup
The probe beam and the modulated reference laser are mixed using a half-wave plate and PBS2 as shown in Fig. 3. The output of the PBS is aligned along the axis of
Figure 3: **Schematic of the experimental setup implemented to reference the cavity.** The stabilized reference laser is transferred to the experimental table through an optical fiber. Labels: QWP quarter-wave plate, HWP half-wave plate, PBS1, PBS2 polarizing beam splitters, ML mode-matching lens, BPF premium bandpass filter, M mirror, CCD charge-coupled device, PD photodiode, PMT photomultiplier tube, RFO rf oscillator, LPF low-pass filter.
the cavity. A 150 mm focal length convex lens is used to match the spatial mode profile of the two lasers with that defined by the FP cavity. This lens ensures that the two lasers are maximally coupled to the cavity. A quarter-wave plate is placed close to the window of the vacuum chamber. This quarter-wave plate introduces the shift in polarization required to separate out and measure the reflected intensity of the reference laser on a photodiode. On resonance, the cavity transmits the reference laser and a corresponding dip can be observed in the photodiode signal. The dip is of the order of 1% of the reflected intensity.
The PDH internal oscillator drives the free-space EOM to modulate the laser frequency at 20 MHz to produce sidebands in the reflection dip. Since the cavity linewidth at 767 nm is over 10 MHz, the two sidebands cannot be well resolved from the central feature in the dip. The shape of the error signal can be adjusted to produce a steep slope near resonance by carefully choosing the phase. At low oscillator power, the error signal strength increases with an increase in the amplitude of the PDH internal oscillator. However, as the sidebands are not well separated, a further increase in the amplitude of the internal oscillator would distort the signal. Despite the small amplitude and limited resolution of the dip, a good error signal is generated by optimizing the amplitude and the phase of the PDH internal oscillator, see Fig. 4. The optimized error signal is fed to a proportional, integral, and derivative (PID) module which controls the cavity PZT voltage, thus stabilizing the cavity length. Fig. 4 shows the optimized PDH error signal and the PID signal upon locking the cavity. The 780 nm bandpass filter in the collection setup ensures that, of the two frequencies transmitted by the cavity, only the contribution from the 780 nm probe is measured by the photomultiplier tube (PMT).
Figure 4: **PDH error signal to lock the cavity. Inset is the oscilloscope trace of a locked signal.** Cavity is scanned across the resonance of the 767 nm reference laser to obtain the standard PDH error signal (red curve) from the dip in the reflected intensity. The inset graph (blue curve) shows the oscilloscope trace of a locked signal.
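For reference, a minimal numerical sketch of the standard PDH error signal for a two-mirror cavity (the imaginary demodulation quadrature, valid when the sidebands are well resolved; in the experiment the 20 MHz sidebands are only partially resolved and the demodulation phase is tuned empirically). The cavity length, linewidth, and modulation frequency below are the nominal values quoted in the text, and the mirror reflectivity is a rough estimate derived from them.

```python
import numpy as np

def cavity_reflection(delta_hz, fsr_hz, r):
    """Reflection coefficient of a symmetric, lossless FP cavity vs. detuning."""
    phi = 2.0 * np.pi * delta_hz / fsr_hz          # round-trip phase
    return r * (np.exp(1j * phi) - 1.0) / (1.0 - r**2 * np.exp(1j * phi))

def pdh_error(delta_hz, fsr_hz, r, f_mod_hz):
    """Standard PDH error signal (imaginary quadrature) for sidebands at f_mod."""
    F0 = cavity_reflection(delta_hz, fsr_hz, r)
    Fp = cavity_reflection(delta_hz + f_mod_hz, fsr_hz, r)
    Fm = cavity_reflection(delta_hz - f_mod_hz, fsr_hz, r)
    return np.imag(F0 * np.conj(Fp) - np.conj(F0) * Fm)

# Illustrative numbers: FSR = c/2L for L ~ 45 mm, linewidth ~ 8.8 MHz,
# and 20 MHz modulation as used for the 767 nm lock.
fsr = 3e8 / (2 * 0.045)                 # ~3.33 GHz
finesse = fsr / 8.8e6
r = np.sqrt(1 - np.pi / finesse)        # rough amplitude reflectivity estimate
delta = np.linspace(-40e6, 40e6, 2001)  # detuning scan around resonance
err = pdh_error(delta, fsr, r, 20e6)    # steep, dispersive slope near delta = 0
```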
## 3 Sensitivity and tuning ability of the lock
Following the extensive setup, an experiment is conducted to measure the intensity of the probe transmitted through the empty cavity as a function of reference laser frequency. The probe laser is locked to the F=3 to F\({}^{\prime}\)=3 D2 transition of \({}^{85}\)Rb. The reference laser is locked to the appropriate sideband SAS signal and the cavity is locked to the error signal of the reflection dip. The drive frequency of the fiber-based EOM (Eospace) is changed in steps of 1 MHz by monitoring the wavemeter reading of the reference laser frequency. This effectively changes the resonant frequency of the cavity in steps of 1 MHz. It must be noted that since \(\Delta\nu_{p}=\Delta\nu_{r}\cdot n_{p}/n_{r}\), where \(n_{p}/n_{r}\neq 1\), a change of 1 MHz in the 767 nm reference laser frequency will not translate to a 1 MHz change in the resonant frequency of the 780 nm probe. The intensity of the TEM\({}_{00}\) mode of the 780 nm probe transmitted by the cavity is measured at each step for 10 seconds. The graph in Fig. 5 shows the average intensity and the standard deviation versus the resonant frequency of the cavity. The ability to resolve the change in average intensity for 1 MHz steps and the flexibility to tune the cavity over a range of 100 MHz without unlocking and relocking any of the components demonstrate the precision, sensitivity, and tunability of the cavity lock.
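A small numerical illustration of the scaling \(\Delta\nu_{p}=\Delta\nu_{r}\cdot n_{p}/n_{r}\): since both lasers are resonant with the same cavity, the ratio of mode numbers equals the ratio of optical frequencies, independent of the exact cavity length.

```python
# Scaling between a reference-laser shift and the resulting shift of the
# probe resonance: with both lasers resonant on the same cavity,
# n_p / nu_p = n_r / nu_r, so n_p / n_r = nu_p / nu_r.
c = 299792458.0                      # m/s
nu_p = c / 780e-9                    # 780 nm probe, ~384.3 THz
nu_r = c / 767e-9                    # 767 nm reference, ~390.9 THz
ratio = nu_p / nu_r                  # = n_p / n_r ~ 0.983

delta_ref = 1e6                      # a 1 MHz step of the reference laser
delta_probe = delta_ref * ratio      # ~0.983 MHz shift of the probe resonance
print(f"{delta_probe / 1e6:.3f} MHz")
```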
To demonstrate the utility of the cavity stabilization mechanism in cavity QED experiments, we present a comparison of symmetry and precision in measuring VRS in our atom-cavity experiment before and after the implementation of the locking scheme. In the collective strong coupling regime, the split between the two normal modes of VRS is given by \(g_{0}\sqrt{N_{c}}\), where \(g_{0}\) is the atom-cavity interaction strength for a single atom and \(N_{c}\) is the effective number of atoms coupled to the cavity.
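As a quick illustration of how the splitting translates into an atom number, a sketch with purely placeholder values for the splitting and the single-atom coupling \(g_{0}\) (neither is the experimental value):

```python
def coupled_atom_number(vrs_split_hz, g0_hz):
    """N_c from the collective splitting: split = g0 * sqrt(N_c)."""
    return (vrs_split_hz / g0_hz) ** 2

# Purely illustrative numbers (not the experimental values): a 10 MHz
# splitting with a single-atom coupling of 150 kHz implies roughly
# (10e6 / 150e3)**2 ~ 4.4e3 coupled atoms.
print(coupled_atom_number(10e6, 150e3))
```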
Figure 5: **Transmitted intensity of probe versus reference laser detuning.** The plot shows the intensity of transmitted 780 nm probe light from the cavity versus wavemeter reading reference laser frequency. In this measurement, the cavity is locked to the reference laser, and the transmission of the probe is measured for various detunings of the reference laser for approximately two seconds. Each black dot represents the mean and the error bar is the standard deviation of the transmitted intensity at a given detuning. The value of the error bar is observed to increase near zero detuning. This is because of the Lorentzian nature of the transmission of light through an FP cavity. The reference laser frequency is changed in steps and the intensity of the transmitted probe light is measured.
We observe a double-peak structure characteristic of VRS on probing at the F=3 to F\({}^{\prime}\)=3 transition of \({}^{85}\)Rb D2, using weak light incident along the axis of the cavity [18]. The F=3 to F\({}^{\prime}\)=4 transition of \({}^{85}\)Rb D2 is used as the cooling transition for our MOT, and the cooling laser needs to be turned off to ensure that the maximum number of atoms is prepared in the ground state of the probing transition. The MOT isotropically expands with a time constant of the order of a few milliseconds. Therefore, the probe is scanned within a millisecond, demanding a minimum scan rate of the order of 1 kHz. This demands a higher bandwidth PMT pre-amplifier. In the higher bandwidth setting, the pre-amplifier generates higher noise, thus reducing the signal-to-noise ratio (SNR) in the transmission. To improve the SNR, the transmitted signal has to be averaged over several cycles. Before implementing the cavity stabilizing mechanism, small drifts and fluctuations in the cavity length during the several consecutive cycles would make the two peaks of the VRS asymmetric, as seen in Fig. 6a, 6b, and 6c. Once the cavity is stabilized using the reference laser, an average over 100 cycles can be successfully performed and we get a symmetric VRS as shown in Fig. 6d, which can be used to estimate the number of atoms coupled to the cavity with much higher reliability.
Figure 6: **VRS of a locked versus an unlocked cavity.** The plot shows the averaged transmitted intensity of the probe versus detuning of the probe light from the F=3-3\({}^{\prime}\) transition of \({}^{85}\)Rb under the collective strong coupling condition. The two-peak VRS structure is obtained by scanning the probe laser around the atomic transition. Fig (a), Fig (b), and Fig (c) are plotted by taking the averages of the data from measurement cycles \(0-10\), \(45-55\), and \(90-100\), respectively. The symmetric VRS peak shown in Fig (d) is obtained when active stabilization of the cavity length is in place. In the unlocked case, the resonant frequency of the optical cavity changes during the measurement due to drifts and fluctuations in the length; as a result, the expected symmetry of the double-peak VRS structure is lost.
It can be clearly observed that the split in the VRS changes with time in the absence of the cavity lock due to cavity drifts, while it remains unchanged with the cavity lock in place as shown in Fig. 7. This demonstrates the efficiency of our FP cavity stabilization technique.
## 4 Conclusion
In this paper, we have discussed in detail the experimental setup and the techniques used to stabilize a Fabry-Perot cavity. We have demonstrated the ability to dynamically tune the resonant cavity frequency over a range of 100 MHz with under 1 MHz precision. The techniques presented here are suitable for a variety of cavity QED experiments. They are especially applicable in studying the interaction between a detuned cavity and an ultra-cold dilute gas of atoms [20]. Such a lock strategy can be made compact and portable for future quantum electrodynamics experiments.
## Acknowledgments
The authors would like to thank the Department of Science and Technology and Ministry of Electronics and Information Technology (MeitY), Government of India, under a Centre for Excellence in Quantum Technologies grant with Ref. No. 4(7)/2020-ITEA. The authors would also like to acknowledge the inputs from Dr. Arijit Sharma, Mohamed Ibrahim, and Meena M S.
Figure 7: **The plot (a) shows the drift of the unlocked cavity with time and (b) is the measured VRS split of an unlocked cavity versus a locked cavity.** We allow the cavity to drift while the probe laser is continuously scanned across the resonant transition to obtain the VRS split at various times; each dot is the VRS split averaged over 10 such scans. In the unlocked case, the fluctuations in the length of the cavity cause changes in the resonant frequency, and this introduces an unequal detuning between the atom and the cavity during the measurement cycle. In Fig. 7a we show the drift of the left and right peaks of the double-peak structured VRS: the solid red plot shows the progression of the left peak without the lock, the solid blue is the left peak with the cavity lock, and the dotted red and blue show the progression of the right peak without and with the cavity lock, respectively. It can be observed that the measured frequencies for the blue lines do not drift with time. In Fig. 7b we show the VRS split observed for the unlocked (blue plot) and locked (red plot) cases. A consistent VRS split, with a standard deviation of 0.83 MHz, is observed when the active lock on the cavity is in place. The red dots are experimental data, the black solid line is the mean, and the light red band shows the standard deviation. |
2308.14048 | A Bayesian Non-parametric Approach to Generative Models: Integrating
Variational Autoencoder and Generative Adversarial Networks using Wasserstein
and Maximum Mean Discrepancy | Generative models have emerged as a promising technique for producing
high-quality images that are indistinguishable from real images. Generative
adversarial networks (GANs) and variational autoencoders (VAEs) are two of the
most prominent and widely studied generative models. GANs have demonstrated
excellent performance in generating sharp realistic images and VAEs have shown
strong abilities to generate diverse images. However, GANs suffer from ignoring
a large portion of the possible output space which does not represent the full
diversity of the target distribution, and VAEs tend to produce blurry images.
To fully capitalize on the strengths of both models while mitigating their
weaknesses, we employ a Bayesian non-parametric (BNP) approach to merge GANs
and VAEs. Our procedure incorporates both Wasserstein and maximum mean
discrepancy (MMD) measures in the loss function to enable effective learning of
the latent space and generate diverse and high-quality samples. By fusing the
discriminative power of GANs with the reconstruction capabilities of VAEs, our
novel model achieves superior performance in various generative tasks, such as
anomaly detection and data augmentation. Furthermore, we enhance the model's
capability by employing an extra generator in the code space, which enables us
to explore areas of the code space that the VAE might have overlooked. With a
BNP perspective, we can model the data distribution using an
infinite-dimensional space, which provides greater flexibility in the model and
reduces the risk of overfitting. By utilizing this framework, we can enhance
the performance of both GANs and VAEs to create a more robust generative model
suitable for various applications. | Forough Fazeli-Asl, Michael Minyi Zhang | 2023-08-27T08:58:31Z | http://arxiv.org/abs/2308.14048v1 | A Bayesian Non-parametric Approach to Generative Models: Integrating Variational Autoencoder and Generative Adversarial Networks using Wasserstein and Maximum Mean Discrepancy
###### Abstract
Generative models have emerged as a promising technique for producing high-quality images that are indistinguishable from real images. Generative adversarial networks (GANs) and variational autoencoders (VAEs) are two of the most prominent and widely studied generative models. GANs have demonstrated excellent performance in generating sharp realistic images and VAEs have shown strong abilities to generate diverse images. However, GANs suffer from ignoring a large portion of the possible output space which does not represent the full diversity of the target distribution, and VAEs tend to produce blurry images. To fully capitalize on the strengths of both models while mitigating their weaknesses, we employ a Bayesian non-parametric (BNP) approach to merge GANs and VAEs. Our procedure incorporates both Wasserstein and maximum mean discrepancy (MMD) measures in the loss function to enable effective learning of the latent space and generate diverse and high-quality samples. By fusing the discriminative power of GANs with the reconstruction capabilities of VAEs, our novel model achieves superior performance in various generative tasks, such as anomaly detection and data augmentation. Furthermore, we enhance the model's capability by employing an extra generator in the code space, which enables us to explore areas of the code space that the VAE might have overlooked. With a BNP perspective, we can model the data distribution using an infinite-dimensional space, which provides greater flexibility in the model and reduces the risk of overfitting. By utilizing this framework, we can enhance the performance of both GANs and VAEs to create a more robust generative model suitable for various applications.
Dirichlet process, generative models, variational autoencoders, computational methods.
## 1 Introduction
The original GAN, also known as Vanilla GAN, was introduced by Goodfellow et al. (2014), and since then, various types of adversarially trained generative models have been developed. These models have exhibited outstanding performance in creating sharp and realistic images by training two neural networks concurrently, one for generating images and the other for
discriminating between authentic and counterfeit images. These networks are trained in an adversarial manner until the discriminator cannot distinguish between real and fake samples.
Despite GANs being a powerful class of deep-learning models, they still face some notable challenges including mode collapse and training instability. The mode collapse issue in GANs occurs when the generator starts to memorize the training data instead of learning the underlying patterns of the data. This results in the generator becoming too specialized in generating the same samples repeatedly, which leads to a lack of diversity in the generated samples. Instability in GANs occurs when the generator and the discriminator are incapable of converging to a stable equilibrium (Kodali et al., 2017). A significant factor contributing to these issues is the tendency of the gradients used to update network parameters to become exceedingly small during the training process, causing the vanishing gradient that leads to a slowdown or even prevention of learning (Arjovsky and Bottou, 2017).
Unlike GANs, VAEs use a probabilistic approach to encode and decode data, enabling them to learn the underlying distribution and generate diverse samples, albeit with some blurriness. Consequently, the integration of GAN and VAE models has garnered significant attention as an intriguing idea for generating high-quality and realistic datasets. This approach allows for the full exploitation of the strengths of both generative models while mitigating their shortcomings.
In spite of the availability of numerous frequentist generative models, developing a BNP-based procedure poses a considerable challenge. Frequentist GANs, in particular, assume a specific form for the distribution of the data, which can lead to overfitting. This means that the generator may fit the training data too closely and not be able to generalize well to new data. In contrast, BNP methods can reduce overfitting in GANs by permitting the generator to adapt to the complexity of the data without overfitting to a pre-specified distribution. BNP models use stochastic priors that can accommodate infinitely many parameters, allowing them to capture complex patterns in the data and provide more accurate and reliable results.
Recently, there has been an attempt to train GANs within the BNP framework, proposed in Fazeli-Asl et al. (2023). The authors identified the limitations of using some frequentist techniques in training GANs. Instead, they placed a Dirichlet process (DP) prior on the data distribution and proposed a semi-BNP MMD two-sample test to train the GAN generator. The effectiveness and superiority of the proposed test were evaluated on various benchmark datasets and compared to the classical MMD test used in Li et al. (2015). The results demonstrated the statistical power and reduced false positive rate of the proposed test. Furthermore, the samples generated by the semi-BNP MMD GAN were shown to be superior to those of its frequentist counterpart. To the best of our knowledge, this is one of the few BNP works in this area, and thus it represents a significant contribution to the field.
This paper proposes a novel hybrid generative model that integrates a GAN, a VAE, and a code generator from the BNP perspective. The GAN serves as the primary component of our model, while the VAE and the code generator are included to enhance its capabilities. Specifically, we develop this model to extend the capability of semi-BNP MMD GAN in generating high-resolution medical datasets. To achieve this, we first develop a stochastic representation of the Wasserstein distance using DP inferences. This allows us to estimate
the distance between a random probability measure and a fixed distribution, which we then incorporate into the semi-BNP MMD loss function. By considering both the Wasserstein and MMD loss functions, our proposed model benefits from both overall distribution comparison and feature matching techniques, leading to reduced mode collapse and improved training outcomes.
To ensure training stability, we include a gradient penalty term to the generator loss, following the approach proposed by Gulrajani et al. (2017). Additionally, to generate diverse samples, we replace the GAN generator with the decoder of a VAE model. Furthermore, we employ an additional generator in the code space to generate more sample codes. The code GAN serves to complement the VAE by exploring untapped areas of the code space, ensuring a more comprehensive coverage and avoiding mode collapse. Overall, our proposed approach effectively enhances the quality and visualization of the generated outputs, making it suitable for high-resolution medical image generation.
The structure of the paper is organized as follows: In Section 2, we provide an overview of the background materials in GANs, VAEs, and BNP. Next, in Section 3, we introduce a probabilistic method for calculating the Wasserstein distance using a DP prior. Then, in Section 4, we introduce our BNP-based model for integrating a GAN and a VAE. Afterwards, in Section 5, we provide experimental results from our novel generative model. Lastly, we conclude our paper in Section 6 and provide some new directions based on the research presented in this paper.
## 2 Background Materials
### Vanilla GAN
The GAN can be mathematically represented as a minimax game between a generator network \(Gen_{\mathbf{\omega}}\), which maps from the latent space \(\mathbb{R}^{p}\) to the data space \(\mathbb{R}^{d}\), \(p<d\), and a discriminator network \(Dis_{\mathbf{\theta}}\), which maps from the data space to the interval \([0,1]\). Specifically, the discriminator's output represents how likely it is that a given sample was drawn from the true data distribution. In the vanilla GAN, the generator tries to minimize the probability that the discriminator correctly identifies the fake sample, while the discriminator tries to maximize the probability of correctly identifying real and fake samples (Goodfellow et al., 2014). For a real sample \(\mathbf{X}\) from the data distribution \(F\), this can be expressed as the objective function
\[\arg\min_{\mathbf{\omega}}\max_{\mathbf{\theta}}\mathcal{L}(Gen_{\mathbf{\omega}},Dis_{\bm {\theta}}),\]
where \(\mathcal{L}(Gen_{\mathbf{\omega}},Dis_{\mathbf{\theta}})=E_{F}[\ln(Dis_{\mathbf{\theta}}( \mathbf{X}))]+E_{F_{\mathbf{Z}}}[\ln(1-Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}( \mathbf{Z})))]\), \(F_{\mathbf{Z}}\) is the distribution of the noise vector \(\mathbf{Z}\), and \(\ln(\cdot)\) denotes the natural logarithm. Throughout the paper, it is assumed that \(F_{\mathbf{Z}}\) follows a standard Gaussian distribution.
GAN models are frequently customized by modifying the generator loss, \(\mathcal{L}_{\text{Gen}}(\mathbf{\omega})\), and discriminator loss, \(\mathcal{L}_{\text{Dis}}(\mathbf{\theta})\), functions. To facilitate fair comparisons among different GAN models, we reformulate the vanilla GAN objective function as a sum of \(\mathcal{L}_{\text{Gen}}(\mathbf{\omega})\) and \(\mathcal{L}_{\text{Dis}}(\mathbf{\theta})\) by defining
\[\mathcal{L}_{\text{Gen}}(\mathbf{\omega}) =E_{F_{\mathbf{Z}}}[\ln(1-Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}( \mathbf{Z})))], \tag{1}\] \[\mathcal{L}_{\text{Dis}}(\mathbf{\theta}) =-E_{F}[\ln(Dis_{\mathbf{\theta}}(\mathbf{X}))]-E_{F_{\mathbf{Z}}}[ \ln(1-Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}(\mathbf{Z})))], \tag{2}\]
respectively. Now, the vanilla GAN is trained by iteratively updating \(\mathbf{\omega}\) and \(\mathbf{\theta}\) using stochastic gradient descent to minimize the loss functions (1) and (2), respectively. The GAN is considered to have learned when the generator can produce samples that are indistinguishable from the real samples, and the discriminator cannot differentiate between them.
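For illustration, a minimal PyTorch-style sketch of the losses (1) and (2) is given below; the generator and discriminator modules, the noise distribution, and the small constant added inside the logarithms are assumptions made only for this sketch.

```python
import torch

def generator_loss(gen, dis, z):
    # Equation (1): E_{F_Z}[ ln(1 - Dis(Gen(Z))) ]
    fake = gen(z)
    return torch.log(1.0 - dis(fake) + 1e-8).mean()

def discriminator_loss(gen, dis, x_real, z):
    # Equation (2): -E_F[ ln Dis(X) ] - E_{F_Z}[ ln(1 - Dis(Gen(Z))) ]
    fake = gen(z).detach()                      # do not backpropagate into the generator
    loss_real = -torch.log(dis(x_real) + 1e-8).mean()
    loss_fake = -torch.log(1.0 - dis(fake) + 1e-8).mean()
    return loss_real + loss_fake
```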
### Mode Collapse in GANs
To mitigate mode collapse and enhance the stability of GANs, Salimans et al. (2016) proposed using a batch normalization technique to normalize the output of each generator layer, which can help reduce the impact of vanishing gradients. The authors also implemented mini-batch discrimination, an additional technique to diversify the generated output. This involves computing a pairwise distance matrix among examples within a mini-batch. This matrix is then used to augment the input data before being fed into the model.
Another strategy, widely suggested in the literature to overcome GAN limitations, is incorporating statistical distances into the GAN loss function. Arjovsky et al. (2017) suggested updating GAN parameters by minimizing the Wasserstein distance between the distributions of the real and fake data (WGAN). They noted that this distance possesses a superior property compared to other measures such as the Kullback-Leibler, Jensen-Shannon, and total variation measures. This is due to its ability to serve as a sensible loss function for learning distributions supported by low-dimensional manifolds. Arjovsky et al. (2017) used the weight clipping technique to constrain the discriminator to be 1-Lipschitz. This condition ensures that the discriminator's weights are bounded, preventing the discriminator from becoming too powerful and overwhelming the generator. Additionally, this technique helped to ensure that the gradients of the discriminator remained bounded, which is crucial for the stability of the overall training process.
However, weight clipping has some drawbacks. For instance, Gulrajani et al. (2017) noted that it may limit the capacity of the discriminator, which can prevent it from learning complex functions. Moreover, it can result in a "dead zone" where some of the discriminator's outputs are not used, which can lead to inefficiencies in training. To address these issues, Gulrajani et al. (2017) proposed to force the 1-Lipschitz constraint on the discriminator in an alternative way. They improved WGAN using a gradient penalty term in the loss function to present the WGPGAN model. They showed that it helps to avoid mode collapse and makes the training process more stable.
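As an illustration, a minimal PyTorch-style sketch of the gradient-penalty term of Gulrajani et al. (2017) is given below; the discriminator module and the shapes of the real and fake batches are assumptions of this sketch.

```python
import torch

def gradient_penalty(dis, x_real, x_fake):
    # Interpolate between real and fake samples: x_hat = t*x + (1 - t)*x_tilde
    t = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
    x_hat = (t * x_real + (1.0 - t) * x_fake).requires_grad_(True)
    d_hat = dis(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True, retain_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    # Penalize deviations of the gradient norm from 1 (the 1-Lipschitz constraint)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```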
Instead of comparing the overall distribution of the data and the generator, Salimans et al. (2016) remarked on adopting the feature matching technique as a stronger method to prevent the mode collapse in GANs and make them more stable. In this strategy, the discriminator is a two-sample test and the generator is trained to deceive the discriminator by producing images that match the features of real images, rather than just assessing the overall distribution of the data. Dziugaite et al. (2015) and Li et al. (2015) have demonstrated remarkable results by implementing this procedure using the MMD measure. Furthermore, Li et al. (2015) employed an autoencoder (AE) network to train an MMD-based GAN in the code space referred to as AE generative moment matching networks (AE+GMMNs). AE networks use deterministic mapping to compress data into a lower dimension (code space) that captures essential features of the original data. Li et al. (2015)
attempted to generate code samples and reconstruct them in the data space to enhance the performance of their model. Their experiments showed that this approach led to a considerable reduction in noise in the generated samples compared to using MMD to train GAN in the data space.
### Standard VAE
The VAE consists of an encoder network \(Enc_{\mathbf{\eta}}\) that maps the input data \(\mathbf{X}\sim F\) to a latent representation \(\mathbf{Z}_{e}\), and a decoder network \(Dec_{\mathbf{\gamma}}\) that reconstructs \(\mathbf{Z}_{e}\) in the data space (Kingma and Welling, 2013). It uses a hierarchical distribution to model the underlying distribution of the data. More precisely, a prior distribution \(F_{\mathbf{Z}}\) is first placed on the latent space, \(\mathbf{Z}\sim F_{\mathbf{Z}}\), to specify the distribution of the encoder (variational distribution), \(\mathbf{Z}_{e}:=\mathbf{Z}|\mathbf{X}\sim F_{Enc_{\mathbf{\eta}}}\), and the distribution of the decoder, \(\mathbf{X}|\mathbf{Z}\sim F_{Dec_{\mathbf{\gamma}}}\), via the reparametrization trick. Then, the intractable data likelihood is approximated by maximizing the marginal log-likelihood:
\[\log f_{Dec_{\mathbf{\gamma}}}(\mathbf{x})=\log\int f_{Dec_{\mathbf{\gamma}}}(\mathbf{ x}|\mathbf{z})f_{\mathbf{Z}}(\mathbf{z})\,d\mathbf{z}, \tag{3}\]
where \(f_{Dec_{\mathbf{\gamma}}}\) represents the density function corresponding to \(F_{Dec_{\mathbf{\gamma}}}\). It can be shown that maximizing (3) is equivalent to minimizing:
\[\mathcal{L}_{\text{VAE}}(\mathbf{\eta},\mathbf{\gamma}) =D_{\text{KL}}\left(f_{Enc_{\mathbf{\eta}}}(\mathbf{z}|\mathbf{x}),f_{ \mathbf{Z}}(\mathbf{z})\right)-E_{F_{Enc_{\mathbf{\eta}}}(\mathbf{z}|\mathbf{x})}\left(\log f _{Dec_{\mathbf{\gamma}}}(\mathbf{x}|\mathbf{z})\right)\] \[=\mathcal{L}_{\text{Reg}}+\mathcal{L}_{\text{Rec}} \tag{4}\]
with respect to \(\mathbf{\eta}\) and \(\mathbf{\gamma}\). Here, \(D_{\text{KL}}(\cdot,\cdot)\) denotes Kullback-Leibler divergence, and \(\mathcal{L}_{\text{Reg}}\) and \(\mathcal{L}_{\text{Rec}}\) represent the regularization and reconstruction errors, respectively. In fact, \(\mathcal{L}_{\text{Rec}}\) is the cross-entropy that measures how well the model can reconstruct the input data from the latent space, while \(\mathcal{L}_{\text{Reg}}\) encourages the approximate posterior to be close to the prior distribution over the latent space.
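For concreteness, a minimal PyTorch-style sketch of the loss in Equation (4) is shown below under two common assumptions that are not stated above: a Gaussian encoder with outputs `mu` and `logvar`, and a Bernoulli decoder (so the reconstruction term becomes a binary cross-entropy).

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    # L_Rec: negative log-likelihood of x under the decoder
    # (Bernoulli decoder assumed here, hence binary cross-entropy)
    rec = F.binary_cross_entropy(x_recon, x, reduction='sum')
    # L_Reg: KL( N(mu, diag(exp(logvar))) || N(0, I) ) in closed form
    reg = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return rec + reg
```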
Although the latent space in VAEs is a powerful tool for learning the underlying structure of data, it can face limitations in its capacity to capture the complex features of an input image. When the latent space is not able to fully represent all of the intricate details of an image, the resulting reconstructions can be less accurate and lead to unclear outputs. VAEs tend to distribute probability mass diffusely over the data space, which increases their tendency to generate blurry images, as pointed out by Theis et al. (2015). To mitigate the blurriness issue in VAEs, researchers have proposed various modifications such as including an adversarial loss (Makhzani et al., 2015; Mescheder et al., 2017) in the VAE objective, improving the encoder and decoder network architectures (Yang et al., 2017; Kingma et al., 2016), and using denoising techniques (Im et al., 2017; Creswell and Bharath, 2018). However, the methods mentioned earlier still produce images that exhibit a degree of blurriness.
Meanwhile, the idea of integrating GANs and VAEs was first suggested by Larsen et al. (2016), using the decoder in the VAE as the generator in the GAN. This model, known as the VAE-GAN, provides an alternative approach to addressing the challenges previously mentioned. The paper demonstrates the effectiveness of the VAE-GAN model on several benchmark datasets, showing that it outperforms other unsupervised learning methods in
terms of sample quality and diversity. Donahue et al. (2016) and Dumoulin et al. (2016) independently proposed another similar approach in the same manner as in the paper by Larsen et al. (2016). However, their method incorporated a discriminator that not only distinguished between real and fake samples but also jointly compared the real and code samples (encoder output) with the fake and noise (generator input) samples, which sets it apart from the VAE-GAN model.
Another stronger method was proposed by Rosca et al. (2017) called \(\alpha\)-GAN. In this approach, the decoder in a VAE was also replaced with the generator of a GAN. Two discriminator networks were then used to optimize the reconstruction and regularization errors of the VAE adversarially. Moreover, a zero-mean Laplace distribution was assigned to the reconstruction data distribution to add an extra term for reconstruction error. This term was considered to provide weights to all parts of the model outputs. Several proxy metrics were employed for evaluating \(\alpha\)-GAN models. The findings revealed that the WGPGAN is a robust competitor to the \(\alpha\)-GAN and can even surpass it in certain scenarios. Recently, Kwon et al. (2019) proposed a 3D GAN by extending \(\alpha\)-GAN for 3D generations. The authors addressed the stability issues of \(\alpha\)-GAN and proposed a new hybrid generative model, 3D \(\alpha\)-WGPGAN, which employs WGPGAN loss to the \(\alpha\)-GAN loss to enhance training stability. They validated the effectiveness of the 3D \(\alpha\)-WGPGAN on some 3D MRI brain datasets, outperforming all previously mentioned models.
### 3d \(\alpha\)-Wgpgan
When \(f_{Dec_{\mathbf{\gamma}}}(\mathbf{x}|\mathbf{z})\) in (4) is unknown, \(\mathcal{L}_{\text{Rec}}\) cannot be employed directly in the training process. One way is assigning a specific distribution to \(f_{Dec_{\mathbf{\gamma}}}(\mathbf{x}|\mathbf{z})\), like the Laplace distribution which is a common choice in many VAE-based procedures, and then minimize \(\mathcal{L}_{\text{Rec}}\)(Ulyanov et al., 2018). However, this procedure can increase subjective constraints in the model (Rosca et al., 2017). An alternative method in approximating \(f_{Dec_{\mathbf{\gamma}}}(\mathbf{x}|\mathbf{z})\) is to treat \(Dec_{\mathbf{\gamma}}\) as the generator of a GAN to train the decoder by playing an adversarial game with the GAN discriminator. It guarantees the available training data is fully explored through the training process, thereby preventing mode collapse.
The 3D \(\alpha\)-WGPGAN uses both of the above structures to present a model for generating new samples (Kwon et al., 2019). It also avoids using \(D_{\text{KL}}\) in \(\mathcal{L}_{\text{Reg}}\) to minimize the regularization error, which often forces a simple form such as a Gaussian on \(f_{Enc_{\mathbf{\eta}}}(\mathbf{z}|\mathbf{x})\). It replaces \(\mathcal{L}_{\text{Reg}}\) with a code discriminator, \(CDis_{\mathbf{\theta^{\prime}}}\), which plays an adversarial game with \(Enc_{\mathbf{\eta}}\) to approximate \(f_{Enc_{\mathbf{\eta}}}(\mathbf{z}|\mathbf{x})\) such that the latent posterior matches the latent prior \(f_{\mathbf{Z}}(\mathbf{z})\). In fact, the code discriminator prompts the encoder to accurately encode the real distribution into the latent space, ensuring a more efficient and effective encoding process. Now, by considering the decoder as \(Gen_{\mathbf{\omega}}\), the 3D \(\alpha\)-WGPGAN is trained by minimizing the hybrid loss function
\[\mathcal{L}_{\text{EGen}}(\mathbf{\omega},\mathbf{\eta}) =-E_{F_{Enc_{\mathbf{\eta}}}(\mathbf{z}|\mathbf{x})}[Dis_{\mathbf{\theta}}(Gen_{ \mathbf{\omega}}(\mathbf{z}_{e}))]-E_{F_{\mathbf{Z}}}[Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}} (\mathbf{z}_{r}))]\] \[\quad+\lambda_{1}\left\|\mathbf{x}-Gen_{\mathbf{\omega}}(\mathbf{z}_{e})\right\| _{1}, \tag{5}\] \[\mathcal{L}_{\text{Dis}}(\mathbf{\theta}) =E_{F_{Enc_{\mathbf{\eta}}}(\mathbf{z}|\mathbf{x})}[Dis_{\mathbf{\theta}}(Gen_{ \mathbf{\omega}}(\mathbf{z}_{e}))]+E_{F_{\mathbf{Z}}}[Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}} (\mathbf{z}_{r}))]-2E_{F}[Dis_{\mathbf{\theta}}(\mathbf{x})],\] \[\quad+\lambda_{2}L_{\text{GP-Dis}}\] (6) \[\mathcal{L}_{\text{CDis}}(\mathbf{\theta^{\prime}}) =E_{F_{Enc_{\mathbf{\eta}}}(\mathbf{z}|\mathbf{x})}[CDis_{\mathbf{\theta^{\prime}} }(\mathbf{z}_{e})]-E_{F_{\mathbf{Z}}}[CDis_{\mathbf{\theta^{\prime}}}(\mathbf{z}_{r})]+ \lambda_{2}L_{\text{GP-CDis}}, \tag{7}\]
where \(\mathbf{z}_{r}\) is a noise vector drawn from the distribution \(F_{\mathbf{Z}}\). The encoder and generator form a network that is optimized using the loss function \(\mathcal{L}_{\text{EGen}}(\mathbf{\omega},\mathbf{\eta})\) in (5). The first terms in Equations (5) and (6) correspond to the WGAN loss. More precisely, the objective function of WGAN is constructed as
\[\arg\min_{\mathbf{\omega}}\mathcal{W}(F,F_{Gen_{\mathbf{\omega}}}), \tag{8}\]
where
\[\mathcal{W}(F,F_{Gen_{\mathbf{\omega}}})=\max_{\mathbf{\theta}\in\Theta}E_{F}[Dis_{ \mathbf{\theta}}(\mathbf{x})]-E_{F_{Gen_{\mathbf{\omega}}}}[Dis_{\mathbf{\theta}}(Gen_{\mathbf{ \omega}}(\mathbf{z}))] \tag{9}\]
is the Wasserstein distance obtained by using the Kantorovich-Rubinstein duality, in which \(\Theta\) contains the \(\mathbf{\theta}\)'s whose corresponding discriminators are 1-Lipschitz (Villani, 2008). Note that Equation (9) evaluates to zero if \(F=F_{Gen_{\mathbf{\omega}}}\).
Equation (8) is reformulated through Equations (5) and (6). The first expectation in Equation (9), which does not depend on \(\mathbf{\omega}\), plays no role in the gradient descent with respect to \(\mathbf{\omega}\) and is therefore omitted in Equation (5). Since the reconstruction sample is treated as fake data, the loss function (8) evaluated at \(Gen_{\mathbf{\omega}}(\mathbf{z}_{e})\) is also added to the 3D \(\alpha\)-WGPGAN loss function. The last term in Equation (5) is the \(L_{1}\)-norm reconstruction loss, which is obtained by assigning a Laplace distribution with mean zero and scale parameter \(\lambda_{1}\) to the generator distribution, in the sense that \(f_{Gen_{\mathbf{\omega}}}(\mathbf{x}|\mathbf{z})\propto e^{-\lambda_{1}\|\mathbf{x}-Gen_{\bm{\omega}}(\mathbf{z}_{e})\|_{1}}\).
The gradient penalty \(L_{\text{GP-Dis}}=E_{F_{\widehat{\mathbf{X}}}}[(\left\lVert\nabla_{\hat{\mathbf{x}}}Dis_{\mathbf{\theta}}(\hat{\mathbf{x}})\right\rVert_{2}-1)^{2}]\) with coefficient \(\lambda_{2}\) is added to Equation (6) to enforce the 1-Lipschitz constraint on the discriminator, where \(\widehat{\mathbf{x}}=t\mathbf{x}+(1-t)\widetilde{\mathbf{x}}\), \(0\leq t\leq 1\), \(\widetilde{\mathbf{x}}\) is any sample generated by \(Gen_{\mathbf{\omega}}\), and \(F_{\widehat{\mathbf{X}}}\) is the distribution function of \(\hat{\mathbf{x}}\). If \(\mathbf{\theta}^{\star}\) is the optimized parameter of \(Dis_{\mathbf{\theta}}\) that maximizes \(\mathcal{L}_{W}(\mathbf{\theta},\mathbf{\omega})\), then \(Dis_{\mathbf{\theta}^{\star}}\) should minimize \(L_{\text{GP-Dis}}\) (Gulrajani et al., 2017). The code discriminator loss in Equation (7) has the same structure as \(\mathcal{L}_{\text{Dis}}(\mathbf{\theta})\), but with \(\mathbf{z}_{e}\) treated as fake and \(\mathbf{z}\) as real observed data. Kwon et al. (2019) designed the encoder and discriminator networks with five 3D convolutional layers followed by batch normalization (BatchNorm) layers and leaky rectified linear unit (LeakyReLU) activation functions.
The generator network includes a transpose convolutional (TransposeConv) layer, four 3D convolutional layers, and a BatchNorm layer with a ReLU activation function in each
Figure 1: The general architecture of 3D \(\alpha\)-WGPGAN comprises three convolutional networks (encoder, generator, and discriminator) and a fully connected-based network (code discriminator).
layer. Typically, BatchNorm and ReLU are applied to ensure network stability. The TransposeConv layer enables the network to "upsample" the input noise vector and generate an output tensor with a larger spatial resolution. The upscale layers are also implemented in the last four layers of the generator network to increase the spatial resolution of the input feature maps. The code discriminator consists of three fully connected layers followed by BatchNorm and LeakyReLU activation functions. Figure 1 provides a detailed illustration of the architecture of the 3D \(\alpha\)-WGPGAN.
### Dirichlet Process Prior
The DP is a widely used prior in BNP methods, introduced by Ferguson (1973). It can be seen as a generalization of the Dirichlet distribution, where a random probability measure \(F\) is constructed around a fixed probability measure \(H\) (the base measure) with variation controlled by a positive real number \(a\) (the concentration parameter). In this construction, \(H\) encodes the statistician's prior knowledge about the data distribution, while \(a\) reflects the strength of that knowledge.
Formally, \(F\) is a DP on a space \(\mathfrak{X}\) with a \(\sigma\)-algebra \(\mathcal{A}\) of subsets of \(\mathfrak{X}\) if, for every measurable partition \(A_{1},\ldots,A_{k}\) of \(\mathfrak{X}\) with \(k\geq 2\), the joint distribution of the vector \((F(A_{1}),\ldots,F(A_{k}))\) has a Dirichlet distribution with parameters \((aH(A_{1}),\ldots,aH(A_{k}))\). Additionally, it is assumed that \(H(A_{j})=0\) implies \(F(A_{j})=0\) with probability one. One of the most important properties of the DP is its conjugacy property, where the posterior distribution of \(F\) given a sample \(x\) drawn from \(F\sim DP(a,H)\), denoted by \(F^{pos}\), is also a DP with concentration parameter \(a+n\) and base measure \(H^{*}=a(a+n)^{-1}H+n(a+n)^{-1}F_{x}\), where \(F_{x}\) is the empirical cumulative distribution function of the sample \(x\). This property allows for easy computation of the posterior distribution of \(F\).
Alternative definitions for DP have been proposed, including infinite series representations by Bondesson (1982) and Sethuraman (1994). The method introduced by Sethuraman (1994) is commonly referred to as the stick-breaking representation and is widely used for DP inference. However, Zarepour and Al-Labadi (2012) noted that, unlike the series of Bondesson (1982), the stick-breaking representation lacks normalization terms that convert it into a probability measure. Additionally, simulating from infinite series is only feasible with a truncation approach for the terms inside the series. Ishwaran and Zarepour (2002) introduced an approximation of DP in the form of a finite series (10), which can be easily simulated.
\[F_{N}=\sum_{i=1}^{N}J_{i,N}\delta_{Y_{i}}, \tag{10}\]
where \((J_{1,N},\ldots,J_{N,N})\sim\text{Dir}(a/N,\ldots,a/N)\), and \(Y_{i}\overset{i.i.d.}{\sim}H\). In this paper, the variables \(J_{i,N}\) and \(Y_{i}\) are used to represent the weight and location of the DP, respectively.
Ishwaran and Zarepour (2002) demonstrated that \((F_{N})_{N\geq 1}\) converges in distribution to \(F\), where \(F_{N}\) and \(F\) are random values in the space \(M_{1}(\mathbb{R})\) of probability measures on \(\mathbb{R}\) endowed with the topology of weak convergence. To generate \((J_{i,N})_{1\leq i\leq N}\), one can put \(J_{i,N}=H_{i,N}/\sum_{i=1}^{N}H_{i,N}\), where \((H_{i,N})_{1\leq i\leq N}\) is a sequence of independent and identically distributed Gamma\((a/N,1)\) random variables that are independent of \((Y_{i})_{1\leq i\leq N}\). This form of approximation has been used so far in various applications, including hypothesis testing
and GANs, and has yielded outstanding results. It also leads to excellent outcomes in the subsequent sections.
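To illustrate the approximation (10), a minimal Python sketch of drawing its weights via the Gamma normalization just described and its locations from a base measure \(H\) is given below; the choice of a standard Gaussian base measure and the values of \(a\) and \(N\) are purely illustrative.

```python
import numpy as np

def sample_dp_approximation(a, N, base_sampler, rng=None):
    """Draw the weights and locations of the finite approximation F_N in Eq. (10)."""
    rng = rng if rng is not None else np.random.default_rng()
    # H_{i,N} ~ Gamma(a/N, 1), then J_{i,N} = H_{i,N} / sum_i H_{i,N}
    h = rng.gamma(shape=a / N, scale=1.0, size=N)
    weights = h / h.sum()
    locations = base_sampler(N, rng)            # Y_i ~ H, i.i.d.
    return weights, locations

# Illustrative use with a standard Gaussian base measure H
w, y = sample_dp_approximation(a=10.0, N=100,
                               base_sampler=lambda n, rng: rng.standard_normal(n))
```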
### Maximum Mean Discrepancy Measure
The MMD measure, introduced by Gretton et al. (2012), is a kernel-based measure that evaluates the similarity between the features of samples from two high-dimensional distributions. More precisely, let \(F\) and \(F_{Gen_{\mathbf{\omega}}}\) denote the distribution of the real and fake data, respectively. Then, for given \(\mathbf{X},\mathbf{X}^{\prime}\stackrel{{ i.i.d.}}{{\sim}}F\), and \(\mathbf{Y},\mathbf{Y}^{\prime}\stackrel{{ i.i.d.}}{{\sim}}F_{Gen_{ \mathbf{\omega}}}\), the MMD is defined by
\[MMD^{2}(F,F_{Gen_{\mathbf{\omega}}})=E_{F}[k(\mathbf{X},\mathbf{X}^{\prime})]-2E_{ F,F_{Gen_{\mathbf{\omega}}}}[k(\mathbf{X},\mathbf{Y})]+E_{F_{Gen_{\mathbf{\omega}}}}[k( \mathbf{Y},\mathbf{Y}^{\prime})], \tag{11}\]
where \(k(\cdot,\cdot)\) is a kernel function with feature space corresponding to a universal reproducing kernel Hilbert space. Due to the inaccessibility of \(F\) and \(F_{Gen_{\mathbf{\omega}}}\), Equation (11) is usually estimated by
\[MMD^{2}_{n,m}(F,F_{Gen_{\mathbf{\omega}}})=\frac{1}{n^{2}}\sum_{i\neq j}^{n}k( \mathbf{X}_{i},\mathbf{X}_{j})-\frac{2}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}k( \mathbf{X}_{i},\mathbf{Y}_{j})+\frac{1}{m^{2}}\sum_{i\neq j}^{m}k(\mathbf{Y}_{ i},\mathbf{Y}_{j}), \tag{12}\]
using two samples \((\mathbf{X}_{1},\ldots,\mathbf{X}_{n})\sim F\) and \((\mathbf{Y}_{1},\ldots,\mathbf{Y}_{m})\sim F_{Gen_{\mathbf{\omega}}}\). The MMD is a non-negative measure that is zero if and only if \(F\) and \(F_{Gen_{\mathbf{\omega}}}\) are identical (Gretton et al., 2012). It can be used as a regularizer in GANs to encourage the generator to produce data that has similar features to the real dataset. Dziugaite et al. (2015) considered the loss function (13) to train the generator:
\[\arg\min_{\mathbf{\omega}}MMD^{2}(F,F_{Gen_{\mathbf{\omega}}}). \tag{13}\]
However, Li et al. (2015) mentioned that by incorporating the square root of the MMD measure into the GAN loss function, the gradients used to update the generator can be made more stable, preventing them from becoming too small and causing vanishing gradients.
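As a concrete illustration, a minimal PyTorch sketch of the empirical estimator in Equation (12), using the mixture-of-Gaussian-kernels choice adopted later in the paper, is given below; the bandwidth values are placeholders and, for brevity, the diagonal terms are not excluded, so this is the biased version of the estimator.

```python
import torch

def mix_rbf_kernel(a, b, sigmas=(1.0, 2.0, 4.0, 8.0, 16.0)):
    # k(a_i, b_j) = sum_t exp( -||a_i - b_j||^2 / (2 sigma_t^2) )
    d2 = torch.cdist(a, b) ** 2
    return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)

def mmd2(x, y, sigmas=(1.0, 2.0, 4.0, 8.0, 16.0)):
    # Biased counterpart of Equation (12): mean_xx - 2 * mean_xy + mean_yy
    kxx = mix_rbf_kernel(x, x, sigmas).mean()
    kxy = mix_rbf_kernel(x, y, sigmas).mean()
    kyy = mix_rbf_kernel(y, y, sigmas).mean()
    return kxx - 2.0 * kxy + kyy
```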
### Semi-BNP MMD GAN
The semi-BNP MMD GAN is constructed by training a generator network so as to optimize a Bayesian two-sample test statistic that is treated as the discriminator (Fazeli-Asl et al., 2023). Given input data \(\mathbf{X}\sim F\) and assuming a prior distribution \(F^{pri}:=F\sim DP(a,H)\), the prior-based MMD distance \(MMD^{2}(F^{pri},F_{Gen_{\mathbf{\omega}}})\) is defined using the weights and locations of the DP approximation proposed by Ishwaran and Zarepour (2002). For generated samples \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{m}\sim F_{Gen_{\mathbf{\omega}}}\), the posterior-based MMD distance, after updating from _a priori_ to _a posteriori_, is defined by
\[MMD^{2}(F^{pos},F_{Gen_{\mathbf{\omega}}}) =\sum_{\ell,t=1}^{N}J_{\ell,N}^{*}J_{t,N}^{*}k(\mathbf{V}_{\ell}^ {*},\mathbf{V}_{t}^{*})-\frac{2}{m}\sum_{\ell=1}^{N}\sum_{t=1}^{m}J_{\ell,N}^ {*}k(\mathbf{V}_{\ell}^{*},\mathbf{Y}_{t})\] \[+\frac{1}{m^{2}}\sum_{\ell,t=1}^{m}k(\mathbf{Y}_{\ell},\mathbf{Y} _{t}), \tag{14}\]
where
\[\mathbf{V}_{1}^{*},\ldots,\mathbf{V}_{N}^{*}\overset{i.i.d.}{\sim}\frac{a}{a+n}H+ \frac{n}{a+n}F_{\mathbf{x}},\ \ (J_{1,N}^{*},\ldots,J_{N,N}^{*})\sim\text{Dir}(\frac{a+n}{N},\ldots,\frac{a+n}{N}),\]
and \(F_{\mathbf{x}}\) denotes the empirical distribution of observed data. This procedure considers \(k(\cdot,\cdot)\) as a mixture of Gaussian kernels using various bandwidth parameters. For instance, for a set of fixed bandwidth parameters such as \(\{\sigma_{1},\ldots,\sigma_{T}\}\) and two vectors \(\mathbf{V}_{\ell}^{*}\) and \(\mathbf{Y}_{t}\), \(k(\mathbf{V}_{\ell}^{*},\mathbf{Y}_{t})=\sum_{t^{\prime}=1}^{T}\exp\frac{-|| \mathbf{V}_{\ell}^{*}-\mathbf{Y}_{t}||^{2}}{2\sigma_{t^{\prime}}^{2}}\).
Let density functions of the square root of the posterior and prior-based MMD measures be denoted by \(\pi_{MMD}(\cdot|\mathbf{X})\) and \(\pi_{MMD}(\cdot)\), respectively. The semi-BNP MMD GAN can be trained by optimizing the objective function:
\[\arg\max_{\mathbf{\omega}}\text{RB}_{MMD(F,F_{Gen_{\mathbf{\omega}}})}(0|\mathbf{X}), \tag{15}\]
where \(\text{RB}_{MMD(F,F_{Gen_{\mathbf{\omega}}})}(0|\mathbf{X})=\pi_{MMD}(0|\mathbf{X})/\pi_{MMD}(0)\) is the relative belief (RB) ratio, a Bayesian statistic that measures the change in the belief that \(MMD(F,F_{Gen_{\mathbf{\omega}}})=0\) is true from _a priori_ to _a posteriori_ (Evans, 2015). A value of \(\text{RB}_{MMD(F,F_{Gen_{\mathbf{\omega}}})}(0|\mathbf{X})>1\) indicates evidence in favor of the features of the generated samples being matched to those of the real data. The generator in this procedure was implemented using the original architecture proposed by Goodfellow et al. (2014), and the model architecture is illustrated in Figure 2. Indeed, the discriminator calculates the RB ratio, and the generator aims to maximize this value. Fazeli-Asl et al. (2023) remarked that the semi-BNP MMD GAN is equivalently trained by minimizing the simple loss function \(\mathcal{L}_{Gen}(\mathbf{\omega})=MMD(F^{pos},F_{Gen_{\mathbf{\omega}}})\).
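A minimal PyTorch sketch of evaluating the posterior-based distance in Equation (14) is given below. The locations \(\mathbf{V}^{*}\) are drawn from the mixture base measure \(H^{*}\) and the weights from \(\text{Dir}(\frac{a+n}{N},\ldots,\frac{a+n}{N})\); taking \(H\) to be a Gaussian fitted componentwise to the observed data and the kernel bandwidths used here are assumptions of this sketch only.

```python
import torch

def posterior_mmd2(x_obs, y_fake, a, N, sigmas=(1.0, 2.0, 4.0, 8.0, 16.0)):
    """Sketch of Equation (14): posterior-based MMD^2 against generator samples."""
    n, d = x_obs.size(0), x_obs.size(1)
    # Locations V*_l ~ H* = a/(a+n) H + n/(a+n) F_x
    from_prior = torch.rand(N) < a / (a + n)
    h_draws = x_obs.mean(0) + x_obs.std(0) * torch.randn(N, d)   # draws from H (assumed Gaussian)
    data_draws = x_obs[torch.randint(0, n, (N,))]                # resamples from F_x
    v = torch.where(from_prior.unsqueeze(1), h_draws, data_draws)
    # Weights (J*_{1,N}, ..., J*_{N,N}) ~ Dir((a+n)/N, ..., (a+n)/N)
    j = torch.distributions.Dirichlet(torch.full((N,), (a + n) / N)).sample()

    def k(p, q):
        d2 = torch.cdist(p, q) ** 2
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)

    m = y_fake.size(0)
    return (j @ k(v, v) @ j
            - (2.0 / m) * (j @ k(v, y_fake)).sum()
            + k(y_fake, y_fake).sum() / m ** 2)
```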
## 3 A Stochastic Representation for Wasserstein Distance
In this section, we present a stochastic procedure for measuring the Wasserstein distance between a fixed probability measure and a random probability measure modeled by a DP
Figure 2: The general architecture of semi-BNP MMD GAN with four ReLU layers and a sigmoid activation function in the final layer.
prior. For a fixed value of \(a\) and a probability measure \(H\), we model the unknown data distribution \(F\) as a random probability measure using the following model:
\[\mathbf{X} \sim F, \tag{16}\] \[F^{pri} :=F\sim DP(a,H),\] (17) \[F^{pos} :=F|\mathbf{X} \sim DP(a+n,H^{*}), \tag{18}\]
where \(\mathbf{X}=(\mathbf{X}_{1},\ldots,\mathbf{X}_{n})\) represent \(n\) samples in \(\mathbb{R}^{d}\). Let \(G\) be any fixed distribution and \(\{D_{\boldsymbol{\theta}}\}_{\boldsymbol{\theta}\in\Theta}\) be a parametrized family of continuous functions that all are 1-Lipschitz. We propose an approximation for Wasserstein between \(F^{pos}\) and \(G\) as
\[\mathcal{W}(F^{pos},G)=\max_{\boldsymbol{\theta}\in\Theta}\sum_{i=1}^{N}\left( J_{i,N}^{*}D_{\boldsymbol{\theta}}(\mathbf{V}_{i}^{*})-\frac{D_{\boldsymbol{ \theta}}(\mathbf{Y}_{i})}{N}\right), \tag{19}\]
where \((J_{1,N}^{*},\ldots,J_{N,N}^{*})\sim Dir(\frac{a+n}{N},\ldots,\frac{a+n}{N})\), \(\mathbf{V}_{1}^{*},\ldots,\mathbf{V}_{N}^{*}\stackrel{{ i.i.d.}}{{\sim}}H^{*}\), and \(Y_{1},\ldots,Y_{N}\) is a sample generated from \(G\). The next theorem presents some asymptotic properties of \(\mathcal{W}(F^{pos},G)\) with respect to \(N,n\), and \(a\).
**Theorem 1**: _Assuming (16)-(18), it follows that for any fixed probability distribution \(G\):_
1. \(\mathcal{W}(F^{pos},G)\stackrel{{ a.s.}}{{\longrightarrow}} \mathcal{W}(F,G)\) _as_ \(N,n\rightarrow\infty\)_._
2. \(\mathcal{W}(F^{pos},G)\stackrel{{ a.s.}}{{\longrightarrow}} \mathcal{W}(H,G)\) _as_ \(N,a\rightarrow\infty\)_,_
_where "\(\stackrel{{ a.s.}}{{\longrightarrow}}\)" denotes almost sure convergence and \(\mathcal{W}(F,G)\) is defined in (9) with \(Dis_{\boldsymbol{\theta}}=D_{\boldsymbol{\theta}}\) and \(F_{Gen_{\boldsymbol{\omega}}}=G\)._
**Proof** Given that \(E_{F^{pos}}(J_{i,N}^{*})=\frac{1}{N}\) for all \(i\in\{1,\ldots,N\}\), we can use Chebyshev's inequality to obtain
\[\Pr\left\{\big{|}J_{i,N}^{*}-1/N\big{|}\geq\epsilon\right\}\leq\frac{Var_{F^{ pos}}(J_{i,N}^{*})}{\epsilon^{2}}, \tag{20}\]
for any \(\epsilon>0\). Substituting \(Var_{F^{pos}}(J_{i,N}^{*})=\frac{N-1}{N^{2}(a+n+1)}\) into (20) and setting \(n=k^{2}+b\), where \(k\in\mathbb{N}\) and \(b\in\{0,1,\ldots\}\), yields
\[\Pr\left\{\big{|}J_{i,N}^{*}-1/N\big{|}\geq\epsilon\right\}\leq\frac{1}{k^{2} \epsilon^{2}}.\]
The convergence of the series \(\sum_{\kappa=1}^{\infty}\kappa^{-2}\) implies that \(\sum_{\kappa=1}^{\infty}\Pr\left\{\Big{|}J_{i,N}^{*}-1/N\Big{|}\geq\epsilon \right\}<\infty\). As \(k\rightarrow\infty\), or equivalently \(n\rightarrow\infty\), the first Borel-Cantelli lemma implies that \(J_{i,N}^{*}\stackrel{{ a.s.}}{{\longrightarrow}}1/N\), as required. Meanwhile, as \(n\) approaches infinity, the Glivenko-Cantelli theorem implies that \(F_{\boldsymbol{x}}\) converges to \(F\) and, subsequently, \(H^{*}\) converges to \(F\). This convergence indicates that the probability of drawing a sample from \(F\) approaches 1. As \(n\rightarrow\infty\), \(\mathbf{V}_{i}^{*}\rightarrow\mathbf{X}_{i}\), where \(\mathbf{X}_{i}\) is a random variable following distribution \(F\), for \(i=1,\ldots,N\). By applying the continuous mapping theorem, it follows that \(D_{\boldsymbol{\theta}}(\mathbf{V}_{i}^{*})\) converges to \(D_{\boldsymbol{\theta}}(\mathbf{X}_{i})\) as \(n\rightarrow\infty\), and thus we have
\[I=\sum_{i=1}^{N}\left(J_{i,N}^{*}D_{\boldsymbol{\theta}}(\mathbf{V}_{i}^{*})- \frac{D_{\boldsymbol{\theta}}(\mathbf{Y}_{i})}{N}\right)\stackrel{{ a.s.}}{{\longrightarrow}}\frac{1}{N}\sum_{i=1}^{N}(D_{ \boldsymbol{\theta}}(\mathbf{X}_{i})-D_{\boldsymbol{\theta}}(\mathbf{Y}_{i})). \tag{21}\]
Applying the strong law of large numbers to the right-hand side of Equation (21) yields
\[I\xrightarrow{a.s.}E_{F}[D_{\mathbf{\theta}}(\mathbf{X}_{1})]-E_{G}[D_{\mathbf{\theta}}( \mathbf{Y}_{1})],\]
as \(N\to\infty.\) Since \(\max(\cdot)\) is a continuous function, the proof of (i) is completed by using the continuous mapping theorem. The proof of (ii) follows from a similar approach, with \(a=\kappa^{2}c\) considered in \(Var_{F^{pos}}(J^{*}_{i,N}),\) for \(\kappa\in\{0,1,\ldots\}\) and a fixed positive value of \(c.\)\(\blacksquare\)
The next corollary demonstrates a crucial property of the \(\mathcal{W}(F^{pos},G)\) metric, which makes it a convenient tool for comparing two models.
**Corollary 2**: _Let \(\{G_{k}\}_{k\geq 1}\) be a sequence of distribution functions on the data space. Under the conditions of Theorem 1, \(\mathcal{W}(F^{pos},G_{k})\to 0\) if and only if \(G_{k}\xrightarrow{d}F\), as \(N,n,k\to\infty\), where "\(\xrightarrow{d}\)" indicates convergence in distribution._
**Proof** Arjovsky et al. (2017, Theorem 2) showed that \(\mathcal{W}(F,G_{k})\to 0\) if and only if \(G_{k}\xrightarrow{d}F\), as \(k\to\infty\). The proof is completed by applying this result to part (i) of Theorem 1. \(\blacksquare\)
The next Lemma provides a lower bound for the expectation of \(\mathcal{W}(F^{pos},G)\).
**Lemma 3**: _Under assumptions of Theorem 1, we have_
\[E[W(F^{pos},G)]\geq W(H^{*},G).\]
**Proof** By virtue of the convexity property of the maximum function, Jensen's inequality implies
\[E\left[\max_{\mathbf{\theta}\in\Theta}\sum_{i=1}^{N}\left(J^{*}_{i,N }D_{\theta}(\mathbf{V}^{*}_{i})-\frac{D_{\mathbf{\theta}}(\mathbf{Y}_{i})}{N} \right)\right] \geq\max_{\mathbf{\theta}\in\Theta}E\left[\sum_{i=1}^{N}\left(J^{*}_ {i,N}D_{\theta}(\mathbf{V}^{*}_{i})-\frac{D_{\mathbf{\theta}}(\mathbf{Y}_{i})}{N} \right)\right]\] \[=\max_{\mathbf{\theta}\in\Theta}\sum_{i=1}^{N}\frac{1}{N}\left(E_{H^ {*}}[D_{\theta}(\mathbf{V}^{*}_{i})]-E_{G}[D_{\mathbf{\theta}}(\mathbf{Y}_{i})]\right) \tag{22}\] \[=\max_{\mathbf{\theta}\in\Theta}E_{H^{*}}[D_{\theta}(\mathbf{V}^{*}_ {1})]-E_{G}[D_{\mathbf{\theta}}(\mathbf{Y}_{1})]. \tag{23}\]
Equation (22) is derived from the properties of the Dirichlet distribution, while Equation (23) follows because the random variables \((\mathbf{V}^{*}_{i})_{1\leq i\leq N}\) and \((\mathbf{Y}_{i})_{1\leq i\leq N}\) are identically distributed. \(\blacksquare\)
As \(\mathcal{W}(F^{pos},G)\) is interpreted as a BNP approximation of \(\mathcal{W}(F,G)\), it is important to evaluate the accuracy of this estimation. To address this concern, the following lemma provides an asymptotic bounds for the approximation error.
**Lemma 4**: _Let \(G\) be any fixed distribution and \(\{D_{\mathbf{\theta}}\}_{\mathbf{\theta}\in\Theta}\) be a parameterized family of continuous functions that all are 1-Lipschitz. Let \(\mathbf{\theta}^{*}_{BNP}\in\Theta\) be the value that optimizes
\(\mathcal{W}(F^{pos},G)\), that is,_
\[\mathcal{W}(F^{pos},G)=\sum_{i=1}^{N}\left(J_{i,N}^{*}D_{\boldsymbol{\theta}_{BNP }^{*}}(\mathbf{V}_{i}^{*})-\frac{D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{Y}_ {i})}{N}\right),\]
_and let \(\boldsymbol{\theta}^{*}\in\Theta\) be the value that optimizes \(\mathcal{W}(F,G)\), that is,_
\[\mathcal{W}(F,G)=E_{F}[D_{\boldsymbol{\theta}^{*}}(\mathbf{X})]-E_{G}[D_{ \boldsymbol{\theta}^{*}}(\mathbf{Y})].\]
_Then,_
\[\lim_{N,n\rightarrow\infty}|\mathcal{W}(F^{pos},G)-\mathcal{W}(F,G)|\leq\delta,\]
_where,_
\[\delta=E_{F}\left[\left|D_{\boldsymbol{\theta}_{BNP}^{*}}( \mathbf{X}_{1})\right|+\left|D_{\boldsymbol{\theta}^{*}}(\mathbf{X}_{1}) \right|\right]+E_{G}\left[\left|D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{Y} _{1})\right|+\left|D_{\boldsymbol{\theta}^{*}}(\mathbf{Y}_{1})\right|\right]\]
**Proof** Let \(I=E_{F}\left|D_{\boldsymbol{\theta}^{*}}(\mathbf{X}_{1})\right|+E_{G}\left|D_{\boldsymbol{\theta}^{*}}(\mathbf{Y}_{1})\right|\). Then the triangle inequality implies
\[\lim_{N,n\rightarrow\infty}|\mathcal{W}(F^{pos},G)-\mathcal{W}(F,G)| \leq\lim_{N,n\rightarrow\infty}\sum_{i=1}^{N}\left(\left|J_{i,N}^{*}D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{V}_{i}^{*})\right|+\left|\frac{D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{Y}_{i})}{N}\right|\right)+I\] \[=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{i=1}^{N}\left(\left|D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{X}_{i})\right|+\left|D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{Y}_{i})\right|\right)+I \tag{24}\] \[=E_{F}\left|D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{X}_{1})\right|+E_{G}\left|D_{\boldsymbol{\theta}_{BNP}^{*}}(\mathbf{Y}_{1})\right|+I, \tag{25}\]
where Equations (24) and (25) are obtained by employing a similar strategy to that used in the proof of Theorem 1. The proof is then completed by rearranging the terms in Equation (25).
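To make the estimator in Equation (19) concrete before moving to the full model, a minimal PyTorch sketch of the quantity inside the maximum, for a fixed 1-Lipschitz critic, is given below; the weights and locations would be drawn from the posterior DP as described above, and the maximization over \(\boldsymbol{\theta}\) would be carried out by training the critic (e.g., with a gradient penalty as in Section 2).

```python
import torch

def bnp_wasserstein_objective(d_theta, v_star, j_star, y_gen):
    """Inner objective of Eq. (19): sum_i ( J*_{i,N} D(V*_i) - D(Y_i)/N )."""
    term_post = (j_star * d_theta(v_star).squeeze()).sum()   # weighted posterior term
    term_gen = d_theta(y_gen).squeeze().mean()               # (1/N) * sum_i D(Y_i)
    return term_post - term_gen
```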
## 4 A BNP VAE-GAN Model for Data Generation
### Model Structure
We present a Bayesian generative model that incorporates expert knowledge into the prior distribution, instead of assuming a specific distribution for the data population. This is accomplished by selecting the base measure \(H\) in the BNP model defined by (16)-(18) to reflect the expert's opinion about the data distribution. A Gaussian distribution is a common choice for \(H\), covering the entire data space, with mean vector and covariance matrix given by (26).
\[\bar{\mathbf{X}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i},\hskip 28.452756ptS_{ \mathbf{X}}=\sum_{i=1}^{n}(\mathbf{X}_{i}-\bar{\mathbf{X}})(\mathbf{X}_{i}- \bar{\mathbf{X}})^{T}. \tag{26}\]
In our proposed BNP generative model, we use this choice of \(H\) in the DP prior (17) to model the data distribution. Furthermore, we employ a maximum a posteriori (MAP) estimate to choose the optimal value of the concentration parameter \(a\). This is accomplished by maximizing, over a range of \(a\) values, the log-likelihood of \(F^{pos}\) fitted to the given dataset. Bayesian optimization is then used to find the value of \(a\) with the highest posterior probability, which corresponds to the MAP estimate, by treating the posterior distribution as a black-box function. To limit the impact of the prior \(H\) on the results, we consider an upper bound of less than \(n/2\) for the values of \(a\) during the Bayesian optimization process, as mentioned in Fazeli-Asl et al. (2023). Adhering to this upper bound ensures that drawing samples from the observed data is at least twice as likely as generating samples from \(H\). Employing this approach enables us to establish an appropriate level of intensity for our prior knowledge.
Our generative model is then developed by constructing a hybrid model including a GAN, a VAE, and an AE+GMMN within the BNP framework. The GAN is the core of the model and we augmented it with a VAE by substituting the GAN generator with the VAE decoder. This step plays a crucial role in mitigating the mode collapse in the generator and enhancing its capacity to produce sharp images. We also integrated the idea of AE+GMMN into our GAN structure by incorporating a code generator in the latent space. This additional step encourages the generator to produce images with less noise, higher quality, and diversity (Li et al., 2015).
### Loss Function
We propose a BNP objective function that uses a combination of the Wasserstein and MMD distances to improve the stability and quality of a GAN with a generator network \(Gen_{\mathbf{\omega}}\), fed by the noise vector \(\mathbf{Z}_{r}\sim F_{\mathbf{Z}}\), and a discriminator network \(Dis_{\mathbf{\theta}}\). This approach not only prevents mode collapse but also results in better feature matching between the generated and real data distributions by capturing different aspects of the data distribution. To achieve this, we replace \(MMD(F,F_{Gen_{\mathbf{\omega}}})\) in the RB statistic given by Equation (15) with the mixed distance in Equation (27) to define a new GAN objective function given by Equation (28).
\[d_{\text{WMMD}}(F,F_{Gen_{\mathbf{\omega}}})=MMD(F,F_{Gen_{\mathbf{\omega}}})+\mathcal{ W}(F,F_{Gen_{\mathbf{\omega}}}). \tag{27}\]
\[\arg\max_{\mathbf{\omega}}\text{RB}_{d_{\text{WMMD}}(F,F_{Gen_{\mathbf{\omega}}})}(0|\mathbf{X}). \tag{28}\]
Here, \(\text{RB}_{d_{\text{WMMD}}}\) calculates the ratio of the density function of \(d_{\text{WMMD}}(F^{\text{pos}},F_{Gen_{\mathbf{\omega}}})\) to the density function of \(d_{\text{WMMD}}(F^{\text{pri}},F_{Gen_{\mathbf{\omega}}})\) at zero. If \(\text{RB}_{d_{\text{WMMD}}}>1\), it indicates evidence in favor of the fake and real samples being indistinguishable. Conversely, if \(\text{RB}_{d_{\text{WMMD}}}\leq 1\), it indicates evidence in favor of the fake and real samples being distinguishable.
Unlike the original objective function of the semi-BNP GAN in Equation (15), which relied solely on \(Gen_{\mathbf{\omega}}\), the updated objective function in Equation (28) takes into account both \(Gen_{\mathbf{\omega}}\) and \(Dis_{\mathbf{\theta}}\). This is accomplished by including \(\mathcal{W}(F,F_{Gen_{\mathbf{\omega}}})\) defined in Equation
(9) in which \(Dis_{\mathbf{\theta}}\) is considered a continuous 1-Lipschitz function. To enforce this requirement, we will follow the methodology outlined in Gulrajani et al. (2017) and include a gradient penalty term in the discriminator loss function. The generator tries to maximize \(\text{RB}_{d_{\text{WMDD}}}\) to produce highly realistic samples, while the discriminator tries to minimize this value to effectively distinguish between real and fake samples.
According to Section 2.7, optimizing (28) can be interpreted as the generator's attempt to minimize \(d_{\text{WMDD}}(F^{\text{pos}},F_{Gen_{\mathbf{\omega}}})\), which is the mixture of BNP Wasserstein and MMD distances, given by (19) and (14), respectively. Simultaneously, the discriminator attempts to maximize \(d_{\text{WMDD}}(F^{\text{pos}},F_{Gen_{\mathbf{\omega}}})\) through optimizing \(\mathcal{W}(F^{\text{pos}},F_{Gen_{\mathbf{\omega}}})\). Hence, optimizing the GAN objective function should be changed to
\[\arg\min_{\mathbf{\omega}}d_{\text{WMDD}}(F^{pos},F_{Gen_{\mathbf{\omega}}}).\]
Following the methodology proposed by Larsen et al. (2016), we connect our GAN to a VAE by integrating the generator network in the GAN and the decoder network in a VAE. Additionally, we adopt the perspective suggested by Kwon et al. (2019), where the encoder and generator are treated as two sub-networks within a network to construct a unified loss function. Therefore, in addition to feeding \(Gen_{\mathbf{\omega}}\) with \(\mathbf{Z}_{r}\), it should also be fed with the encoded sample \(\mathbf{Z}_{e}\) generated by a parametrized encoder network \(Enc_{\mathbf{\eta}}(\mathbf{X})\). Now, instead of relying on a code discriminator network in 3D \(\alpha\)-WGPGAN, we propose using the objective function
\[\arg\min_{\mathbf{\eta}}MMD(F_{\mathbf{Z}},F_{Enc_{\mathbf{\eta}}}) \tag{29}\]
in the code space to approximate the variational distribution \(F_{Enc_{\mathbf{\eta}}}\). Here, \(F_{\mathbf{Z}}\) is treated as the distribution of the real noise while \(F_{Enc_{\mathbf{\eta}}}\) is treated as the distribution of the fake noise. The objective function (29) serves as the regularization term in VAE learning, and optimizing it implies that \(\mathbf{Z}_{e}\) is well matched to \(\mathbf{Z}_{r}\) and thus that the generator thoroughly covers the decoded space. This suggests that the generator has effectively prevented mode collapse (Kwon et al., 2019; Jafari et al., 2023). We have observed that our method not only produces accurate results but also significantly reduces the training time of the hybrid network.
On the other hand, to further enhance the coverage of the code space, we draw inspiration from AE+GMMN and incorporate an additional generator, \(CGen_{\mathbf{\omega}^{\prime}}\), in the code space. This generator takes the random noise sample \(\mathbf{Z}_{r}^{\prime}\) drawn from \(F_{\mathbf{Z}^{\prime}}\) in the sub-latent space \(\mathbb{R}^{q}\), and outputs the code sample \(\widetilde{\mathbf{Z}}_{e}:=CGen_{\mathbf{\omega}^{\prime}}(\mathbf{Z}_{r}^{ \prime})\) in the latent space \(\mathbb{R}^{p}\), where \(q<p\) and \(F_{\mathbf{Z}^{\prime}}\) is typically considered as a standard Gaussian distribution. The code generator \(CGen_{\mathbf{\omega}^{\prime}}\) fills in gaps or unexplored areas of the code space that the VAE may have missed, resulting in better code space coverage and reducing the risk of mode collapse. By generating more code samples using \(CGen_{\mathbf{\omega}^{\prime}}\), the performance of the VAE can be improved, particularly in scenarios with limited or small datasets. To train \(CGen_{\mathbf{\omega}^{\prime}}\) we employ objective function (30) by treating \(F_{Enc_{\mathbf{\eta}}}\) as the distribution of the real code and \(F_{CGen_{\mathbf{\omega}^{\prime}}}\) as the distribution of the fake code3.
Footnote 3: The objective functions in Equations (29) and (30) are beyond the scope of the BNP framework, as they involve comparing parametric distributions.
\[\arg\min_{\mathbf{\omega}^{\prime}}MMD(F_{Enc_{\mathbf{\eta}}},F_{CGen_{\mathbf{\omega}^{ \prime}}}) \tag{30}\]
For a set of noise vectors \(\mathbf{Z}_{r}=\{\mathbf{Z}_{r_{i}}\}_{i=1}^{N}\), real code vectors \(\mathbf{Z}_{e}=\{\mathbf{Z}_{e_{i}}\}_{i=1}^{N}\), and generated code vectors \(\widetilde{\mathbf{Z}}_{e}=\{\widetilde{\mathbf{Z}}_{e_{i}}\}_{i=1}^{N}\), we treat all \(Gen_{\mathbf{\omega}}(\mathbf{Z}_{r})\), \(Gen_{\mathbf{\omega}}(\mathbf{Z}_{e})\), and \(Gen_{\mathbf{\omega}}(\widetilde{\mathbf{Z}}_{e})\) as fake data and calculate the posterior mixed distance for these generated samples. Next, to train our hybrid model, we use stochastic gradient descent to minimize the following loss functions:
\[\mathcal{L}_{\text{EGen}}(\mathbf{\omega},\mathbf{\eta}) =-\sum_{i=1}^{N}\left(\frac{Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}( \mathbf{Z}_{r_{i}}))+Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}(\mathbf{Z}_{e_{i}}))+ Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}(\widetilde{\mathbf{Z}}_{e_{i}}))}{N}\right)\] \[+MMD(F^{pos},F_{Gen_{\mathbf{\omega}}(\mathbf{Z}_{r})})+MMD(F^{pos},F_ {Gen_{\mathbf{\omega}}(\widetilde{\mathbf{Z}}_{e})})\] \[+MMD(F_{\mathbf{Z}},F_{Enc_{\mathbf{\eta}}}),\] \[\mathcal{L}_{\text{Dis}}(\mathbf{\theta}) =\sum_{i=1}^{N}\left(\frac{Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}( \mathbf{Z}_{r_{i}}))+Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}(\mathbf{Z}_{e_{i}}))+ Dis_{\mathbf{\theta}}(Gen_{\mathbf{\omega}}(\widetilde{\mathbf{Z}}_{e_{i}}))}{N}\right.\] \[\left.\hskip 28.452756pt-3J_{i,N}^{*}Dis_{\mathbf{\theta}}(\mathbf{V}_ {i}^{*})\right)+\lambda L_{\text{GP-Dis}},\] \[\mathcal{L}_{\text{CGen}}(\mathbf{\omega}^{\prime}) =MMD(F_{Enc_{\mathbf{\eta}}},F_{CGen_{\mathbf{\omega}^{\prime}}}).\]
We excluded from \(\mathcal{L}_{\text{EGen}}(\mathbf{\omega},\mathbf{\eta})\) the terms of \(d_{\text{WMMD}}(F^{\text{pos}},F_{Gen_{\mathbf{\omega}}})\) that are independent of \(\mathbf{\omega}\) and \(\mathbf{\eta}\), as they do not contribute to the gradient descent with respect to these parameters. Similarly, in \(\mathcal{L}_{\text{Dis}}(\mathbf{\theta})\), we ignored the posterior MMD-based measures, which are independent of \(\mathbf{\theta}\). Since the posterior MMD-based measures in \(\mathcal{L}_{\text{EGen}}(\mathbf{\omega},\mathbf{\eta})\) compare the reconstruction and posterior samples, they can also be regarded as the posterior reconstruction term in VAE training.
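A minimal PyTorch-style sketch of one training iteration over these three losses is given below. It is only an illustration: the gradient-penalty term is indicated but omitted, the posterior atoms \(\mathbf{V}^{*}\) in \(\mathcal{L}_{\text{Dis}}\) are replaced by the observed mini-batch for brevity, the squared distances are used directly (whereas the losses above use MMD itself), and `mmd2` / `posterior_mmd2` stand for the estimators sketched earlier (with any DP hyperparameters already bound) and are passed in as callables.

```python
import torch

def train_step(x, enc, gen, cgen, dis, opt_eg, opt_d, opt_cg,
               mmd2, posterior_mmd2, p=1000, q=100):
    """One illustrative update of L_Dis, L_EGen, and L_CGen (a sketch only)."""
    n = x.size(0)
    z_r = torch.randn(n, p)                     # noise in the latent space
    z_e = enc(x)                                # encoded (real) codes
    z_e_tilde = cgen(torch.randn(n, q))         # codes from the code generator

    # --- discriminator update (L_Dis, gradient-penalty term omitted here) ---
    opt_d.zero_grad()
    loss_d = (dis(gen(z_r).detach()).mean()
              + dis(gen(z_e).detach()).mean()
              + dis(gen(z_e_tilde).detach()).mean()
              - 3.0 * dis(x).mean())            # posterior atoms V* replaced by the batch
    loss_d.backward()
    opt_d.step()

    # --- encoder + generator update (L_EGen) ---
    opt_eg.zero_grad()
    loss_eg = (-(dis(gen(z_r)).mean() + dis(gen(z_e)).mean() + dis(gen(z_e_tilde)).mean())
               + posterior_mmd2(x, gen(z_r)) + posterior_mmd2(x, gen(z_e_tilde))
               + mmd2(z_r, z_e))                # MMD(F_Z, F_Enc): the regularization term
    loss_eg.backward()
    opt_eg.step()

    # --- code generator update (L_CGen) ---
    opt_cg.zero_grad()
    loss_cg = mmd2(enc(x).detach(), cgen(torch.randn(n, q)))
    loss_cg.backward()
    opt_cg.step()
```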
### Network Architecture
The architecture of our networks is inspired by the network structure proposed by Kwon et al. (2019) due to its excellent properties. Specifically, we utilize the same layers shown in Figure 1 to construct a network of five 2D convolutional layers for each of \(Enc_{\mathbf{\eta}}\), \(Gen_{\mathbf{\omega}}\), and \(Dis_{\mathbf{\theta}}\), as the main task of this paper is to generate samples in 2-dimensional space. We also follow Kwon et al. (2019) in setting the dimension of the latent space to \(p=1000\). In the code space, we use a more sophisticated network architecture for the generator \(CGen_{\mathbf{\omega}^{\prime}}\), which includes three 2D convolutional layers and a fully connected layer, as opposed to the simple network architecture depicted in Figure 2. To begin, we set the sub-latent input vector size to \(q=100\) in the first layer. Each convolutional layer is accompanied by a batch normalization layer, a ReLU activation function, and a max pooling (MaxPool) layer. The MaxPool layer reduces the spatial size of the feature maps and allows for the extraction of the most significant features of the code samples. Furthermore, it imparts the translation invariance property to the code samples, making \(Gen_{\mathbf{\omega}}\) more robust to variations in the code space. We then use a fully connected layer to transform the feature maps into a code of size \(p=1000\), followed by a hyperbolic tangent activation function on the final layer, which squashes the outputs between -1 and 1. To ensure fair comparisons in the code space, we rescale \(\mathbf{Z}_{r}\), \(\mathbf{Z}_{e}\), and \(\widetilde{\mathbf{Z}}_{e}\) to be between -1 and 1. The overall architecture of our model is depicted in Figure 3.
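As an illustration of the code-space generator described above, a minimal PyTorch sketch is given below; the channel counts and the reshaping of the \(q=100\) noise vector into a \(10\times 10\) map are assumptions of this sketch, since they are not specified in the text.

```python
import torch
import torch.nn as nn

class CodeGenerator(nn.Module):
    """Sketch of CGen: q=100 sub-latent noise -> p=1000 code (channel sizes illustrative)."""
    def __init__(self, q=100, p=1000):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(128, p)      # the 10x10 map shrinks to 1x1 after three poolings
        self.out = nn.Tanh()             # squash codes to [-1, 1]

    def forward(self, z_prime):
        h = z_prime.view(-1, 1, 10, 10)  # reshape the q=100 noise into a 10x10 map (assumption)
        h = self.conv(h)
        return self.out(self.fc(h.flatten(1)))
```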
## 5 Experimental Results
In this section, we present the results of our experiments on four datasets to evaluate the performance of our models. We implement our models using the PyTorch library in Python, setting the mini-batch size to 16 and the number of workers to 64. As the hyperbolic tangent activation function is used on the last layer of the generator, we scale all datasets to the range of \(-1\) to \(1\) to ensure compatibility with the generator outputs. For comparison purposes, we evaluate our model against the semi-BNP MMD GAN (Fazeli-Asl et al., 2023) and AE+GMMN4 (Li et al., 2015). To provide a comprehensive comparison, we also adapt the 3D \(\alpha\)-WGPGAN5 settings to the 2D case and include the corresponding results.
Footnote 4: The relevant codes can be found at [https://github.com/yujiali/gmmn.git](https://github.com/yujiali/gmmn.git)
Footnote 5: The basic code for 3D generation is available at [https://github.com/cyclomon/3dbraingen](https://github.com/cyclomon/3dbraingen)
### Labeled Datasets
To evaluate model performance, we analyzed two handwritten datasets comprising numbers (MNIST) and letters (EMNIST). MNIST consists of 60,000 handwritten digits covering the 10 numbers from 0 to 9 (labels), each with 784 (\(28\times 28\)) dimensions. This dataset was divided into 50,000 training and 10,000 testing images, and we use the training set to train the network (LeCun, 1998). EMNIST is freely available online6 and is sourced from Cohen et al. (2017). It contains 372,450 samples of the 26 letters A-Z (labels), each represented as a \(28\times 28\) image. We allocate 85% of the samples to the training dataset and the rest to the testing dataset.
Footnote 6: [https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format](https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format)
#### 5.1.1 Evaluating Mode collapse
To examine the capability of the model to cover all modes, i.e., to prevent mode collapse, we train a convolutional network to predict the label of each generated sample.
Figure 3: The general architecture of the proposed hybrid model consists of four convolutional networks for the encoder, generator, code generator, and discriminator.
The structure of this network is provided in Figure 4. If the generator has effectively tackled mode collapse, we anticipate similar relative label frequencies in the training and generated datasets, indicating successful training. Plots (a) and (b) in Figure 6 show the relative frequency of labels in the handwritten numbers and letters datasets, respectively.
To train the classifier, we use the cross-entropy loss function and update the network's weights with the Adam optimizer over 60 epochs. We assess the classifier's efficacy by presenting the mean of the loss function across all mini-batch testing samples and the percentage of correct classification (accuracy rate) in Figure 5. The figure showcases the classifier's exceptional accuracy in classifying the dataset.
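A sketch of how the relative label frequencies of generated samples could be collected with such a classifier is shown below; drawing the latent codes from a standard normal distribution and the batch size used here are assumptions made for illustration.

```python
import torch
from collections import Counter

@torch.no_grad()
def label_frequencies(classifier, generator, n_samples=1000, n_classes=10,
                      batch=100, latent_dim=1000):
    """Generate samples, predict their labels, and return relative frequencies in %."""
    counts = Counter()
    for _ in range(n_samples // batch):
        z = torch.randn(batch, latent_dim)          # assumed latent sampling
        labels = classifier(generator(z)).argmax(dim=1)
        counts.update(labels.tolist())
    return {c: 100.0 * counts[c] / n_samples for c in range(n_classes)}
```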
The relative frequency plots of predicted labels are depicted in Figure 6, parts (c)-(j), for 1000 generated numbers (left-hand side) and letters (right-hand side) using various generative models. The ratios of the numbers in the training dataset are expected to be consistent, as indicated by plot (a). Examining the plot of generated samples reveals that each model has a distinct bias towards certain digits. Specifically, our model t
Figure 4: The classifier network architecture for predicting the handwritten dataset’s labels. The output of the fully connected layer is passed through a log softmax function to convert the raw output into a probability distribution over the classes.
Figure 5: The convolutional classifier’s mean loss (ML) and accuracy rate (AR) with a learning rate of 0.0002 across all mini-batch testing samples of numbers (solid line) and letters (dashed line) datasets.
at a frequency 4.64% higher than that in the training dataset (\(14.50\%-9.86\%\)), while the semi-BNP MMD GAN exhibits a similar bias towards digit 3 at a 4.18% higher frequency than the training dataset. Nevertheless, these differences are relatively minor compared to AE+GMMN and \(\alpha\)-WGPGAN, which demonstrate a significant tendency to memorize some modes and overlook certain digits, such as 4 and 8. Similar results can be observed from the relative frequency plots of predicted labels for the letters dataset. Plot (j) in Figure 6 clearly shows the failure of \(\alpha\)-WGPGAN to maintain the balance of the relative frequency of the data and generate the letter "M". In contrast, plot (d) indicates that our proposed model successfully preserves the proportion of modes in the generated samples and avoids mode collapse.
#### 5.1.2 Assessing Patterns and Evaluating Model Quality
We employed principal component analysis (PCA) to illustrate the patterns and correlations among data points in a two-dimensional space. Parts (a) and (b) in Figure 7 show the PCA plots for numbers and letters, respectively. Each axis in the plots represents a principal component, with the relevant real dataset used as a reference. It is important to note that PCA only provides a necessary condition for the similarity of the real and fake data distributions: dissimilar PCA plots for real and generated samples indicate that they do not follow the same distribution, but two similar PCA plots do not guarantee that the real and generated samples share the same distribution. The results presented in Figure 7 demonstrate that all models follow the pattern of the relevant real datasets except for the \(\alpha\)-WGPGAN on the letters dataset, which clearly exhibits a different shape and orientation than the structure of the real dataset.
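The PCA comparison can be reproduced along the following lines; fitting the components on the real data and projecting the generated data onto them is an assumption about the exact procedure, which the text does not spell out.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_projection(real, fake, n_components=2):
    """Project real and generated samples onto the first two principal
    components of the real (reference) dataset."""
    pca = PCA(n_components=n_components)
    real2d = pca.fit_transform(np.asarray(real).reshape(len(real), -1))
    fake2d = pca.transform(np.asarray(fake).reshape(len(fake), -1))
    return real2d, fake2d
```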
For a more comprehensive analysis, we adopt a mini-batch strategy suggested by Fazeli-Asl et al. (2023) to compute the MMD score, as given by Equation (12), between the generated and real samples. We present the discrepancy scores in density and box plots, along with violin plots, in Figure 7, parts (c) and (d). Overall, the MMD scores of all four models suggest some level of convergence around zero. However, the results of the proposed model and the semi-BNP model are comparable, with both models showing better convergence than the other two models. Specifically, part (d) shows that our proposed model achieves even better convergence than the semi-BNP model, highlighting the improvement obtained by extending the semi-BNP MMD model to the VAE+WMMD model.
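A compact sketch of this mini-batch scoring is given below; the Gaussian kernel, its bandwidth, and the number of random mini-batch pairs are assumptions, since Equation (12) and the sampling details are specified elsewhere in the paper.

```python
import torch

def gaussian_mmd2(x, y, sigma=1.0):
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def minibatch_mmd_scores(real, fake, batch=16, n_batches=200):
    """One MMD score per random mini-batch pair, as summarized in the violin plots."""
    scores = []
    for _ in range(n_batches):
        r = real[torch.randint(len(real), (batch,))].flatten(1).float()
        f = fake[torch.randint(len(fake), (batch,))].flatten(1).float()
        scores.append(gaussian_mmd2(r, f).item())
    return scores
```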
#### 5.1.3 Visualisation
To better demonstrate the visual capabilities of our proposed model in generating samples, we have displayed 60 samples generated from the model and have compared them to the samples generated by other models, as depicted in Figure 8. While the semi-BNP MMD model displays a range of generated characters in parts (e) and (f) of Figure 8, the images contain some noise that detracts from their quality. On the other hand, the results of AE+GMMN, displayed in parts (g) and (h), reveal blurry outputs that fall short of our desired standards. In contrast, the outputs of our model and \(\alpha\)-WGPGAN exhibit higher-resolution samples without any noise. However, it appears that the generated samples of the
\(\alpha\)-WGPGAN model contain slightly more ambiguous images compared to ours, suggesting that our model converges faster than \(\alpha\)-WGPGAN.
### Unlabeled Datasets
The performance of a GAN can vary depending on the characteristics of the training dataset, including its complexity, diversity, quality, and size. Thus, it is crucial to assess the model's effectiveness on more intricate datasets. For a comprehensive evaluation of the model's performance, facial and medical images are the two most important datasets to consider. In this regard, we use the following two main data sources and resize all images within them to 64\(\times\)64 pixels to train all models.
#### 5.2.1 Brain MRI Dataset
The brain MRI images present a complex medical dataset that poses a significant challenge for researchers. These images can be easily accessed online7, with both training and testing sets available, comprising a total of 7,023 images of human brain MRI. The dataset includes glioma, meningioma, no tumor, and pituitary tumors (Nickparvar, 2021). The training set is composed of 5,712 images of varying sizes, each with extra margins. To ensure consistency and reduce noise in the training data, a pre-processing code8 is used to remove margins before feeding them into the networks for training.
Footnote 7: [https://www.kaggle.com/dsv/2645886](https://www.kaggle.com/dsv/2645886)
Part (a) of Figure 9 illustrates the PCA plots of the generated samples for all models, highlighting that the dispersion and direction of the samples generated by the \(\alpha\)-WGPGAN model differ the most from the real dataset compared to the other models. Meanwhile, part (c) of Figure 9 shows almost identical convergence of MMD scores around zero for the compared models. However, Figure 10 portrays noisy and blurry outputs generated by the semi-BNP and AE+GMMN models, whereas our model and the \(\alpha\)-WGPGAN produce clear outputs.
#### 5.2.2 CelebFaces Attributes Dataset
The CelebFaces attributes dataset (CelebA), collected by Liu et al. (2015), includes 202,599 images of celebrities that are publicly available online9. The dataset features people in various poses, with different backgrounds, hairstyles and colors, skin tones, and wearing or not wearing glasses and hats, providing a rich resource for evaluating the performance of data augmentation models. While part (b) of Figure 9 shows a slight variation in the direction of the generated sample pattern, part (d) of the same figure highlights a significant gap between the convergence of MMD scores of our proposed approach and \(\alpha\)-WGPGAN compared to semi-BNP MMD and AE+GMMN around zero. Our proposed model yields even lower MMD scores than \(\alpha\)-WGPGAN.
Footnote 8: [https://github.com/masoudnick/Brain-Tumor-MRI-Classification/blob/main/Preprocessing.py](https://github.com/masoudnick/Brain-Tumor-MRI-Classification/blob/main/Preprocessing.py)
Footnote 9: [http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)
On the other hand, despite the limited number of samples shown in Figure 11, the generated images by our proposed model exhibit a remarkable diversity that encompasses a range of hair colors and styles, skin tones, and accessories such as glasses and hats. This variety suggests that our results are comparable to those produced by \(\alpha\)-WGPGAN. However, it is worth noting that the images generated by the AE+GMMN model not only
suffer from blurriness but also appear to be heavily biased toward female faces, indicating a potential issue with mode collapse in this type of dataset. While the samples generated by the semi-BNP MMD model displayed better results than the AE+GMMN, there is still a level of noise present, indicating that more iterations are needed to ensure model convergence.
## 6 Conclusion
We have proposed a powerful hybrid generative model that has produced realistic samples in the BNP framework. We have extended the semi-BNP MMD GAN, introduced by Fazeli-Asl et al. (2023), by incorporating both Wasserstein and MMD measures into the GAN loss function along with a VAE and an additional code generator to enhance its ability to produce diverse outputs. Different types of datasets have been used to examine the performance of the proposed model, indicating that it is a competitive model. Our model has also been compared to several generative models, such as \(\alpha\)-WGPGAN, and has outperformed them in terms of mitigating mode collapse and producing noise-free images.
To improve the efficiency and effectiveness of communication between the generator and discriminator networks, we plan to employ multiplex networks in our model. Multiplex networks can tackle the problem of data sparsity in GANs by incorporating multiple types of interactions and relationships between nodes. This allows the model to learn from a larger and more diverse set of data, improving its ability to generate realistic samples. For future work, we are considering extending our proposed idea to 3D medical datasets for detecting fetal anomalies such as Down syndrome, which could support earlier prenatal diagnosis. We hope that a more powerful generative model can help solve important issues in medical imaging.
Figure 6: The frequency percentage of true and predicted labels for handwritten datasets.
Figure 7: Top: PCA plots of 1000 generated samples versus real samples with fitted ellipse curves for handwritten datasets, indicating the spread of the samples in the corresponding directions. Bottom: Violin plots of MMD scores including density and box plots.
Figure 8: Visualisation of handwritten training samples and generated samples using various generative models after 400,000 iterations.
Figure 9: Top: PCA plots of 1000 generated samples versus real samples with fitted ellipse curves for MRI and celebA datasets, indicating the spread of the samples in the corresponding directions. Bottom: Violin plots of MMD scores including density and box plots.
Figure 10: Visualisation of MRI training samples and generated samples using various generative models after 400,000 iterations.
Figure 11: Visualisation of celebA training samples and generated samples using various generative models after 400,000 iterations. |
2303.01051 | H2 superglass on an amorphous carbon substrate | The phase diagram of a para-H2 monolayer adsorbed on an experimentally
synthesized amorphous carbon sheet was calculated using a diffusion Monte Carlo
technique. We found that the ground state of that system changed drastically
from a perfectly flat substrate to a situation in which the carbon atoms were
allowed a certain degree of disorder in the $z$ direction. In the first case,
at zero pressure we have a glass of density 0.056 $\pm$ 0.003 \AA$^{-2}$ in
equilibrium with an incommensurate solid of 0.068 $\pm$ 0.002 \AA$^{-2}$. At
the equilibrium density, the glass was found to have a tiny, but non-negligible
superfluid fraction of less than 1 \% (0.44 $\pm$ 0.05 \%). In the
$z$-disordered substrate, we observe a significant enhancement of the
superfluid fraction in the glass phase as well as a smaller but not zero value
in the incommensurate crystal. | M. C. Gordillo, J. Boronat | 2023-03-02T08:14:39Z | http://arxiv.org/abs/2303.01051v1 | # H\({}_{2}\) superglass on an amorphous carbon substrate
###### Abstract
The phase diagram of a \(para\)-H\({}_{2}\) monolayer adsorbed on an experimentally synthesized amorphous carbon sheet was calculated using a diffusion Monte Carlo technique. We found that the ground state of that system changed drastically from a perfectly flat substrate to a situation in which the carbon atoms were allowed a certain degree of disorder in the \(z\) direction. In the first case, at zero pressure we have a glass of density 0.056 \(\pm\) 0.003 A\({}^{-2}\) in equilibrium with an incommensurate solid of 0.068 \(\pm\) 0.002 A\({}^{-2}\). At the equilibrium density, the glass was found to have a tiny, but non-negligible superfluid fraction of less than 1 % (0.44 \(\pm\) 0.05 %). In the \(z\)-disordered substrate, we observe a significant enhancement of the superfluid fraction in the glass phase as well as a smaller but not zero value in the incommensurate crystal.
It is well-known that the most stable form of carbon is graphite. It is also well-known that one can isolate one of those single carbon layers and obtain a stable structure termed graphene [1; 2]. Even though the electronic properties of graphene are quite different from those of a three-dimensional arrangement [3; 4], theoretical calculations failed to find any significant difference between the adsorption behavior of quantum species (\({}^{4}\)He, H\({}_{2}\) and D\({}_{2}\)) on graphene and graphite [5; 6; 7].
The honeycomb structure of graphene is made up exclusively of carbon hexagons, apart from occasional defects. However, amorphous structures, in which we can have carbon pentagons, hexagons and even squares in addition to six-fold rings, can be created by bombarding graphene with an electron beam [8; 9] or synthesized directly by chemical vapor deposition [10]. The main features of the latter structure can be captured by a two-dimensional 40 \(\times\) 40 A\({}^{2}\) patch (Supplementary information of Ref. [10]) with no holes. The projection of those carbon coordinates onto the \(x-y\) plane is displayed in Fig. 1 as blue squares.
The goal of this work is to study the behavior of H\({}_{2}\) when adsorbed on an amorphous carbon surface. To do so, we will consider that substrate as adequately represented by the above coordinates, but bearing in mind that the carbon layer is not perfectly flat [10]. We solved the Schrodinger equation that describes the set of H\({}_{2}\) molecules on this new adsorbent using the diffusion Monte Carlo (DMC) method, both for flat and corrugated carbon structures. Our results show that a stable H\({}_{2}\) glass phase is formed irrespectively of the substrate. That glass has a tiny superfluid fraction if the underlying carbon sheet is flat, a fraction that is considerably enhanced for the \(z\)-disordered structure, i.e., we have a stable superglass. In the case of H\({}_{2}\), there is only one previous theoretical work that predicts a metastable three-dimensional superglass [11]. That glass would present a sizeable superfluid density around \(\sim\) 1 K.
The DMC method allows for obtaining exactly the ground state of an ensemble of interacting bosons, within the statistical uncertainties inherent to any Monte Carlo technique [12]. To do so, we first have to write down the Hamiltonian describing a monolayer of hydrogen on top of the amorphous carbon substrate. This is:
\[H=\sum_{i=1}^{N}\left[-\frac{\hbar^{2}}{2m}\nabla_{i}^{2}+V_{\rm ext}(x_{i},y _{i},z_{i})\right]+\sum_{i<j}^{N}V_{\rm H_{2}-H_{2}}(r_{ij}). \tag{1}\]
\(x_{i}\), \(y_{i}\), and \(z_{i}\) are the coordinates of each of the
Figure 1: Reconstruction of a monolayer amorphous carbon layer as given in Ref. [10]. Blue squares, carbon atoms. Full red circles, adsorption positions for H\({}_{2}\) molecules at the equilibrium density of the glass for a planar surface.
H\({}_{2}\) molecules with mass \(m\). \(V_{\rm ext}(x_{i},y_{i},z_{i})\) is the interaction potential between each molecule and all the carbon atoms in the 40 \(\times\) 40 A\({}^{2}\) patch that models the amorphous structure. As in previous works [6; 13; 14], that interaction was chosen to be of the Lennard-Jones type, with parameters obtained from Ref. [15]. \(V_{\rm H_{2}-H_{2}}\) is modeled by the standard Silvera and Goldman potential [16]. As indicated above, we consider two possibilities for the carbon substrate: a flat one, in which the carbon atoms are located in the \(z=0\) plane, and an irregular one, in which each \(z\) coordinate was chosen randomly in the interval [-0.4,0.4] A. This \(z\)-displacement is similar to the vertical distortion of the lattice found in previous ab initio calculations of amorphous graphite [17; 18]. To avoid the effects in the phase diagram of any particular \(z\) carbon distribution, all the simulations were repeated ten times with different carbon configurations and the results averaged over.
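As an illustration of how this external potential and the two substrate variants could be set up numerically, the short Python sketch below builds a (flat or \(z\)-disordered) carbon layer and sums the C-H\({}_{2}\) Lennard-Jones pair terms for one molecule. The well depth quoted in the code is an assumed placeholder: only \(\sigma_{C-H_{2}}\) is quoted explicitly later in the text, and the actual parameters are those of Ref. [15].

```python
import numpy as np

SIGMA_C_H2 = 2.97   # angstrom, Lennard-Jones sigma for the C-H2 pair (Ref. [15])
EPS_C_H2 = 32.0     # kelvin; assumed placeholder for the well depth

def make_carbon_layer(carbon_xy, z_disorder=0.0, rng=None):
    """Attach z coordinates to the carbon patch: zero for the flat substrate,
    or random values in [-z_disorder, z_disorder] for the corrugated one."""
    rng = rng or np.random.default_rng()
    n = len(carbon_xy)
    z = rng.uniform(-z_disorder, z_disorder, n) if z_disorder > 0 else np.zeros(n)
    return np.column_stack([np.asarray(carbon_xy), z])

def v_ext(r_h2, carbons):
    """External potential felt by one H2 molecule at r_h2 = (x, y, z):
    sum of C-H2 Lennard-Jones terms over all carbon atoms of the patch."""
    d = np.linalg.norm(carbons - np.asarray(r_h2), axis=1)
    sr6 = (SIGMA_C_H2 / d) ** 6
    return np.sum(4.0 * EPS_C_H2 * (sr6 ** 2 - sr6))
```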
To actually solve the Schrodinger equation defined by the many-body Hamiltonian in Eq. 1, one uses a trial wave function to reduce the variance to a manageable level. We use a symmetrized Nosanow-Jastrow wave function split as the product of two terms, the first one being:
\[\Phi_{J}({\bf r}_{1},\ldots,{\bf r}_{N})=\prod_{i<j}^{N}\exp\left[-\frac{1}{2 }\left(\frac{b}{r_{ij}}\right)^{5}\right] \tag{2}\]
that depends on the distances, \(r_{ij}\), between each pair of H\({}_{2}\) molecules and on \(b\), a variationally optimized parameter whose value was found to be 3.195 A[6; 14]. The second one is:
\[\Phi_{s}({\bf r}_{1},\ldots,{\bf r}_{N})=\prod_{i}^{N}\prod_{J}^ {N_{C}}\exp\left[-\frac{1}{2}\left(\frac{b_{\rm C}}{r_{iJ}}\right)^{5}\right]\] \[\times\prod_{I=1}^{N}\left[\sum_{i=1}^{N}\exp\{-c[(x_{i}-x_{\rm site,}I)^{2}+(y_{i}-y_{\rm site,}I)^{2}]\}\ \right]\] \[\times\prod_{i}^{N}\exp(-a(z_{i}-z_{site})^{2}) \tag{3}\]
Here, \(b_{C}\) was chosen to be 2.3 A, as in previous works [6; 14]. The \(z_{site}\) and \(a\) values that minimize the energy in the infinite dilution limit were \(z_{site}\) = 2.94 A and \(a\)=3.06 A\({}^{-1}\). If we consider the H\({}_{2}\) phase to be translationally invariant, \(c\) = 0; otherwise (i.e., for a solid or glass), \(c\) = 0.61 A\({}^{-2}\). The latter value for \(c\) was taken from Ref. [6], in which it was variationally optimized for an incommensurate solid; nevertheless, we checked that changes in its value of up to 50 % always produced worse energies when used in DMC. For both values of \(c\) the form of the _trial_ function allows the H\({}_{2}\) molecules to be involved in exchanges and recover indistinguishability, something necessary if we are to consider the possibility of a stable superfluid. The same form of the trial function was used for both the flat and corrugated carbon substrates.
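For concreteness, a minimal sketch of how the logarithm of this trial wave function could be evaluated for one configuration is given below; it follows Eqs. (2)-(3) literally, with the parameter values quoted above, and is meant as an illustration rather than as the production code.

```python
import numpy as np

def log_trial_wf(r_h2, carbons, sites, b=3.195, b_c=2.3, c=0.61, a=3.06, z_site=2.94):
    """ln of the symmetrized Nosanow-Jastrow trial function of Eqs. (2)-(3).
    r_h2: (N, 3) H2 positions; carbons: (Nc, 3); sites: list of (x_site, y_site).
    Setting c = 0 recovers the translationally invariant (liquid) form."""
    log_psi = 0.0
    n = len(r_h2)
    for i in range(n):                       # H2-H2 Jastrow factor
        for j in range(i + 1, n):
            rij = np.linalg.norm(r_h2[i] - r_h2[j])
            log_psi -= 0.5 * (b / rij) ** 5
    for ri in r_h2:                          # H2-carbon Jastrow factor
        d = np.linalg.norm(carbons - ri, axis=1)
        log_psi -= 0.5 * np.sum((b_c / d) ** 5)
    if c > 0:                                # symmetrized localization on the network
        for xs, ys in sites:
            g = np.exp(-c * ((r_h2[:, 0] - xs) ** 2 + (r_h2[:, 1] - ys) ** 2))
            log_psi += np.log(np.sum(g))
    log_psi -= a * np.sum((r_h2[:, 2] - z_site) ** 2)   # confinement in z
    return log_psi
```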
In Eq. 3, (\(x_{\rm site},y_{\rm site}\)) are the positions of the nodes that define the network we are interested in. For an incommensurate hydrogen solid, those will be the coordinates of the crystallographic sites of the quasi-two dimensional triangular lattice. On the other hand, the glass is defined by a set of local energy minima irregularly arranged. To define those minima, we created a two-dimensional grid of regularly spaced points at a distance \(z_{site}\) above the carbon layer and calculated \(V_{\rm ext}(x,y,z_{site})\) at such positions. After that, we chose the point of the grid for which \(V_{\rm ext}\) is minimum. Then, we searched for the point in the grid with the next-to-minimum value of the external potential located at a distance from the first of at least \(\sigma_{C-H_{2}}\) (Lennard-Jones parameter of the C-H\({}_{2}\) interaction, 2.97 A [15]). This is done to avoid H\({}_{2}\)-H\({}_{2}\) interactions that would contribute positive terms to the total energy in the full DMC scheme. The entire process is iterated until it is not possible to locate more hydrogen molecules at distances of at least \(\sigma_{C-H_{2}}\) from each other. After that, we are left with a list of (\(x_{\rm site},y_{\rm site}\)) positions ordered from minimum to maximum _potential_ energy. However, what we need is a list of nodes ordered from lowest to highest _total_ energy. To get it, we performed DMC calculations including one single molecule on each of those sites and reordered the list with respect to those total single-molecule energies. By following that procedure we minimize the risk of getting metastable states once we start filling that network with H\({}_{2}\). This is so because the difference between the full DMC energy of a system of \(N\) molecules and the set of \(N\) increasing individual energies provided by the algorithm just described comes from the H\({}_{2}\)-H\({}_{2}\) interaction. This contribution relies less and less on the details of the network for increasing density, since it depends primarily on the average first-neighbor distances. The maximum number of nodes found using this procedure was 104, and that was the maximum number of molecules used to describe the liquid and glass phases in the density range displayed in Fig. 2. On the other hand, that number oscillated between 90 and 120 for the incommensurate solid, the number of walkers in the DMC procedure being 300. The remaining simulation details are similar to those in Ref. [14] and are omitted here for simplicity. The locations of the nodes of the glass are displayed in Fig. 1 as red circles on top of the carbon coordinates (blue squares). To be sure that the choice of the cutoff distance does not change the nodes of the glass network, we repeated the entire procedure for exclusion values \(\sigma_{C-H_{2}}\pm\) 10%, finding exactly the same positions for the minima. We also checked that, for the densities considered in Fig. 2, filling the glass network in a different order than the one described above, or considering a different set of nodes (by starting the build-up from another node), did not alter the total energies in the density range displayed there.
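The greedy construction of the glass network described above can be summarized by the following sketch; the final reordering of the selected nodes by their single-molecule DMC total energies is not reproduced here.

```python
import numpy as np

def glass_sites(grid_xy, v_on_grid, r_min=2.97):
    """Pick grid points of increasing external potential, keeping only those at
    least r_min (= sigma_{C-H2}) away from every point already selected."""
    order = np.argsort(v_on_grid)
    chosen = []
    for idx in order:
        p = grid_xy[idx]
        if all(np.linalg.norm(p - q) >= r_min for q in chosen):
            chosen.append(p)
    return np.array(chosen)
```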
In Fig. 2, we show the energy per H\({}_{2}\) molecule as a function of the two-dimensional density for the three phases considered in this work: a liquid (full circles), an incommensurate triangular solid (full squares), and
a glass (full triangles) on a flat amorphous carbon substrate. In that figure, we also display the results for graphene, taken from Ref. [6]. Since both graphene and the amorphous substrate have the same carbon density, 0.38 A\({}^{-2}\), this will allow us to assess the effects of the randomness on the phase diagram of the two-dimensional H\({}_{2}\). What we see is that, at least in this case, the disorder in the substrate makes both the liquid and solid phases more stable than their corresponding counterparts on graphene. In any case, the triangular solid is still more stable than the liquid by 3.1 K at the densities corresponding to zero pressure (liquid binding energy, 453.8 \(\pm\) 0.5 K; solid binding energy, 456.9 \(\pm\) 0.5 K). Obviously, the lack of periodicity makes it impossible to have a commensurate structure, its place being taken by a glass arrangement of variable density. According to Fig. 2, the maximum binding energy for this structure is 457.0 \(\pm\) 0.5 K at a density of \(\rho\)= 0.056 \(\pm\) 0.003 A\({}^{-2}\). This density is appreciably smaller than the 0.068 \(\pm\) 0.002 A\({}^{-2}\) corresponding to the solid at the minimum of its curve, but equal to the one corresponding to the liquid structure (\(\rho\)= 0.057 \(\pm\) 0.003 A\({}^{-2}\)). However, the irregularity of the substrate produces a less stable phase than the \(\sqrt{3}\times\sqrt{3}\) solid on graphene. In any case, from the results displayed in Fig. 2 we can draw a horizontal double-tangent Maxwell construction line between the minima of the glass and solid curves. This means that between 0.056 and 0.068 A\({}^{-2}\), we would have a mixture of a glass and a triangular solid in the appropriate proportions to produce a system with the desired density. From 0.068 \(\pm\) 0.002 A\({}^{-2}\) upwards, the stable phase is a triangular solid.
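The equilibrium densities and binding energies quoted above are obtained from the minima of the fitted \(e(\rho)\) curves; a simple way to extract them, assuming a low-order polynomial fit of the DMC energies, is sketched below.

```python
import numpy as np

def equilibrium_point(rho, e_per_h2, deg=3):
    """Fit e(rho) with a polynomial and return the density and energy at its
    minimum inside the fitted window (equilibrium density and binding energy)."""
    rho = np.asarray(rho, dtype=float)
    coef = np.polyfit(rho, e_per_h2, deg)
    deriv = np.polyder(coef)
    roots = np.roots(deriv)
    real = roots[np.isreal(roots)].real
    minima = [r for r in real
              if rho.min() <= r <= rho.max() and np.polyval(np.polyder(deriv), r) > 0]
    rho_eq = min(minima, key=lambda r: np.polyval(coef, r))
    return rho_eq, np.polyval(coef, rho_eq)
```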
A very recent calculation [14] suggests that we can find supersolid behavior for a second layer of H\({}_{2}\) adsorbed on graphite in a very narrow density window around 0.1650 A\({}^{-2}\). By a supersolid we mean a solid structure (diagonal order) with a superfluid fraction different from zero (off-diagonal long-range order). By extension, a superglass would be a phase in which the molecules are arranged in an amorphous setup with a superfluid fraction larger than zero. Following the same procedure as in that work, we estimated that fraction, \(\rho_{s}/\rho\), at the equilibrium densities of both the glass and the incommensurate triangular solid. To do so, we used, as in previous literature for similar systems [14; 19], the zero-temperature winding number estimator derived in Ref. [20],
\[\frac{\rho_{s}}{\rho}=\lim_{\tau\rightarrow\infty}\alpha\left(\frac{D_{s}( \tau)}{\tau}\right)\, \tag{4}\]
with \(\tau\) the imaginary time used in the quantum Monte Carlo simulation. Here, \(\alpha=N/(4D_{0})\), \(D_{0}=\hbar^{2}/(2m)\), and \(D_{s}(\tau)=\langle[{\bf R}_{CM}(\tau)-{\bf R}_{CM}(0)]^{2}\rangle\). \({\bf R}_{CM}\) is the position of the center of mass of the \(N\) H\({}_{2}\) molecules considering only their \(x\) and \(y\) coordinates. The results are shown in Fig. 3 for the glass phase. Each symbol corresponds to an average of ten independent Monte Carlo histories for each value of the imaginary time, the straight line being a least-squares fit to those points. The error bars correspond to the statistical noise. The superfluid fraction is the slope of the curve in the limit \(\tau\rightarrow\infty\). In Fig. 3 we represent \(\alpha D_{s}(\tau)\) instead of the equivalent average of \(\alpha D_{s}(\tau)/\tau\) for each value of \(\tau\), because in that way it is easier to appreciate the superfluid fraction when its value is very small. The slope for the glass implies \(\rho_{s}/\rho\) = 0.44 \(\pm\) 0.05 %, of the same order as the result in the second layer of graphite. Increasing the number of Monte Carlo histories does not change the superfluid fraction within the error bar given for that magnitude. The corresponding curve for the incommensurate solid, not shown for simplicity, is completely flat, indicating a normal solid.
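A sketch of the corresponding fit is given below: the superfluid fraction is extracted as the asymptotic slope of \(\alpha D_{s}(\tau)\) versus \(\tau\). The value of \(\hbar^{2}/2m\) for H\({}_{2}\) used in the code is an assumed number quoted only for illustration.

```python
import numpy as np

def superfluid_fraction(tau, ds, n_h2, hbar2_over_2m=12.03, tau_min=3.0):
    """Zero-temperature estimator of Eq. (4): fit the long-tau slope of
    alpha * D_s(tau), with alpha = N / (4 D_0) and D_0 = hbar^2 / (2 m)."""
    tau = np.asarray(tau, dtype=float)
    alpha = n_h2 / (4.0 * hbar2_over_2m)
    mask = tau > tau_min                       # asymptotic region, as in Fig. 3
    slope, _ = np.polyfit(tau[mask], alpha * np.asarray(ds, dtype=float)[mask], 1)
    return slope
```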
Since the amorphous carbon layer is not flat [10], we introduced some disorder in the \(z\)-direction to assess the effects of that randomness on the calculated observables. The results for the energies show again the same two stable structures. A double-tangent Maxwell construction indicates a first-order phase transition between a glass of density 0.055 \(\pm\) 0.003 A\({}^{-2}\) and a two-dimensional incommensurate crystal with \(\rho\) = 0.0650 \(\pm\) 0.0025 A\({}^{-2}\). This means that the locus of the coexistence region is basically untouched by the introduction of disorder in \(z\). However, the change in the superfluid character of the two phases is much more relevant. The results obtained are shown in Fig. 4. This figure is similar to Fig. 3 but, instead of depicting the movement of the center of mass, it shows the full superfluid estimator as defined in Eq. 4. The values represented are \(\rho_{s}/\rho\) = 0.21 \(\pm\) 0.05 for a glass of density 0.053 A\({}^{-2}\) (upper triangles), and \(\rho_{s}/\rho\) = 0.14 \(\pm\) 0.05 for a triangular solid with \(\rho\) = 0.065 A\({}^{-2}\). Therefore, our
Figure 2: Energy per H\({}_{2}\) molecule as a function of the density for hydrogen on top of flat graphene (open symbols) and amorphous carbon (full symbols). Circles, quasi two-dimensional liquid; squares, incommensurate triangular solid; full triangles, glass structure; open triangle, \(\sqrt{3}\times\sqrt{3}\) structure on graphene. When not shown, error bars are of the size of the symbols.
results show that we should have a superglass at a density around 0.055 A\({}^{-2}\), independently of the disorder of the substrate in the \(z\) direction. Moreover, the disorder in \(z\) induces supersolidity also in the incommensurate solid phase, in contrast with the flat adsorption surface.
The finite value of the superfluid fraction in both phases means that particles do not remain isolated around the lattice points but interchanges are possible. To show how this feature is observed in the DMC simulations, we plot in Fig. 5 some snapshots for both the glass and the incommensurate crystal for the \(z\)-disordered carbon substrate. Different colors stand for different sets of walkers (particle configurations) corresponding to different Monte Carlo steps along the simulation. The spreading of every cloud is an indication of the quantum delocalization of the particles. One can see that these clouds are mainly located around the nodes of the respective lattices (glass or incommensurate), but that there are also displacements between different sites. This is the key signal for superfluidity. We also show in the same figure the \(x-y\) static structure factors for both phases. As expected, the one for the glass does not show any Bragg peak and looks rather similar to \(S(k)\) for a liquid at the same density. Instead, the triangular crystal shows a clear, though relatively small, Bragg peak due to the delocalization of the particles.
In this work, we have studied the adsorption of H\({}_{2}\) on an amorphous substrate. To do so, we have used a set of coordinates intended to adequately model an experimentally obtained amorphous two-dimensional carbon material [10]. We considered both a flat substrate and a corrugated one. Surprisingly, the results are quite similar in one important respect: there is at least a region around 0.055 A\({}^{-2}\) for which we have a stable glass. We have also found that the superfluid density of that glass can be tiny or sizable, but not zero. This result is compatible with a recent calculation for the second layer of H\({}_{2}\) on graphite [14] that found a tiny supersolid density in a very thin density region. As in that work, we can ascribe the superfluidity to the relatively low density of the glass at equilibrium. This prompts us to suggest that we can expect to find a superglass in a real disordered substrate similar to that of Ref. [10]. In the worst-case scenario, a superfluid density of the order of the one we found for the flat substrate could be detected using the perfected torsional oscillator technique used in Ref. [21] for \({}^{4}\)He on graphite.
###### Acknowledgements.
We acknowledge financial support from Ministerio de Ciencia e Innovacion MCIN/AEI/10.13039/501100011033 (Spain) under Grants No. PID2020-113565GB-C22 and No. PID2020-113565GB-C21, and from Junta de Andalucia group PAIDI-205. M.C.G. acknowledges funding from Fondo Europeo de Desarrollo Regional (FEDER) and Consejeria de Economia, Conocimiento, Empresas y Universidad de la Junta de Andalucia, en el marco del programa operativo FEDER Andalucia 2014-2020. Objetivo especifico 1.2.3. "Fomento y generacion de conocimiento frontera y de conocimiento orientado a los retos de la sociedad, desarrollo de tecnologias emergentes" under Grant No. UPO-1380159. Porcentaje de cofinanciacion FEDER 80%. J.B. acknowledges financial support from Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya, cofunded by the European Union Regional Development Fund within the ERDF Operational
Figure 3: Estimator of the superfluid density for the glass phase at its equilibrium density. Full squares, simulation results. The straight line represents a linear least-squares fit to the symbols displayed for \(\tau>3\) K\({}^{-1}\). Since the slope is different from zero, the disordered structure is a superglass.
Figure 4: Superfluid fraction for the irregular substrate for two different phases and densities. Full triangles, glass phase of density \(\rho\)= 0.053 Å\({}^{-2}\); full squares, triangular solid with \(\rho\)=0.065 Å\({}^{-2}\).
Program of Catalunya (project QuantumCat, Ref. No. 001-P-001644). We also acknowledge the use of the C3UPO computer facilities at the Universidad Pablo de Olavide.
|
2308.03534 | On the non-Transversality of the Hyperelliptic Locus and the
Supersingular Locus for $g=3$ | This paper gives a criterion for a moduli point to be a point of
non-transversal intersection of the hyperelliptic locus and the supersingular
locus in the Siegel moduli stack $\mathfrak{A}_3 \times \mathbb{F}_p$. It is
shown that for infinitely many primes $p$ there exists such a point. | Andreas Pieper | 2023-08-07T12:32:08Z | http://arxiv.org/abs/2308.03534v2 | # On the non-transversality of the hyperelliptic locus and the supersingular locus for \(g=3\)
###### Abstract.
This paper gives a criterion for a moduli point to be a point of non-transversal intersection of the hyperelliptic locus and the supersingular locus in the Siegel moduli stack \(\mathfrak{A}_{3}\times\mathbb{F}_{p}\). It is shown that for infinitely many primes \(p\) there exists such a point.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Hyperelliptic curves
* 2.2 Dieudonne modules
* 2.3 Polarized flag type quotients
* 3 Deformation theory
* 3.1 Notation
* 3.2 Dieudonne modules
* 3.3 Kodaira-Spencer map
* 3.4 Irreducible components of formal neighborhoods
* 4 Non-transversality criteria
* 4.1 General case
* 4.2 Simplifications for \(a=1\)
* 5 Examples
* 5.1 CM-examples (\(a=1\))
* 5.2 One example with \(a=3\)
## 1. Introduction
Let \(k\) be an algebraically closed field of characteristic \(p>2\). Denote by \(\mathfrak{A}_{g}\) the Siegel moduli space parametrizing \(g\)-dimensional principally polarized abelian varieties over \(k\). The study of the intersection \(\mathfrak{H}_{3}\cap\mathcal{S}_{3}\) of the locus \(\mathfrak{H}_{3}\subset\mathfrak{A}_{3}\) of Jacobians of smooth hyperelliptic curves and the supersingular locus \(\mathcal{S}_{3}\subset\mathfrak{A}_{3}\) was initiated by Oort's seminal article [13]. He showed that this intersection is equidimensional of dimension \(1\). The interest in this particular situation arises because it is one of the simplest instances of the
series of difficult questions around the intersection of Newton polygon strata and loci in \(\mathfrak{A}_{g}\) defined by Jacobians of certain curves, e.g. the Torelli locus. We recommend the recent survey article by Pries [14] and the references therein for the readers interested in this circle of ideas.
As Pries observes in her survey, for each prime \(p\) the Torelli locus and the supersingular locus intersect for infinitely many values of \(g\). But this means that the intersection is non-transversal for infinitely many \(g\) because the expected dimension of the intersection is
\[\dim(\mathcal{M}_{g})+\dim(\mathcal{S}_{g})-\dim(\mathfrak{A}_{g})=(3g-3)+\left\lfloor\frac{g^{2}}{4}\right\rfloor-\frac{g(g+1)}{2}\]
which is negative if \(g\geqslant 9\).
A natural question is: What is the smallest \(g\) for which a Newton polygon stratum intersects a locus defined via curve geometry non-transversely? The answer came as quite a surprise to the author: already the simplest open case, i.e., the hyperelliptic locus for \(g=3\), exhibits this phenomenon.
Just to clear things up, we will view \(\mathfrak{A}_{3}\) as a stack, in order to avoid complications with the quotient singularities appearing on the coarse moduli space. The reader who does not want to stick to stacks may instead assume that we add a level-\(N\) structure, \(N>2\), \(p\nmid N\), making the moduli functor representable by a scheme.
The main result of this article looks at an irreducible component \(\mathcal{W}\) of the formal neighborhood of a moduli point in \(\mathcal{S}_{3}\). Our theorem will give a necessary and sufficient condition for when \(\mathcal{W}\) meets \(\mathfrak{H}_{3}\) non-transversely.1
Footnote 1: There is one case where \(\mathcal{W}\) can be singular, which will be excluded, so that the notion of non-transversality is well-defined.
To a smooth hyperelliptic supersingular curve \(C\) and an irreducible component \(\mathcal{W}\) of the formal neighborhood of the moduli point \([\operatorname{Jac}(C)]\) in \(\mathcal{S}_{3}\) we associate a certain geometric configuration in \(\mathbb{P}^{2}\), to wit: we define a conic \(Q\), a line \(l\), and a point \(P\) on \(l\). The theorem is as follows:
**Theorem A**.: _The following are equivalent:_
1. _The component_ \(\mathcal{W}\) _meets_ \(\mathfrak{H}_{3}\) _non-transversally at_ \([\operatorname{Jac}(C)]\)_._
2. _The line_ \(l\) _touches_ \(Q\) _at the point_ \(P\)_._
To explain the origin of this geometric configuration: \(Q\) is the image of the canonical map \(C\to\mathbb{P}^{2}\) of our hyperelliptic curve. \(P\) and \(l\) will be defined via the PFTQ theory of Li and Oort [10].
In the special case where \(a=1\), the theorem simplifies as then the irreducible component \(\mathcal{W}\) will be unique. Furthermore, the point \(P\) and the line \(l\) have a convenient description in terms of the Cartier-Manin matrix of \(C\). Then Theorem A becomes:
**Theorem B**.: _Let \(C\) be a hyperelliptic supersingular curve of genus \(3\) with \(a(\operatorname{Jac}(C))=1\). Then the following are equivalent:_
1. _The supersingular locus_ \(\mathcal{S}_{3}\) _meets_ \(\mathfrak{H}_{3}\) _non-transversally at_ \([C]\)
_._
2. _There exists a point_ \(P\in C\) _such that for any hyperelliptic equation_ \[y^{2}=f(x)\] _for_ \(C\) _that satisfies_ \(x(P)=0\) _the corresponding Cartier-Manin matrix (formed by extracting suitable coefficients of_ \(f^{\frac{p-1}{2}}\)_) has the shape_ \[\begin{pmatrix}0&0&0\\ *&0&0\\ *&*&0\end{pmatrix}\]
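As an illustration of the matrix appearing in ii), the following short computation extracts the Cartier-Manin matrix of a genus-\(3\) hyperelliptic curve \(y^{2}=f(x)\) over \(\mathbb{F}_{p}\) using the classical description of its entries as the coefficients \(c_{ip-j}\), \(1\leqslant i,j\leqslant 3\), of \(f^{(p-1)/2}=\sum_{k}c_{k}x^{k}\); it is only meant as a computational aid for checking examples and plays no role in the proofs.

```python
def cartier_manin(f_coeffs, p):
    """Cartier-Manin matrix of y^2 = f(x) (genus 3, deg f = 7 or 8) over F_p.
    f_coeffs lists the coefficients of f, constant term first.
    Entry (i, j) is the coefficient of x^(i*p - j) in f^((p-1)/2) mod p."""
    h = [1]
    for _ in range((p - 1) // 2):              # expand f^((p-1)/2) mod p
        new = [0] * (len(h) + len(f_coeffs) - 1)
        for a, ca in enumerate(h):
            for b, cb in enumerate(f_coeffs):
                new[a + b] = (new[a + b] + ca * cb) % p
        h = new
    coeff = lambda k: h[k] if 0 <= k < len(h) else 0
    return [[coeff(i * p - j) for j in (1, 2, 3)] for i in (1, 2, 3)]

# e.g. the curve y^2 = x^7 - 1 over F_11:
print(cartier_manin([-1 % 11, 0, 0, 0, 0, 0, 0, 1], 11))
```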
The last stage of the article will be an example section. Examples of curves satisfying the criterion of Theorem B will be constructed as CM-reductions. The respective CM curve needs to satisfy a special condition on the CM-action that has a form similar to ii) in Theorem B.
To obtain a CM curve with this property we apply a special case of the construction of Tautz-Top-Verberkmoes [16] for curves with RM by \(\mathbb{Q}(\zeta_{l}+\zeta_{l}^{-1})\). Putting \(l=7\) this can be used to get a genus \(3\) hyperelliptic curve with CM by \(\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1},i)\).
The idea behind the choice of the field \(\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1},i)\) is as follows: The most obvious CM curve \(y^{2}=x^{7}-1\) with CM by the cyclotomic field \(\mathbb{Q}(\zeta_{7})\) almost satisfies the required lower-triangularity condition of Theorem B ii). But it falls short; in fact, the triangularity is only achieved after an illegal permutation of the rows and columns of the Cartier-Manin matrix.
For that reason the author searched for a curve with CM by a field which is not quite cyclotomic, but close. The field \(\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1},i)\) is the first option and fortunately it satisfies our needs.
**Acknowledgments:** The author thanks Irene Bouw, David Lubicz, Laurent Moret-Bailly, Mathieu Romagny, Jeroen Sijsling, and Stefan Wewers for enlightening and encouraging discussions on the subject of this article, and Rachel Pries for a helpful E-mail correspondence.
## 2. Preliminaries
### Hyperelliptic curves
In this section we recall some preliminaries around the deformation theory of curves.
Let us denote by \(\mathfrak{H}_{g}\) the moduli stack of hyperelliptic genus \(g\) curves over a field of characteristic \(\neq 2\). Furthermore we look at the Torelli morphism
\[t:\mathfrak{H}_{g}\longrightarrow\mathfrak{A}_{g},\,[C]\mapsto[(\operatorname {Jac}(C),\Theta)]\,.\]
The map induced by \(t\) on tangent spaces has the following properties:
**Lemma 2.1**.: _Let \(C\) be a hyperelliptic curve. The map \(t\) induces on tangent spaces a map_
\[T_{[C]}\mathfrak{H}_{g}\longrightarrow T_{\operatorname{Jac}(C)}\mathfrak{A} _{g}\cong S^{2}\operatorname{H}^{0}(C,\Omega_{C})^{\vee}\]
_which_
1. _is injective, and_
2. _has image equal to the orthogonal complement of the kernel of the multiplication map_
\[S^{2}\operatorname{H}^{0}(C,\Omega_{C})\longrightarrow\operatorname{H}^{0}(C, \Omega_{C}^{\otimes 2})\,.\]
Proof.: See [2, p. 223]. The authors of that book assume that the base field is \(\mathbb{C}\); however, it is not difficult to see that their proof works over any field of characteristic \(\neq 2\).
The preceding lemma and the Torelli theorem together imply that the map \(t:\mathfrak{H}_{g}\to\mathfrak{A}_{g}\) is a locally closed immersion. By abuse of notation we will identify \(\mathfrak{H}_{g}\) with its image in \(\mathfrak{A}_{g}\).
### Dieudonne modules
In [4][6] (relative) Dieudonne modules are defined. The purpose of this section is to review these definitions and the important theorems. Indeed, let \(A\) be a regular local ring essentially of finite type over \(k\). Berthelot & Messing and de Jong consider much more general rings, but these assumptions suffice for our purposes.
Now [4, Proposition 1.1.7] implies the existence of a lift of \(A\), i.e., a flat \(\mathbb{Z}_{p}\)-algebra \(\tilde{A}\) which is \(p\)-adically complete and satisfies \(\tilde{A}/p\cong A\). Furthermore, there exists a lift of Frobenius \(\sigma:\tilde{A}\longrightarrow\tilde{A}\) by [4, Corollaire 1.2.7]. Notice that \(\tilde{A}\) is unique up to isomorphism; however, \(\sigma\) is not unique. We choose \(\sigma\) once and for all.
We consider the continuous differential forms
\[\hat{\Omega}_{\tilde{A}}=\varprojlim_{n}\Omega_{A_{n}/\mathbb{Z}_{p}}\]
where \(A_{n}=\tilde{A}/p^{n}\). By [4, Proposition 1.3.1], \(\hat{\Omega}_{\tilde{A}}\) is a free \(\tilde{A}\)-module of rank \(2\). We will exhibit a basis below.
Now we make the following definition, essentially a special case of [6, Definition 2.3.4]:
**Definition 2.2**.: A _Dieudonne module over \(\tilde{A}\)_ is a finite locally free \(\tilde{A}\)-module \(M\) equipped with an integrable, topologically quasi-nilpotent connection
\[\nabla:M\to M\hat{\otimes}_{\tilde{A}}\hat{\Omega}_{\tilde{A}}\]
and \(\nabla\)-horizontal, \(\tilde{A}\)-linear maps
\[F:M\hat{\otimes}_{\sigma}\tilde{A}\longrightarrow M,\,V:M\longrightarrow M \hat{\otimes}_{\sigma}\tilde{A}\]
satisfying \(F\circ V=V\circ F=p\).
Here topologically quasi-nilpotent means: For any derivation \(\delta\in\operatorname{Hom}_{\tilde{A}}(\hat{\Omega}_{\tilde{A}},\tilde{A})\) and any \(m\in M\), there exists an \(n\in\mathbb{N}\) such that \(\nabla_{\delta}^{\circ n}(m)\in pM\).
We now recall the following theorem from [6]:
**Theorem 2.3**.: _There is an anti-equivalence of categories_
\[\mathbb{D}:\left\{p\text{-divisible groups over}\ \ \operatorname{Spec}(A) \right\}\longrightarrow\left\{\text{Dieudonne modules over }\tilde{A}\right\}\]
Proof.: See the discussion after Definition 2.3.4 and Theorem 4.1.1 in [6].
To distinguish this relative theory of Dieudonne modules from the story over \(k\), we will denote the functor
\[\left\{p\text{-divisible groups over}\quad\text{Spec}(k)\right\}\longrightarrow \left\{\text{Dieudonne modules over }W(k)\right\}\]
by \(D\) and call it the classical Dieudonne module.
### Polarized flag type quotients
In this section we recall the results from [10] we shall need. We will specialize to the case \(g=3\), although Li & Oort treat the case of general \(g\).
Choose once and for all \(E/k\), a supersingular elliptic curve defined over \(\mathbb{F}_{p}\) such that the relative Frobenius satisfies the equation \(F^{2}+p=0\). The theory of Li & Oort gives a bijection between the set of irreducible components of \(\mathcal{S}_{3}\) and polarizations \(\eta\) on \(E^{3}\) satisfying \(\ker(\eta)=E^{3}[p]\). The bijection is obtained by writing down families of certain principally polarized quotients of \((E^{3},\eta)\), so-called polarized flag-type quotients. The definition is as follows:
**Definition 2.4**.: Let \(\eta\) be as in the discussion preceding the definition. Let \(S\) be a \(k\)-scheme. A _polarized flag-type quotient_ (PFTQ for short) starting in \((E^{3},\,\eta)\) is a sequence of isogenies of abelian \(S\)-schemes
\[E^{3}\times_{k}S=\mathcal{Y}_{2}\overset{\rho_{2}}{\longrightarrow}\mathcal{Y }_{1}\overset{\rho_{1}}{\longrightarrow}\mathcal{Y}_{0}\]
such that:
* \(\ker(\rho_{i})\) is an \(\alpha\)-group scheme of rank \(i\).
* \(\eta\) descends along the \(\rho_{i}\) to polarizations \(\eta_{i}\) on \(\mathcal{Y}_{i}\) for \(i=0,1\).
* \(\ker(\eta_{1})\subset\mathcal{Y}_{1}[F]\).
Two PFTQs
\[E^{3}\times_{k}S=\mathcal{Y}_{2}\overset{\rho_{2}}{\longrightarrow}\mathcal{Y}_{1}\overset{\rho_{1}}{\longrightarrow}\mathcal{Y}_{0}\quad\text{and}\quad E^{3}\times_{k}S=\mathcal{Y}_{2}^{\prime}\overset{\rho_{2}^{\prime}}{\longrightarrow}\mathcal{Y}_{1}^{\prime}\overset{\rho_{1}^{\prime}}{\longrightarrow}\mathcal{Y}_{0}^{\prime}\]
are said to be isomorphic if there exist isomorphisms \(\varphi_{i}:\mathcal{Y}_{i}\longrightarrow\mathcal{Y}_{i}^{\prime}\) with \(i=0,1\) such that
\[\varphi_{1}\circ\rho_{2}=\rho_{2}^{\prime}\qquad\text{and}\qquad\varphi_{0}\circ\rho_{1}=\rho_{1}^{\prime}\circ\varphi_{1}\,,\]
i.e., the evident diagram commutes.
Li & Oort proved the following representability result:
**Theorem 2.5**.: _Let \(\eta\) be as above. The functor:_
\[k-\text{Sch}\rightarrow\text{Sets}\]
\[S\mapsto\left\{\text{ PFTQs starting in }(E^{3},\,\eta)\right\}/\text{iso}.\]
_is representable by a smooth projective integral \(k\)-scheme, denoted \(\mathcal{P}_{3,\eta}\)._
_More precisely \(\mathcal{P}_{3,\eta}\) admits the following description:_
1. _There is a map_ \(\pi:\mathcal{P}_{3,\eta}\longrightarrow\mathbb{P}^{2}\) _(induced from forgetting_ \(\mathcal{Y}_{0}\)_)._
2. _The image of_ \(\pi\) _is_ \(\operatorname{im}(\pi)=\mathcal{V}(X_{0}^{p+1}+X_{1}^{p+1}+X_{2}^{p+1})\)_, i.e., the standard Hermitian curve. We will denote it by_ \(\mathfrak{C}_{H}\)_._
3. _The map_ \(\pi\) _is a_ \(\mathbb{P}^{1}\)_-bundle. More precisely, there is an isomorphism_ \[\mathcal{P}_{3,\eta}\cong\mathbb{P}_{\mathfrak{C}_{H}}(\mathcal{O}(1)\oplus \mathcal{O}(-1))\]
Proof.: See [10, Lemma 3.7] for the representability. The explicit description is [10, Section 9.4].
By the Yoneda lemma there is a universal family \(\mathcal{Y}_{2}\rightarrow\mathcal{Y}_{1}\rightarrow\mathcal{Y}_{0}\) of PFTQs over \(\mathcal{P}_{3,\eta}\). The Abelian scheme \(\mathcal{Y}_{0}\) carries a principal polarization (obtained from descending \(\eta\)). We denote by
\[\varphi_{\mathrm{LO}}:\mathcal{P}_{3,\eta}\rightarrow\mathfrak{A}_{3}\]
the corresponding map to the Siegel moduli stack. Clearly the image is contained in the supersingular locus. Now Li & Oort have proven the following theorem:
**Theorem 2.6**.:
1. _The map_ \[\varphi_{LO}:\mathcal{P}_{3,\eta}\rightarrow\mathcal{S}_{3}\] _contracts a curve_ \(T\) _and is quasi-finite on the complement of_ \(T\)_._
2. _If we take the disjoint union over the (finitely many) equivalence classes of_ \(\eta\) _modulo automorphisms of_ \(E^{3}\)_, the obtained map_ \[\coprod_{\eta}\mathcal{P}_{3,\eta}\rightarrow\mathcal{S}_{3}\] _is surjective._
Proof.: The curve \(T\) is defined to be the image of the section
\[s:\mathfrak{C}_{H}\rightarrow\mathcal{P}_{3,\eta}\]
of \(\pi\) corresponding to the map
\[\operatorname{pr}_{2}:\mathcal{O}(1)\oplus\mathcal{O}(-1)\rightarrow\mathcal{ O}(-1)\,.\]
See [10, pp. 58-59] for the details. The other statements are [10, Corollary 4.1].
## 3. Deformation theory
### Notation
We are going to fix the following notation for the rest of the article: \(p>2\) is a prime number and \(k\) is an algebraically closed field of characteristic \(p\).
### Dieudonne modules
In this section we describe the variation of the crystalline cohomology in a Li-Oort family. This is a technical tool we need for computing the Kodaira-Spencer map in the next section.
Indeed, let \(\mathcal{Y}_{0}\longrightarrow\mathcal{P}_{3,\eta}\) be the principally polarized abelian scheme constructed by Li & Oort and \(\xi\in\mathcal{P}_{3,\eta}\) be a closed point. Assume that \(\xi\notin T\). Our goal is to describe the relative crystalline cohomology of \(\mathcal{Y}_{0}\) in an open neighborhood of \(\xi\). To this end, consider the stalk \(\mathcal{O}_{\mathcal{P}_{3,\eta},\xi}\) and denote it by \(A\). Choose \(\tilde{A}\), \(\sigma\), a lift of \(A\) together with a lift of Frobenius as in Section 2.2. Our goal is to describe the Dieudonne module of the Abelian scheme \(\mathcal{Y}_{0}\times_{\mathcal{P}_{3,\eta}}\operatorname{Spec}(A)\). Intuitively this means that we want to understand the variation of the crystalline cohomology in an open neighborhood of the point \(\xi\).
To start with, recall that we are given a supersingular elliptic curve \(E\) and a polarization \(\eta\) on \(E^{3}\) satisfying \(\ker(\eta)=E^{3}[p]\). Denote by \(M_{2}\) the Dieudonne module of \(E^{3}\) (in the classical sense). Then \(\eta\) induces an alternating pairing
\[\langle\cdot,\cdot\rangle:M_{2}^{t}\times M_{2}^{t}\longrightarrow W(k)\]
where \(M_{2}^{t}\) is the dual Dieudonne module.
By [10, Lemma 6.1] there exists a \(W(k)\)-basis \(m_{0},Fm_{0},m_{1},Fm_{1},m_{2},Fm_{2}\) of \(M_{2}\) such that for all \(i=0,1,2\)
\[F\cdot m_{i}=Fm_{i}\,,\qquad F\cdot Fm_{i}=-p\,m_{i}\,,\]
and such that the pairing \(\langle\cdot,\cdot\rangle\) is given by the matrix \(\begin{pmatrix}0&p\\ -p&0\end{pmatrix}^{\oplus 3}\) with respect to the basis dual to \(m_{0},\dots,Fm_{2}\).
Now consider the constant Abelian scheme \(E^{3}_{A}=E^{3}\times\operatorname{Spec}(A)\). We have
\[\mathbb{D}(E^{3}_{A})=M_{2}\hat{\otimes}_{W(k)}\tilde{A}\]
where the map \(W(k)\longrightarrow\tilde{A}\) is the unique map lifting the inclusion \(k\hookrightarrow A\) (which exists because \(k\) is perfect and \(\tilde{A}\) is \(p\)-adically complete). Denote \(\mathbb{M}_{2}=\mathbb{D}(E^{3}_{A})\). It carries a connection
\[\nabla:\mathbb{M}_{2}\longrightarrow\mathbb{M}_{2}\hat{\otimes}\hat{\Omega}_ {\tilde{A}}\,,\]
which equals the trivial connection since \(E^{3}_{A}\) is a constant family.
Before we proceed to determine the Dieudonne module of \(\mathcal{Y}_{0}\), we digress and describe a certain affine open subset of \(\mathcal{P}_{3,\eta}\). Indeed, recall that we have a \(\mathbb{P}^{1}\)-bundle
\[\pi:\mathcal{P}_{3,\eta}\rightarrow\mathfrak{C}_{H}\]
where \(\mathfrak{C}_{H}=\mathcal{V}(X_{0}^{p+1}+X_{1}^{p+1}+X_{2}^{p+1})\subset \mathbb{P}^{2}\). Let us denote by \(U_{0}\subset\mathfrak{C}_{H}\) the standard affine open where \(X_{0}\) does not vanish. After relabeling the coordinates we can assume without loss that \(\pi(\xi)\in U_{0}\).
We put \(x_{i}=\frac{X_{i}}{X_{0}}\). Now the \(\mathbb{P}^{1}\)-bundle \(\pi\) has a trivialization over \(U_{0}\) given by the standard trivialization of the vector bundle \(\mathcal{O}(1)\oplus\mathcal{O}(-1)\) on \(U_{0}\).
This determines an isomorphism
\[\pi^{-1}(U_{0})\setminus T\cong U_{0}\times\mathbb{A}^{1}\,.\]
We denote the latter affine open by \(U\subset\mathcal{P}_{3,\eta}\). By assumption, we have \(\xi\in U\). Let us denote by \(t\) the function in the coordinate ring of \(U\) corresponding to the coordinate on the second factor of \(U_{0}\times\mathbb{A}^{1}\). By localization we get three elements in \(A=\mathcal{O}_{\mathcal{P}_{3,\eta},\xi}\) which we also denote \(x_{1},x_{2},t\) by slight abuse of notation.
We get to the main lemma of this section.
**Lemma 3.1**.: _The Dieudonne module \(\mathbb{D}(\mathcal{Y}_{0}\times\mathrm{Spec}(A))\) is given by the \(\tilde{A}\)-submodule of \(\mathbb{M}_{2}\) generated by_
\[m_{0}+\tilde{\mathrm{x}}_{1}^{p}\,m_{1}+\tilde{\mathrm{x}}_{2}^{p}\,m_{2}+ \tilde{\mathrm{t}}^{p}\,Fm_{0}\,,-\tilde{\mathrm{x}}_{1}^{p}\,Fm_{0}+Fm_{1}\,, -\tilde{\mathrm{x}}_{2}^{p}\,Fm_{0}+Fm_{2}\,,p\mathbb{M}_{2}\]
_where \(\tilde{\mathrm{x}}_{1},\tilde{\mathrm{x}}_{2},\tilde{\mathrm{t}}\in\tilde{A}\) are lifts of \(x_{1},x_{2},t\) respectively._
Proof.: Let us denote by \(\mathbb{M}_{0}\) the \(\tilde{A}\)-submodule of \(\mathbb{M}_{2}\) generated by
\[m_{0}+\tilde{\mathrm{x}}_{1}^{p}\,m_{1}+\tilde{\mathrm{x}}_{2}^{p}\,m_{2}+ \tilde{\mathrm{t}}^{p}\,Fm_{0}\,,-\tilde{\mathrm{x}}_{1}^{p}\,Fm_{0}+Fm_{1} \,,-\tilde{\mathrm{x}}_{2}^{p}\,Fm_{0}+Fm_{2}\,,p\mathbb{M}_{2}\,.\]
First we notice that \(\mathbb{M}_{0}\) is clearly independent of the choice of the lifts \(\tilde{\mathrm{x}}_{1},\tilde{\mathrm{x}}_{2},\tilde{\mathrm{t}}\).
To prove that \(\mathbb{M}_{0}\) is a Dieudonne submodule, we must prove that it is finite locally free and closed under \(\nabla,F,V\). The former assertion follows easily from the short exact sequence
\[0\longrightarrow\mathbb{M}_{0}\longrightarrow\mathbb{M}_{2}\longrightarrow \mathbb{M}_{2}/\mathbb{M}_{0}\longrightarrow 0\]
and the observation that \(\mathbb{M}_{2}/\mathbb{M}_{0}\) is a free \(\tilde{A}/p\)-module with basis \(m_{1},m_{2},Fm_{0}\).
For the closedness under \(\nabla\) we compute
\[\nabla(m_{0}+\tilde{\mathrm{x}}_{1}^{p}\,m_{1}+\tilde{\mathrm{x}}_{2}^{p}\,m _{2}+\tilde{\mathrm{t}}^{p}\,Fm_{0})=p\,\tilde{\mathrm{x}}_{1}^{p-1}\,m_{1} \otimes d\,\tilde{\mathrm{x}}_{1}+p\,\tilde{\mathrm{x}}_{2}^{p-1}\,m_{2} \otimes d\,\tilde{\mathrm{x}}_{2}+p\,\tilde{\mathrm{t}}^{p-1}\,Fm_{0}\otimes d \,\tilde{\mathrm{t}}\]
which is in \(p\mathbb{M}_{2}\hat{\otimes}\hat{\Omega}_{\tilde{A}}\) and similarly for the other generators. Thus we see that the \(p\)-th powers in the definition of the generators are essential for the closedness under \(\nabla\).
The closedness under \(F,V\) is an easy verification left to the reader.
We conclude that \(\mathbb{M}_{0}\) is a Dieudonne submodule of \(\mathbb{M}_{2}\). The fact that it is equal to the image of
\[\mathbb{D}(\mathcal{Y}_{0}\times\mathrm{Spec}(A))\rightarrow\mathbb{D}(E_{A} ^{3})=\mathbb{M}_{2}\]
follows from the explicit description of the \(g=3\) Li-Oort family in [10, Section 9.4].
### Kodaira-Spencer map
Let \(\mathcal{Y}_{0}\rightarrow\mathcal{P}_{3,\eta}\) be a Li-Oort family and \(\xi\in\mathcal{P}_{3,\eta}\) be a closed point as in the previous section. The Kodaira-Spencer map
\[\kappa:T_{\xi}\mathcal{P}_{3,\eta}\longrightarrow\mathrm{Hom}^{\mathrm{sym}}\left(\mathrm{H}^{0}(\mathcal{Y}_{0,\xi},\Omega_{\mathcal{Y}_{0,\xi}}),\,\mathrm{H}^{1}(\mathcal{Y}_{0,\xi},\mathcal{O}_{\mathcal{Y}_{0,\xi}})\right)\,,\]
first introduced in the algebraic setting by Illusie [9, 2.1.5.7], may be viewed as the differential of the morphism induced by the universal property of the
Siegel moduli stack. In this section we will compute the Kodaira-Spencer map.
We begin by choosing a basis of \(T_{\xi}\mathcal{P}_{3,\eta}\). Indeed, the first basis element is \(\delta_{1}=\frac{\partial}{\partial t}\). The one-dimensional vector space generated by \(\frac{\partial}{\partial t}\) is canonically defined because it is the direction parallel to the fibers of the \(\mathbb{P}^{1}\)-bundle. The second basis element we will choose is not canonically defined as it will depend on our choice of a trivialization of this \(\mathbb{P}^{1}\)-bundle. Let us take \(\delta_{2}=-x_{2}^{p}\frac{\partial}{\partial x_{1}}+x_{1}^{p}\frac{\partial}{\partial x_{2}}\). The reader will have no difficulty seeing that this formula defines a derivation on the affine open \(U\subset\mathcal{P}_{3,\eta}\) that does not vanish at \(\xi\).
We are now ready to compute the Kodaira-Spencer map. Indeed, consider the \(A\)-module with connection \(\mathcal{H}^{1}_{\mathrm{dR}}=(\mathbb{M}_{0}/p,\nabla)\). Then \(\mathcal{H}^{1}_{\mathrm{dR}}\) is isomorphic to the relative de Rham cohomology of \(\mathcal{Y}_{0}\times_{\mathcal{P}_{3,\eta}}\operatorname{Spec}(A)\) over \(\operatorname{Spec}(A)\).
Therefore, it only remains to identify the submodule of differentials. For that purpose we look at the exact sequence
\[\mathcal{H}^{1}_{\mathrm{dR}}\overset{v}{\longrightarrow}(\mathcal{H}^{1}_{ \mathrm{dR}})^{(p)}\overset{f}{\longrightarrow}\mathcal{H}^{1}_{\mathrm{dR}}\,,\]
where \((\mathcal{H}^{1}_{\mathrm{dR}})^{(p)}=(\mathcal{H}^{1}_{\mathrm{dR}})\otimes_{A,\,a\mapsto a^{p}}A\) denotes the base change along the Frobenius of \(A\), and \(f\) (resp. \(v\)) is the map induced by \(F\) (resp. \(V\)). We will use the following Lemma from de Jong [6]:
**Lemma 3.2**.: _There is a locally free, locally direct summand \(\omega\subset\mathcal{H}^{1}_{dR}\) such that \(\omega^{(p)}\subset(\mathcal{H}^{1}_{dR})^{(p)}\) is equal to \(\operatorname{im}(v)=\ker(f)\). It is uniquely determined by these conditions._
_Furthermore \(\omega=\operatorname{H}^{0}(\mathcal{Y}_{0}\times_{\mathcal{P}_{3,\eta}} \operatorname{Spec}(A),\Omega_{\mathcal{Y}_{0}\times_{\mathcal{P}_{3,\eta}} \operatorname{Spec}(A)})\)._
Proof.: The first assertion is [6, Proposition 2.5.2]. The second assertion follows from [3, Proposition 4.3.10].
In our case, \(\omega\) is generated by the four classes
\[Fm_{0}+\tilde{\mathrm{x}}_{1}\ Fm_{1}+\tilde{\mathrm{x}}_{2}\ Fm_{2}-p\, \tilde{\mathrm{t}}\,m_{0},\,p\,\tilde{\mathrm{x}}_{1}\ m_{0}-p\,m_{1},\,p\, \tilde{\mathrm{x}}_{2}\ m_{0}-pm_{2},\,p\,Fm_{0}\]
We will now assume that \(x_{2}\) does not vanish at \(\xi\), which we may do without loss of generality after switching \(x_{1},x_{2}\) if necessary. With this assumption it is easy to see that \(p\,\tilde{\mathrm{x}}_{2}\ m_{0}-pm_{2}\) is a linear combination of the other classes in \(\omega\). Therefore,
\[g_{1}=Fm_{0}+\tilde{\mathrm{x}}_{1}\ Fm_{1}+\tilde{\mathrm{x}}_{2}\ Fm_{2}-p \,\tilde{\mathrm{t}}\,m_{0},\,g_{2}=p\,\tilde{\mathrm{x}}_{1}\ m_{0}-p\,m_{1}, \,g_{3}=p\,Fm_{0}\]
is a basis for \(\omega\) as an \(A\)-module.
Similarly we get the basis
\[c_{1}=pm_{0}\,,c_{2}=\tilde{\mathrm{x}}_{1}^{p}\ Fm_{0}-Fm_{1},\,c_{3}=m_{0}+ \tilde{\mathrm{x}}_{1}^{p}\,m_{1}+\tilde{\mathrm{x}}_{2}^{p}\,m_{2}+\tilde{ \mathrm{t}}\,Fm_{0}\]
for \((\mathcal{H}^{1}_{\mathrm{dR}})/\omega\). The Kodaira-Spencer map is now calculated as follows:
**Lemma 3.3**.: \[\kappa(\delta_{1})(g_{1})=-pm_{0}=-c_{1},\,\kappa(\delta_{1})(g_{2})=\kappa( \delta_{1})(g_{3})=0\]
\[\kappa(\delta_{2})(g_{1})=\tilde{\mathrm{x}}_{2}^{p}\ Fm_{1}-\tilde{\mathrm{ x}}_{1}^{p}\ Fm_{2}\in\operatorname{span}(c_{1},c_{2}),\,\kappa(\delta_{2})(g_{2})= \tilde{\mathrm{x}}_{2}^{p}\,pm_{0},\,\kappa(\delta_{2})(g_{3})=0\]
Proof.: The Kodaira-Spencer map
\[\kappa:T_{\xi}\mathcal{P}_{3,\eta}\longrightarrow\operatorname{Hom}^{\operatorname{ sym}}\left(\operatorname{H}^{0}(\mathcal{Y}_{0,\xi},\Omega_{\mathcal{Y}_{0,\xi}}) \longrightarrow\operatorname{H}^{1}(\mathcal{Y}_{0,\xi},\mathcal{O}_{ \mathcal{Y}_{0,\xi}})\right)\]
is computed as follows: For a derivation \(\delta\) we look at the map
\[\omega\to\mathcal{H}^{1}_{\operatorname{dR}}/\omega\]
induced from \(\nabla_{\delta}\) and restrict it to the fiber at \(\xi\). This gives the desired map
\[\kappa(\delta)\in\operatorname{Hom}^{\operatorname{sym}}\left(\operatorname{H }^{0}(\mathcal{Y}_{0,\xi},\Omega_{\mathcal{Y}_{0,\xi}})\longrightarrow \operatorname{H}^{1}(\mathcal{Y}_{0,\xi},\mathcal{O}_{\mathcal{Y}_{0,\xi}}) \right)\,.\]
The calculation is elementary and left to the reader.
The assertion that \(\tilde{\operatorname{x}}_{2}^{p}\ Fm_{1}-\tilde{\operatorname{x}}_{1}^{p}\ Fm_{2}\in \operatorname{span}(c_{1},c_{2})\) also requires a small verification.
**Corollary 3.4**.: _The map_
\[\varphi_{LO}:\mathcal{P}_{3,\eta}\to\mathcal{S}_{3}\]
_is unramified on the open set \(\mathcal{P}_{3,\eta}\setminus T\)._
Proof.: The differential of \(\varphi_{\operatorname{LO}}\) at the point \(\xi\) is given by the Kodaira-Spencer map which we computed in the previous lemma. The point \(\xi\) was assumed to be in \(U\) and satisfied \(x_{2}\neq 0\). It is not hard to verify that the two Kodaira-Spencer classes are linearly independent.
This implies that \(\varphi_{\operatorname{LO}}\) is unramified on the open set \(U\cap\{x_{2}\neq 0\}\) and similarly for the open sets we obtain by permuting coordinates. Since these open sets cover \(\mathcal{P}_{3,\eta}\setminus T\), the corollary follows.
### Irreducible components of formal neighborhoods
Let \(Y\) be a supersingular p.p.a.v. In this section we study the formal completion of \(\mathcal{S}_{3}\) at the moduli point \([Y]\). Let us denote this formal completion by \(\hat{\mathcal{S}}_{3,[Y]}\). We show that every irreducible component of \(\hat{\mathcal{S}}_{3,[Y]}\) is smooth with one exception when \(a(Y)=3\). All the results in this section are well-known to the experts, but we include them here as the author could not find proofs in the literature.
**Lemma 3.5**.: _Let \(Y\) be as above. There is a bijection_
\[\left\{\begin{aligned} &\text{Irreducible components}\\ &\text{of $\hat{\mathcal{S}}_{3,[Y]}$}\end{aligned}\right\} \longleftrightarrow\left\{\begin{aligned} &\text{Isogenies $E^{3}\longrightarrow Y$ that}\\ &\text{admit a PFTQ structure}\end{aligned}\right\}_{/\sim}\]
_where the equivalence relation \(\sim\) is defined as follows: \(\pi_{1},\pi_{2}:E^{3}\longrightarrow Y\) are called equivalent if there exists an automorphism \(\varphi\) of \(E^{3}\) such that \(\pi_{2}\circ\varphi=\pi_{1}\), i.e., the evident triangle commutes._
Proof.: We define a map
\[f:\left\{\begin{aligned} &\text{Isogenies $E^{3}\longrightarrow Y$ that}\\ &\text{admit a PFTQ structure}\end{aligned}\right\}_{/\sim} \longrightarrow\left\{\begin{aligned} &\text{Irreducible components}\\ &\text{of $\hat{\mathcal{S}}_{3,[Y]}$}\end{aligned}\right\}\]
as follows: Let \(\pi:E^{3}\longrightarrow Y\) be an isogeny that admits a PFTQ structure. Denote by \(\eta\) the pullback of the principal polarization on \(Y\) via \(\pi\). Then, by assumption, there exists a PFTQ starting in \((E^{3},\eta)\)
\[E^{3}\longrightarrow Y_{1}\longrightarrow Y_{0}=Y\]
that composes to \(\pi\). As a consequence of [10, Equation 9.4.11] and the properties of the notion of rigid PFTQs in (loc. cit.) this PFTQ is unique unless \(\ker(\pi)=E^{3}[F]\). If \(\ker(\pi)\neq E^{3}[F]\), one gets a unique point \(\xi\in\mathcal{P}_{3,\eta}\) such that \(\varphi_{\text{LO}}(\xi)=[Y]\) under the map
\[\varphi_{\text{LO}}:\mathcal{P}_{3,\eta}\longrightarrow\mathcal{S}_{3}\]
We define \(f(E^{3}\longrightarrow Y)\) to be the irreducible component of \(\hat{\mathcal{S}}_{3,[Y]}\) given by the image of the map induced by \(\varphi_{\text{LO}}\) on the formal completion at \(\xi\).
The latter case \(\pi=F\) can only happen if \(a(Y)=3\). In this case the map
\[\varphi_{\text{LO}}:\mathcal{P}_{3,\eta}\longrightarrow\mathcal{S}_{3}\]
contracts the curve \(T\) to the point \([Y]\). We define \(\Xi\) to be the irreducible component of \(\hat{\mathcal{S}}_{3,[Y]}\) given by the image of the map induced by \(\varphi_{\text{LO}}\) on the formal completion along \(T\). Furthermore we put \(f(E^{3}\to Y)=\Xi\). It is not hard to verify that the map \(f\) is well-defined.
We have to show that \(f\) is bijective. For that purpose we equip \(\mathcal{S}_{3}\) with a level \(N\) structure for some \(N\geqslant 3,\,p\nmid N\) and denote the corresponding moduli space by \(\mathcal{S}_{3,N}\). Then, \(\mathcal{S}_{3,N}\) is a closed subscheme of the Siegel moduli scheme \(\mathcal{A}_{g,N}\) (equipped with a level \(N\) structure). Furthermore, one can show that the Li-Oort construction can also be equipped with a level structure yielding a map
\[\varphi:\coprod\mathcal{P}_{3,\eta}\rightarrow\mathcal{S}_{3,N}\]
with the same properties as in Theorem 2.6. The disjoint union is taken over the finite set \(\Lambda_{N}\) of all pairs \((l_{N},\eta)\) where \(l_{N}\) is a level \(N\) structure on \(E^{3}\) and \(\eta\) is a polarization on \(E^{3}\) satisfying \(\ker(\eta)=E^{3}[p]\) (taken modulo automorphisms of \(E^{3}\)).
Furthermore \(\varphi\) is birational by [10, Remark 6.4]. Then, since \(\mathcal{S}_{3,N}\longrightarrow\mathcal{S}_{3}\) is etale and \(k\) is algebraically closed, it induces an isomorphism on completions. We choose \(l_{N}\), a level \(N\) structure on \(Y\), and it remains to prove the claim about the irreducible components for the completion of \(\mathcal{S}_{3,N}\) at \((Y,l_{N})\). Indeed, assume first that \(a(Y)=3\). Then there is an isomorphism \(Y\cong E^{3}\) as abelian varieties and the composition \(\pi:E^{3}\overset{F}{\longrightarrow}E^{3}\cong Y\) admits the structure of a PFTQ. This leads to the irreducible component \(\Xi\) defined above. As that component is problematic, we remove \(\Xi\) and prove that \(f\) restricts to a bijection between the remaining sets.
To this end we consider \(l^{\prime}_{N}\) the level \(N\) structure on \(E^{3}\) obtained from \(l_{N}\) by pulling back along the isomorphism
\[E^{3}[N]\longrightarrow Y[N]\]
induced by \(\pi\). Furthermore denote by \(\eta\) the polarization on \(E^{3}\) defined as the pullback of the principal polarization on \(Y\) via \(\pi\). Consider the restricted map
\[\coprod_{\Lambda_{N}\setminus\{(l^{\prime}_{N},\eta)\}}\mathcal{P}_{3,\eta} \rightarrow\mathcal{S}_{3,N}\]
and denote its image by \(X\). The closed subscheme \(X\subset\mathcal{S}_{3,N}\) has one irreducible component less than \(\mathcal{S}_{3,N}\). As a last bit of notation we put \(\Lambda^{\prime}_{N}=\Lambda_{N}\setminus\{(l^{\prime}_{N},\eta)\}\).
If instead \(a(Y)\leqslant 2\), then we do not have to remove an irreducible component and we put \(\Lambda^{\prime}_{N}=\Lambda_{N}\) and \(X=\mathcal{S}_{3,N}\).
In any case we have a map, also denoted \(\varphi\) by slight abuse of notation
\[\varphi:\coprod_{\Lambda^{\prime}_{N}}\mathcal{P}_{3,\eta}\to X\,.\]
The map \(\varphi\) is birational and proper (because \(\mathcal{P}_{3,\eta}\) is projective). Now we choose \(U\subset X\), a Zariski open neighborhood of \((Y,l_{N})\) small enough so that \(U\) does not contain any points with \(a=3\) except maybe \((Y,l_{N})\) itself. Then we form the cartesian diagram in which \(U^{\prime}=\varphi^{-1}(U)\) and \(\varphi^{\prime}:U^{\prime}\to U\) is the induced map.
The map \(\varphi^{\prime}\) is proper. It is also quasi-finite by Theorem 2.6 because \(U\) does not contain any points with \(a=3\) except maybe \((Y,l_{N})\). Indeed, by construction we removed the only component of \(\coprod_{\Lambda_{N}}\mathcal{P}_{3,\eta}\) containing the unique positive dimensional component of the fiber \(\varphi^{-1}(Y,l_{N})\). As a consequence we conclude that \(\varphi^{\prime}\) is proper, quasi-finite and thus finite.
On the other hand, \(U^{\prime}\) is an open subscheme of the smooth \(k\)-scheme \(\coprod_{\Lambda^{\prime}_{N}}\mathcal{P}_{3,\eta}\) and in particular \(U^{\prime}\) is normal. In summary, we have that \(\varphi^{\prime}\) is a finite birational map with normal source. By [15, Lemma 035Q, (3)] this implies that \(\varphi^{\prime}\) is the normalization of \(U\).
Now since \(U\) is excellent, [15, Lemma 0C23] implies that normalization commutes with completion. Finally [15, Lemma 035Q, (2)] gives the desired bijection for the irreducible components of the completion.
**Corollary 3.6**.: _In the notation as above._
* _If_ \(a(Y)\leqslant 2\)_, then all the irreducible components of_ \(\hat{\mathcal{S}}_{3,[Y]}\) _are smooth._
* _If_ \(a(Y)=3\)_, then all the irreducible components of_ \(\hat{\mathcal{S}}_{3,[Y]}\) _are smooth, except one._
Proof.: Let \(\mathcal{W}\) be an irreducible component of \(\hat{\mathcal{S}}_{3,[Y]}\). Assume that either \(a(Y)\leqslant 2\) or \(a(Y)=3\) and \(\mathcal{W}\) is not the singular component discussed in the proof of the previous lemma.
Then we know that there exists a polarization \(\eta\) on \(E^{3}\) satisfying \(\ker(\eta)=E^{3}[p]\) and a point \(\xi\in\mathcal{P}_{3,\eta}\) such that the formal completion \(\hat{\mathcal{P}}_{3,\eta,\xi}\) dominates \(\mathcal{W}\) via the map
\[\varphi_{LO}:\mathcal{P}_{3,\eta}\longrightarrow\mathcal{S}_{3}\,.\]
Let us denote \(\hat{\mathcal{P}}_{3,\eta,\xi}\) by \(\mathcal{W}^{\prime}\) and the induced map \(\mathcal{W}^{\prime}\to\mathcal{W}\) by \(\varphi\).
Then, by our assumptions on \(\mathcal{W}\), we have \(\xi\notin T\) and thus \(\varphi\) is unramified (Corollary 3.4). Furthermore \(\mathcal{W}\) is integral and geometrically unibranched being an irreducible component of the spectrum of a reduced complete local algebra over an algebraically closed field. Therefore, [8, Theoreme 18.10.1] implies that \(\varphi\) is etale. Since \(\mathcal{W}^{\prime}\) is smooth, it follows that \(\mathcal{W}\) is smooth as well.
## 4. Non-transversality criteria
### General case
Let \(C\) be a hyperelliptic supersingular curve of genus \(3\). Let \(\mathcal{W}\) be an irreducible component of the completion of \(\mathcal{S}_{3}\) at the moduli point \([C]\). Assume that \(\mathcal{W}\) is smooth at \([C]\). In this section we derive a necessary and sufficient criterion for the non-transversality of the intersection of \(\mathcal{W}\) and \(\mathfrak{H}_{3}\) at \([C]\). We first describe the criterion geometrically. Later we specialize to the case \(a=1\), where the criterion becomes equivalent to a condition on the Cartier-Manin matrix.
We begin with a lemma on PFTQs:
**Lemma 4.1**.: _Let \(E^{3}\longrightarrow Y_{0}\) be a PFTQ (we omit \(Y_{1}\) from the notation). Then_
* _Multiplication by_ \(p\) _on_ \(E^{3}\) _factors through_ \(E^{3}\to Y_{0}\)_. Denote by_ \[\pi:Y_{0}\longrightarrow E^{3}\] _the resulting map._
* _The images of the two maps_ \[\pi^{*}:\mathrm{H}^{0}(E^{3},\Omega_{E^{3}})\longrightarrow\mathrm{H}^{0}(Y_ {0},\Omega_{Y_{0}})\] \[\pi^{*}:\mathrm{H}^{1}(E^{3},\mathcal{O}_{E^{3}})\longrightarrow\mathrm{H}^{1}( Y_{0},\mathcal{O}_{Y_{0}})\] _are both_ \(1\)_-dimensional, unless_ \(\pi=F\)_. (In the latter case they are both_ \(0\)_.)_
* _Furthermore, the two images in ii) are orthogonal with respect to the natural pairing induced by the principal polarization on_ \(Y_{0}\)_._
Proof.: The first assertion follows trivially from the fact that the polarization \(\eta\) in Definition 2.4 satisfies \(\ker(\eta)=E^{3}[p]\). For assertion ii) we use the explicit description of the Dieudonne modules. Recall that \(M_{2}\) was the Dieudonne module of \(E^{3}\) with the additional structure as in Section 2.2. We will
denote by \(M_{0}\) the Dieudonne module of \(Y_{0}\) viewed as a submodule of \(M_{2}\) via the map induced by \(E^{3}\to Y_{0}\).
Then the map \(Y_{0}\to E^{3}\) induces on Dieudonne modules the inclusion \(pM_{2}\subset M_{0}\). The map
\[\pi^{*}:\operatorname{H}^{0}(E^{3},\Omega_{E^{3}})\longrightarrow\operatorname {H}^{0}(Y_{0},\Omega_{Y_{0}})\]
is identified with
\[\frac{V(pM_{2})}{p(pM_{2})}\to\frac{VM_{0}}{pM_{0}}\]
under the isomorphism from [12, Corollary 5.11]. From the explicit description of the Dieudonne modules in Section 2.2 (which uses the assumption that \(\pi\neq F\)) one readily sees that the image is one-dimensional with basis \(g_{3}=pFm_{0}\). Similarly the map
\[\pi^{*}:\operatorname{H}^{1}(E^{3},\mathcal{O}_{E^{3}})\longrightarrow \operatorname{H}^{1}(Y_{0},\mathcal{O}_{Y_{0}})\]
equals the map
\[\frac{(pM_{2})}{V(pM_{2})}\to\frac{M_{0}}{VM_{0}}\,.\]
whose image has the basis \(c_{1}=pm_{0}\). This shows ii).
For iii) we need to make the pairing between \(\operatorname{H}^{0}(Y_{0},\Omega_{Y_{0}})\) and \(\operatorname{H}^{1}(Y_{0},\mathcal{O}_{Y_{0}})\) explicit. Indeed, the polarization \(\eta\) induces an alternating non-degenerate pairing
\[M_{2}^{t}\times M_{2}^{t}\longrightarrow W(k)\,.\]
This gives an isomorphism \(M_{2}^{t}\left[\frac{1}{p}\right]\stackrel{{\sim}}{{\longrightarrow }}M_{2}\left[\frac{1}{p}\right]\). Therefore, we get a pairing
\[M_{2}\times M_{2}\longrightarrow W(k)\left[\frac{1}{p}\right]\,.\]
The restriction to \(M_{0}\) has again values in \(W(k)\) because \(\eta\) descends to a principal polarization on \(Y_{0}\). Thus we get a pairing
\[\langle\cdot,\cdot\rangle:M_{0}\times M_{0}\longrightarrow W(k)\,.\]
From this and the explicit description of the pairing \(\langle\cdot,\cdot\rangle\) in Section 2.2 one computes \(\langle pm_{0},pFm_{0}\rangle=p\). Claim iii) follows.
Now let \(C\) be as at the beginning of this section. The choice of an irreducible component \(\mathcal{W}\) of the completion of \(\mathcal{S}_{3}\) at the moduli point \([C]\) is equivalent to the choice of a PFTQ ending in \(\operatorname{Jac}(C)\) by Lemma 3.5. This gives us a map
\[\pi:\operatorname{Jac}(C)\longrightarrow E^{3}\]
as in Lemma 4.1. Recall that we assumed that \(\mathcal{W}\) was non-singular at \([C]\), which by Corollary 3.6 is equivalent to \(\pi\neq F\).
**Definition 4.2**.: We define the following filtration on \(V=\operatorname{H}^{0}(C,\Omega_{C})\):
* \(V_{0}=V\).
* \(V_{1}\) is defined to be the orthogonal complement of \[\operatorname{im}\bigl{(}\pi^{*}:\operatorname{H}^{1}(E^{3},\mathcal{O}_{E^{3}}) \longrightarrow\operatorname{H}^{1}(\operatorname{Jac}(C),\mathcal{O}_{ \operatorname{Jac}(C)})\cong\operatorname{H}^{1}(C,\mathcal{O}_{C})\bigr{)}\] with respect to the Serre duality pairing.
* \(V_{2}\) is defined to be \[V_{2}=\operatorname{im}\bigl{(}\pi^{*}:\operatorname{H}^{0}(E^{3},\Omega_{E^{3} })\longrightarrow\operatorname{H}^{0}(\operatorname{Jac}(C),\Omega_{ \operatorname{Jac}(C)})\cong\operatorname{H}^{0}(C,\Omega_{C})\bigr{)}\]
Notice that Lemma 4.1 implies that \(V_{2}\subset V_{1}\) and \(\dim(V_{i})=3-i,\,i=0,1,2\).
Consider now the canonical map \(\varphi_{\operatorname{can}}:C\longrightarrow\mathbb{P}(\operatorname{H}^{0 }(C,\Omega_{C}))\cong\mathbb{P}^{2}\). The image of \(\varphi_{\operatorname{can}}\) is a conic.
On the other hand, the filtration \(V_{2}\subset V_{1}\subset V_{0}=\operatorname{H}^{0}(C,\Omega_{C})\) defines a point \(P\) and a line \(l\) containing \(P\) in \(\mathbb{P}(\operatorname{H}^{0}(C,\Omega_{C}))\); see Figure 1.
We are now ready to state Theorem A from the introduction. Let \(\mathcal{W}\) be as above. By Lemma 3.5 there is a PFTQ \(\pi:E^{3}\to\operatorname{Jac}(C)\) corresponding to \(\mathcal{W}\). Furthermore the smoothness assumption on \(\mathcal{W}\) is equivalent to saying that \(\pi\neq F\). We can thus define the line \(l\) and the point \(P\) via the previous definition.
**Theorem 4.3**.: _The following are equivalent:_
* _The component_ \(\mathcal{W}\) _meets_ \(\mathfrak{H}_{3}\) _non-transversally at_ \([C]\)_._
* _The line_ \(l\) _touches_ \(\operatorname{im}(\varphi_{\operatorname{can}})\) _at the point_ \(P\)_._
Proof.: The tangent space of \(\mathcal{W}\) at the point \([C]\) is two-dimensional with basis given by the two Kodaira-Spencer classes \(\kappa(\delta_{1}),\kappa(\delta_{2})\) as in Lemma 3.3. We can interpret \(\kappa(\delta_{1}),\kappa(\delta_{2})\) as elements in \(S^{2}\operatorname{H}^{0}(C,\Omega_{C})^{\vee}\). By Lemma 2.1, the component \(\mathcal{W}\) meets \(\mathfrak{H}_{3}\) non-transversally at \([C]\) if and only if \(\kappa(\delta_{1}),\kappa(\delta_{2})\) kill the kernel of
\[S^{2}\operatorname{H}^{0}(C,\Omega_{C})\longrightarrow\operatorname{H}^{0}(C,\Omega_{C}^{\otimes 2})\,.\]
Since \(g=3\) this kernel is \(1\)-dimensional and generated by the quadratic form cutting out the conic \(\operatorname{im}(\varphi_{\operatorname{can}})\).
Thus we need to show that the orthogonality of this quadratic form to the Kodaira-Spencer classes \(\kappa(\delta_{1}),\kappa(\delta_{2})\) has the desired geometric interpretation.
Indeed, recall that we computed in Lemma 3.3
\[\kappa(\delta_{1})(g_{1})=-pm_{0}=-c_{1},\,\kappa(\delta_{1})(g_{2})=\kappa( \delta_{1})(g_{3})=0\]
\[\kappa(\delta_{2})(g_{1})\in\operatorname{span}(c_{1},c_{2}),\,\kappa(\delta_{2})(g_{2})=\tilde{\operatorname{x}}_{2}^{p}\,pm_{0},\,\kappa(\delta_{2})(g_{3})=0\]
under the additional assumption that \(x_{2}\) does not vanish at \(\xi\), which we may assume without loss of generality.
Figure 1. Generic picture
We will now compute the Serre duality pairing
\[\langle\cdot,\cdot\rangle:\operatorname{H}^{0}(C,\Omega_{C})\times\operatorname{ H}^{1}(C,\mathcal{O}_{C})\longrightarrow k\]
on some of the given basis elements \(g_{1},g_{2},g_{3}\) resp. \(c_{1},c_{2},c_{3}\). Indeed this pairing equals the pairing induced by the principal polarization on \(Y_{0}\cong\operatorname{Jac}(C)\). This has the explicit description given in the proof of Lemma 4.1.
Thus we can compute
\[\langle g_{2},c_{1}\rangle=\langle g_{3},c_{1}\rangle=\langle g_{3},c_{2} \rangle=0\,.\]
This implies that \(\operatorname{span}(c_{1})^{\perp}=\operatorname{span}(g_{2},g_{3})\). But that means that \(\kappa(\delta_{1})\) is the unique map (up to scalar)
\[\operatorname{H}^{0}(C,\Omega_{C})\longrightarrow\operatorname{H}^{1}(C, \mathcal{O}_{C})\]
that kills the two-dimensional vector space \(\operatorname{span}(c_{1})^{\perp}\) and has image \(\operatorname{span}(c_{1})\).
It is not difficult to see that this means that the quadratic forms in \(S^{2}\operatorname{H}^{0}(C,\Omega_{C})\) orthogonal to \(\kappa(\delta_{1})\) are exactly those vanishing at the point in
\[\mathbb{P}(\operatorname{H}^{0}(C,\Omega_{C}))\]
corresponding to \(\operatorname{span}(c_{1})\). But that point is equal to \(P\) by its definition. Therefore, we conclude that \(\kappa(\delta_{1})\) represents a tangent direction to the hyperelliptic locus if and only if \(P\in\operatorname{im}(\varphi_{\operatorname{can}})\).
Next, we analyze \(\kappa(\delta_{2})\). Indeed, we showed above
\[\operatorname{span}(c_{1},c_{2})^{\perp}=\operatorname{span}(g_{3})\,.\]
Therefore
\[\operatorname{im}(\kappa\left(\delta_{2}\right))\subseteq\operatorname{span} (c_{1},c_{2})\,\]
\[\kappa\left(\delta_{2}\right)\Bigl{(}\operatorname{span}(c_{1})^{\perp} \Bigr{)}\subseteq\operatorname{span}(c_{1})\,\]
\[\kappa\left(\delta_{2}\right)\Bigl{(}\operatorname{span}(c_{1},c_{2})^{\perp} \Bigr{)}=0\,.\]
Let us denote by \(\Upsilon\subset\operatorname{Hom}^{\operatorname{sym}}(\operatorname{H}^{0}( C,\Omega_{C})\longrightarrow\operatorname{H}^{1}(C,\mathcal{O}_{C}))\) the linear subspace of maps satisfying these conditions. One has \(\dim(\Upsilon)=2\) because \(\Upsilon\) is equal to the vector space of maps whose matrix is of the form
\[X=\begin{pmatrix}*&*&0\\ *&0&0\\ 0&0&0\end{pmatrix},\,X=X^{T}\]
with respect to the basis \(c_{1},c_{2},c_{3}\) on \(\operatorname{H}^{1}(C,\mathcal{O}_{C})\) and its dual basis on \(\operatorname{H}^{0}(C,\Omega_{C})\).
But we also have \(\kappa(\delta_{1})\in\Upsilon\). Since \(\kappa(\delta_{1}),\kappa(\delta_{2})\) are linearly independent, we conclude that \(\Upsilon=\operatorname{span}(\kappa(\delta_{1}),\kappa(\delta_{2}))\).
Now let \(Q\in S^{2}\operatorname{H}^{0}(C,\Omega_{C})\) be arbitrary. Then \(Q\) is contained in the orthogonal complement of \(\Upsilon\) if and only if \(\mathcal{V}(Q)\) is a conic touching the line \(\mathcal{V}(g_{3})\) at the point \(P\). (This can be seen from the matrix representation above.) But that line equals \(l\) by definition.
Thus we may conclude: The component \(\mathcal{W}\) meets \(\mathfrak{H}_{3}\) non-transversally at \([C]\) if and only if the line \(l\) touches \(\operatorname{im}(\varphi_{\operatorname{can}})\) at the point \(P\). The theorem follows.
### Simplifications for \(a=1\)
We will now specialize to the case \(a=1\). By [10, Remark on p. 38] this is equivalent to saying that there is a unique component of \(\mathcal{S}_{3}\) passing through our moduli point.
Our first lemma will describe the filtration \(V_{2}\subset V_{1}\subset V=\operatorname{H}^{0}(C,\Omega_{C})\) in terms of the Cartier operator
\[\mathcal{C}:\operatorname{H}^{0}(C,\Omega_{C})\longrightarrow\operatorname{H }^{0}(C,\Omega_{C})\]
as follows:
**Lemma 4.4**.: _Assume further that \(a=1\). Let \(E^{3}\longrightarrow\operatorname{Jac}(C)\) be the unique PFTQ ending at \(\operatorname{Jac}(C)\). Let further \(V_{2}\subset V_{1}\subset V=\operatorname{H}^{0}(C,\Omega_{C})\) be as in Definition 4.2._
_Then \(V_{2}=\ker(\mathcal{C})\) and \(V_{1}=\ker\bigl{(}\mathcal{C}^{2}\bigr{)}\)._
Proof.: The assumption \(a=1\) implies that \(\dim(\ker(\mathcal{C}))=1\). Therefore, it suffices to show that \(V_{2}\subseteq\ker(\mathcal{C})\). This follows from
\[V_{2}=\operatorname{im}\bigl{(}\pi^{*}:\operatorname{H}^{0}(E^{3},\Omega_{E^{ 3}})\longrightarrow\operatorname{H}^{0}(\operatorname{Jac}(C),\Omega_{ \operatorname{Jac}(C)})\cong\operatorname{H}^{0}(C,\Omega_{C})\bigr{)}\]
and the naturality of the Cartier operator.
To show that \(V_{1}=\ker\bigl{(}\mathcal{C}^{2}\bigr{)}\) it suffices again to prove \(V_{1}\subseteq\ker(\mathcal{C}^{2})\). Recall that \(V_{1}\) is defined to be the orthogonal complement of
\[\operatorname{im}\bigl{(}\pi^{*}:\operatorname{H}^{1}(E^{3},\mathcal{O}_{E^{3 }})\longrightarrow\operatorname{H}^{1}(\operatorname{Jac}(C),\mathcal{O}_{ \operatorname{Jac}(C)})\cong\operatorname{H}^{1}(C,\mathcal{O}_{C})\bigr{)}\]
with respect to the Serre duality pairing. But since \(F\) vanishes on this image, the identity
\[\langle Fx,y\rangle=\langle x,\mathcal{C}(y)\rangle^{p}\]
shows that \(V_{1}\) is closed under \(\mathcal{C}\). Now \(\mathcal{C}\) is nilpotent as a consequence of the supersingularity of \(C\). Therefore,
\[V_{1}\subseteq\ker\Bigl{(}\mathcal{C}^{\dim(V_{1})}\Bigr{)}=\ker\bigl{(} \mathcal{C}^{2}\bigr{)}\.\]
This proves the lemma.
We can now state a reformulation of Theorem 4.3 in the special case \(a=1\) leading to Theorem B from the introduction:
**Theorem 4.5**.: _Let \(C\) be a hyperelliptic supersingular curve of genus \(3\) with \(a(\operatorname{Jac}(C))=1\). Denote by \(\tau:C\longrightarrow C\) the hyperelliptic involution. Then the following are equivalent:_
* _The supersingular locus_ \(\mathcal{S}_{3}\) _meets_ \(\mathfrak{H}_{3}\) _non-transversally at_ \([C]\)_._
* _There exists a point_ \(P\in C\) _such that the filtration_ \(V_{2}\subset V_{1}\subset V_{0}=\operatorname{H}^{0}(C,\Omega_{C})\) _of Lemma_ 4.4 _agrees with the filtration_ \[\operatorname{H}^{0}\left(C,\Omega_{C}(-2P-2\tau(P))\right)\subset\operatorname{H}^{0}\left(C,\Omega_{C}(-P-\tau(P))\right)\subset\operatorname{H}^{0}\left(C,\Omega_{C}\right)\]
3. _There exists a point_ \(P\in C\) _such that for any hyperelliptic equation_ \[y^{2}=f(x)\] _for_ \(C\) _that satisfies_ \(x(P)=0\) _the corresponding Cartier-Manin matrix (formed by extracting suitable coefficients of_ \(f^{\frac{p-1}{2}}\)_) has the shape_ \[\begin{pmatrix}0&0&0\\ *&0&0\\ *&*&0\end{pmatrix}\]
Proof.: In i) we use that \(a=1\) implies that there is a unique component of \(\mathcal{S}_{3}\) passing through \([C]\) [10, p. 38], which furthermore is smooth at \([C]\) (Corollary 3.6). The implication "(i)\(\Leftrightarrow\)(ii)" is a reformulation of Theorem 4.3.
Assume now that ii) is satisfied. We will show that iii) holds true with the same point \(P\in C\). Indeed let
\[y^{2}=f(x)\]
be an arbitrary hyperelliptic equation for \(C\) satisfying \(x(P)=0\). Then the filtration
\[\mathrm{H}^{0}\left(C,\Omega_{C}(-2P-2\tau(P))\right)\subset\mathrm{H}^{0}\left(C,\Omega_{C}(-P-\tau(P))\right)\subset\mathrm{H}^{0}\left(C,\Omega_{C}\right)\]
is explicitly given by
\[\mathrm{span}\!\left(x^{2}\frac{dx}{y}\right)\subset\mathrm{span}\!\left(x \frac{dx}{y},x^{2}\frac{dx}{y}\right)\subset\mathrm{span}\!\left(\frac{dx}{y},x\frac{dx}{y},x^{2}\frac{dx}{y}\right)\,.\]
By assumption and Lemma 4.4 one then has
\[\mathrm{span}\!\left(x^{2}\frac{dx}{y}\right)=\ker(\mathcal{C}),\,\mathrm{ span}\!\left(x\frac{dx}{y},x^{2}\frac{dx}{y}\right)=\ker\!\left(\mathcal{C}^{2} \right)\,.\]
Together with the nilpotence of \(\mathcal{C}\) this implies that the matrix of \(\mathcal{C}\) with respect to the basis \(\frac{dx}{y},x\frac{dx}{y},x^{2}\frac{dx}{y}\), i.e., the Cartier-Manin matrix has the shape
\[\begin{pmatrix}0&0&0\\ *&0&0\\ *&*&0\end{pmatrix}\]
proving iii).
The implication iii)\(\Rightarrow\) ii) is shown by reversing the previous argument. This concludes the proof of the theorem.
The utility of part iii) of the Theorem is the fact that it is easy to check whether the criterion is satisfied for a given hyperelliptic equation \(y^{2}=f(x)\).
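To make this check concrete, here is a minimal Python sketch (the helper names `poly_pow`, `cartier_manin`, and `has_touchpoint_shape` are ours, not from the paper). It extracts the relevant coefficients of \(f^{(p-1)/2}\) using the classical recipe that the entry in row \(i\), column \(j\) is the coefficient of \(x^{pi-j}\); some references use the transposed convention or take \(p\)-th roots of the entries, which does not affect the vanishing test below. The check assumes a model \(y^{2}=f(x)\) of the genus-3 curve with the candidate touchpoint at \(x=0\), as in part iii).

```python
def poly_mul(a, b, p):
    """Multiply two coefficient lists (constant term first) modulo p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow(a, n, p):
    """Raise a coefficient list to the n-th power modulo p."""
    out = [1]
    while n:
        if n & 1:
            out = poly_mul(out, a, p)
        a = poly_mul(a, a, p)
        n >>= 1
    return out

def cartier_manin(f, p, g=3):
    """Matrix (c_{p*i - j}), 1 <= i, j <= g, of coefficients of f^((p-1)/2) mod p."""
    h = poly_pow(f, (p - 1) // 2, p)
    def coeff(k):
        return h[k] if 0 <= k < len(h) else 0
    return [[coeff(p * i - j) for j in range(1, g + 1)] for i in range(1, g + 1)]

def has_touchpoint_shape(f, p):
    """Vanishing pattern of Theorem 4.5 iii): zero on and above the diagonal."""
    m = cartier_manin(f, p)
    return all(m[i][j] == 0 for i in range(3) for j in range(i, 3))
```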
**Definition 4.6**.:
* Let \(P\in C(k)\) be arbitrary. We define the _filtration associated to_ \(P\) to be the filtration in Theorem 4.5 ii): \[\mathrm{H}^{0}\left(C,\Omega_{C}(-2P-2\tau(P))\right)\subset\mathrm{H}^{0}\left(C,\Omega_{C}(-P-\tau(P))\right)\subset\mathrm{H}^{0}\left(C,\Omega_{C}\right)\]
* When \(a(\operatorname{Jac}(C))=1\) and one (and hence both) of the conditions ii) and iii) of Theorem 4.5 is satisfied, we call \(P\) a _touchpoint for the Cartier operator_.
It is easy to see that a touchpoint for the Cartier operator (if it exists) is unique up to replacing \(P\) by \(\tau(P)\).
## 5. Examples
### CM-examples (\(a=1\))
In this section we will construct an example of a curve \(C\) over \(\mathbb{Q}\) with the property that the reduction at any prime \(p\) with \(p\equiv\pm 2\mod 7,p\equiv 3\mod 4\) satisfies the criterion of Theorem 4.5.
To set up our notation, consider a hyperelliptic genus \(3\) curve \(C\) over \(\mathbb{C}\). Suppose that \(\operatorname{Jac}(C)\) has CM by a field \(K\). Let us look at the differential forms \(V=\operatorname{H}^{0}(C,\Omega_{C})\). By CM-theory there is a decomposition
\[V=\bigoplus_{\iota\in\Phi^{c}}V_{\iota}\]
into one dimensional \(\mathbb{C}\)-vector spaces. Here \(\Phi\subset\operatorname{Hom}(K,\mathbb{C})\) is the CM-type and \(\Phi^{c}\) its complement. An element \(a\in K\) acts on \(V_{\iota}\) via multiplication by \(\iota(a)\).
Furthermore we denote by \(L\) the Galois closure of \(K\) over \(\mathbb{Q}\) and by \(G=\operatorname{Gal}(L/\mathbb{Q})\) the Galois group.
**Definition 5.1**.: Let \(C\) be as above and let \(P\in C(\mathbb{C})\) be a point. We say that \(P\)_is a touchpoint for the CM-action_ if the following condition holds true:
* Denote by \(V=V_{0}\supset V_{1}\supset V_{2}\) the filtration associated to \(P\). Then we demand that there is an ordering \(\iota_{0},\iota_{1},\iota_{2}\) of \(\Phi^{c}\) such that \[V_{\iota_{i}}\subseteq V_{i}\,.\]
* The choice of the ordering in i) gives an ordering \(\iota_{0},\iota_{1},\iota_{2},\overline{\iota_{0}},\overline{\iota_{1}}, \overline{\iota_{2}}\) of all the embeddings of \(K\) into \(\mathbb{C}\). This gives a map \[\operatorname{per}:G\hookrightarrow S_{6}\] induced from the action of \(G\) on the complex embeddings of \(K\). The second condition is that \[(123456)\in\operatorname{im}(\operatorname{per})\,.\]
**Remark 5.2**.: The motivation for the terminology is that we can draw the following picture (see Figure 2): the decomposition \(V=\bigoplus_{\iota\in\Phi^{c}}V_{\iota}\) gives \(3\) points \(P_{0},P_{1},P_{2}\) in \(\mathbb{P}^{2}\), and condition i) is equivalent to \(P_{0}=\varphi_{\operatorname{can}}(P)\) together with the line \(P_{0}P_{1}\) being tangent to \(\operatorname{im}(\varphi_{\operatorname{can}})\).
Notice that the existence of a touchpoint \(P\) is very restrictive; most CM-curves do not have any. In contrast to touchpoints for the Cartier operator, there could exist more than one touchpoint (modulo the hyperelliptic involution), because in Figure 2 it might happen that \(P_{2}\) lies on \(\mathrm{im}(\varphi_{\mathrm{can}})\) with tangent line \(P_{1}P_{2}\).
A touchpoint can also be a Weierstrass point. We will now discuss an example where that happens.
**Example 5.3**.: Consider the hyperelliptic curve \(C_{14}\) over \(\mathbb{C}\) given by the affine equation
\[y^{2}=x\left(x^{14}-1\right)\,.\]
It is known [5] that the automorphism group of \(C_{14}\) is isomorphic to the group \(U_{14}\), where
\[U_{n}=\left\langle a,b\mid a^{2}=b^{2n}=abab^{n+1}=1\right\rangle.\]
The action is given by
\[a:\begin{array}{l}x\mapsto\frac{-1}{x}\\ y\mapsto\frac{y}{x^{8}}\end{array},\,b:\begin{array}{l}x\mapsto-\zeta_{7}x \\ y\mapsto i\zeta_{7}^{4}y\end{array}\]
where \(\zeta_{7}=e^{\frac{2\pi i}{7}}\in\mathbb{C}\).
We now define \(C\) to be the quotient \(C_{14}/a\). Denote by
\[\pi:C_{14}\longrightarrow C\]
the quotient map. We define a point \(P\) on \(C\) given by the image of
\((0,0)\in C_{14}\) under \(\pi\). The situation is summarized by the following diagram.
\[\begin{array}{ccc}(0,0)&\in&C_{14}\\ \big\downarrow&&\big\downarrow\pi\\ P&\in&C\end{array}\]
To facilitate the reading of the proof of the following proposition, we remark that we do not choose coordinates on \(C\). Thus the coordinates \(x,y\) always refer to \(C_{14}\).
**Proposition 5.4**.: _In the notation from above._
* _The curve_ \(C\) _is hyperelliptic of genus_ \(3\)_._
* \(\mathrm{Jac}(C)\) _has CM by_ \(K=\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1},\,i)\)_._
* _The point_ \(P\) _is a touchpoint of the CM action._
Figure 2. CM touchpoint
Proof.: For i) notice that the automorphism \(a\) of \(C_{14}\) has \(4\) fixed points, namely the points with \(x\)-coordinate \(\pm i\). Then the Riemann-Hurwitz formula shows that \(g(C)=3\). Moreover, \(C\) is hyperelliptic because every curve with a finite map from a hyperelliptic curve is either (hyper)-elliptic or has genus \(0\) (look at the push-forward of the divisor defining the hyperelliptic pencil).
We will now show ii) and iii). Indeed, consider the action of the group \(U_{14}\) on \(C_{14}\). It induces a map
\[\varphi:\mathbb{Q}[U_{14}]\longrightarrow\operatorname{End}_{0}( \operatorname{Jac}(C_{14}))=\operatorname{End}(\operatorname{Jac}(C_{14})) \otimes\mathbb{Q}\,.\]
Consider the two elements
\[\operatorname{i}=\varphi\left(b^{7}\right),\,\alpha=\varphi\bigg{(}\frac{b^{8} +b^{-8}+ab^{8}+ab^{-8}}{2}\bigg{)}\in\operatorname{End}_{0}(\operatorname{Jac }(C_{14}))\,.\]
We claim that \(\operatorname{i},\alpha\) give endomorphisms in \(\operatorname{End}_{0}(\operatorname{Jac}(C))\) by restricting them to the image of the map
\[\pi^{*}:\operatorname{Jac}(C)\hookrightarrow\operatorname{Jac}(C_{14})\,.\]
Indeed, since \(\operatorname{im}(\pi^{*})=\ker(1-a)^{o}\) it suffices to point out that
\[ab^{7} =b^{7}a\] \[a\frac{b^{8}+b^{-8}+ab^{8}+ab^{-8}}{2} =\frac{b^{8}+b^{-8}+ab^{8}+ab^{-8}}{2}a\,.\]
This proves the claim.
It is now elementary to check that the map
\[K=\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1},\,i)\longrightarrow\operatorname{End}_{ 0}(\operatorname{Jac}(C)),\,\zeta_{7}+\zeta_{7}^{-1}\mapsto\alpha,\,i\mapsto \operatorname{i}\]
is a well-defined ring homomorphism. Therefore ii) follows.
To show iii) we need to compute the action of \(K\) on the differential forms on \(C\). To this end, consider the action of the group \(U_{14}\) on \(\operatorname{H}^{0}(C_{14},\Omega_{C_{14}})\). Indeed, the vector space \(\operatorname{H}^{0}(C_{14},\Omega_{C_{14}})\) has the basis \(x^{j}\frac{dx}{y},j=0,\dots,6\). With respect to this basis the automorphisms \(a\) and \(b\) act via the matrices
\[\begin{pmatrix}&&&&&&1\\ &&&&&-1&\\ &&&&1&&\\ &&&-1&&&\\ &&1&&&&\\ &-1&&&&&\\ 1&&&&&&\end{pmatrix}\quad\text{resp.}\quad\begin{pmatrix}i\zeta_{7}^{4}&&&\\ &(i\zeta_{7}^{4})^{3}&&\\ &&\ddots&\\ &&&(i\zeta_{7}^{4})^{13}\end{pmatrix}\,.\]
This implies that the image of
\[\pi^{*}:\operatorname{H}^{0}(C,\Omega_{C})\longrightarrow\operatorname{H}^{0} (C_{14},\Omega_{C_{14}})\]
has the basis
\[\omega_{0}=\left(1+x^{6}\right)\frac{dx}{y},\,\omega_{1}=\left(x-x^{5}\right) \frac{dx}{y},\omega_{2}=\left(x^{2}+x^{4}\right)\frac{dx}{y}\]
(as it is given by the differential forms fixed by \(a\)). Moreover, one can compute
\[\mathfrak{i}^{*}\left(\omega_{0}\right)=-i\omega_{0},\,\alpha^{*}\left(\omega_{0} \right)=(\zeta_{7}^{3}+\zeta_{7}^{-3})\omega_{0}\]
\[\mathfrak{i}^{*}\left(\omega_{1}\right)=i\omega_{1},\,\alpha^{*}\left(\omega_{1 }\right)=(\zeta_{7}^{2}+\zeta_{7}^{-2})\omega_{1}\]
\[\mathfrak{i}^{*}\left(\omega_{2}\right)=-i\omega_{2},\,\alpha^{*}\left(\omega_{ 2}\right)=(\zeta_{7}+\zeta_{7}^{-1})\omega_{2}\,.\]
This gives us the desired description of the action of \(K\) on the differential forms on \(C\). It is now an elementary verification to show that the point \(P=\pi\left((0,0)\right)\) is a touchpoint for the CM-action.
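As a small independent sanity check of the basis \(\omega_{0},\omega_{1},\omega_{2}\) above, one can verify symbolically that exactly these combinations are fixed by \(a\). The script below is our own illustration and not part of the paper; it uses the fact that the pullback of \(g(x)\,\frac{dx}{y}\) along \(a:(x,y)\mapsto(-1/x,\,y/x^{8})\) is \(x^{6}\,g(-1/x)\,\frac{dx}{y}\), since \(d(-1/x)=dx/x^{2}\) and the factor \(x^{8}\) from \(y\mapsto y/x^{8}\) moves to the numerator.

```python
import sympy as sp

x = sp.symbols('x')

def a_pullback(g):
    """Pullback of g(x)*dx/y along a: (x, y) -> (-1/x, y/x^8)."""
    return sp.simplify(x**6 * g.subs(x, -1/x))

basis = [1 + x**6, x - x**5, x**2 + x**4]   # omega_0, omega_1, omega_2 (times dx/y)
print([sp.simplify(a_pullback(g) - g) == 0 for g in basis])   # [True, True, True]
print(sp.simplify(a_pullback(x**3) + x**3) == 0)              # x^3*dx/y picks up a sign
```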
**Remark 5.5**.: The construction for the curve \(C\) can be seen as a special case of Tautz-Top-Verberkmoes [16] in the case where (in their notation) \(p=7\), \(t=0\).
This explains the rationale behind our choice for \(C\): We use the Tautz-Top-Verberkmoes construction to get RM by \(\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1})\). By implementing the additional order \(4\) automorphism \(b\) into the covering curve \(C_{14}\) we can achieve CM by \(\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1},i)\).
The idea to use CM fields containing \(i\) to force the curve to be hyperelliptic goes back to Weng [17]. In fact, our curve \(C\) happens to be isomorphic to the first curve that appears in her list (p. 18 loc. cit.).
For the next theorem we use the following notation: Let \(C\) be a hyperelliptic genus \(3\) curve over \(\mathbb{C}\) such that \(\operatorname{Jac}(C)\) has CM by a field \(K\). We denote by \(L\) the Galois closure of \(K/\mathbb{Q}\) and by \(G=\operatorname{Gal}(L/\mathbb{Q})\) the Galois group. Assume, furthermore, that there exists a point \(P\in C\) which is a touchpoint for the CM-action. Let us denote by
\[\operatorname{per}:G\hookrightarrow S_{6}\]
the group homomorphism introduced in Definition 5.1 ii).
By CM-theory \(C\) is defined over a number field, say \(M\). We assume that \(M\) is large enough, such that \(M\) contains \(L\), all endomorphisms of \(\operatorname{Jac}(C)\) are defined over \(M\), and such that \(P\) is defined over \(M\) (it is easy to see that \(P\) must be defined over \(\overline{\mathbb{Q}}\)).
**Theorem 5.6**.: _Let \(\mathfrak{P}\) be a prime ideal of \(M\) such that \(C\) has good reduction at \(\mathfrak{P}\). Assume that \(p\) does not divide \(\operatorname{disc}(R)\) where \(R=K\cap\operatorname{End}(\operatorname{Jac}(C))\) is the CM-order and \(p\) is the residue characteristic of \(\mathfrak{P}\)._
_Denote by \(\overline{C}\) the reduction of \(C\) and by \(\overline{P}\) the reduction of the point \(P\) and by \(\mathfrak{p}\) the intersection \(\mathfrak{P}\cap L\)._
_Assume that \(\operatorname{per}(\operatorname{Frob}_{\mathfrak{p}})=(123456)^{-1}\)._
_Then the curve \(\overline{C}\) is supersingular and the pair (\(\overline{C},\overline{P}\)) satisfies the criterion of Theorem 4.5, i.e., \(\overline{P}\) is a touchpoint for the Cartier operator on \(\overline{C}\)._
Proof.: In order to prove the theorem we need to describe the action of the Cartier operator on \(\operatorname{H}^{0}(\overline{C},\Omega_{\overline{C}})\) in terms of CM-theory. By [12, Corollary 5.11] it suffices for that purpose to understand the Verschiebung operator
on the Dieudonne module \(M=D(\operatorname{Jac}(\overline{C}))\). Now the description of the Dieudonne module of the reduction of a CM abelian variety is known (see e.g. [7] cited after [11] or look at [1] for a more general result in \(p\)-adic Hodge theory). Nevertheless, we will reproduce a proof for the special case at hand because we need the inner mechanics of the identification for showing the other claims of the theorem.
Indeed, choose \(k\), an algebraically closed field containing the residue field of \(\mathfrak{P}\). By slight abuse of notation we will denote the base change of \(\overline{C}\) to \(k\) by the same letter.
We will now show that ii) implies i). Indeed, assuming ii) we claim that the Dieudonne module \(M=D(\operatorname{Jac}(\overline{C}))\) has a \(W(k)\) basis \(m_{1},\dots,m_{6}\) such that
\[Vm_{1}=m_{2},Vm_{2}=m_{3},Vm_{3}=pm_{4},Vm_{4}=pm_{5},Vm_{5}=pm_{6},Vm_{6}=m_{ 1}\,.\]
Indeed \(M\) naturally carries the structure of an \(R\otimes_{\mathbb{Z}}W(k)\)-module. By assumption \(R\otimes_{\mathbb{Z}}\mathbb{Z}_{p}\) is an etale \(\mathbb{Z}_{p}\)-algebra and therefore
\[R\otimes_{\mathbb{Z}}W(k)\cong\bigoplus_{i=1}^{6}W(k)\]
where the copies of \(W(k)\) are indexed by \(\operatorname{Hom}(R,W(k))\). Now under our assumptions we claim that there is a bijection \(\operatorname{Hom}(K,\mathbb{C})\xrightarrow{\sim}\operatorname{Hom}(R,W(k))\). (Here we see why it is awkward to define CM-types as embeddings into \(\mathbb{C}\), but anyway.) Indeed, we assumed that \(C\) is defined over the number field \(M\subset\mathbb{C}\). Furthermore \(M\) was assumed to be large enough to contain \(L\), a Galois closure of \(K\). This gives an embedding \(L\hookrightarrow\mathbb{C}\). Now every embedding \(\iota:K\hookrightarrow\mathbb{C}\) must factor through \(L\). We define the map \(\operatorname{Hom}(K,\mathbb{C})\longrightarrow\operatorname{Hom}(R,W(k))\) to be the composition
of the restriction and reduction maps \(\operatorname{Hom}(K,\mathbb{C})\cong\operatorname{Hom}(K,L)\rightarrow\operatorname{Hom}(R,\mathcal{O}_{L})\rightarrow\operatorname{Hom}(R,\mathcal{O}_{L}/\mathfrak{p})\), followed by the composition \(\operatorname{Hom}(R,\mathcal{O}_{L}/\mathfrak{p})\rightarrow\operatorname{Hom}(R,k)\rightarrow\operatorname{Hom}(R,W(k))\) where the first map is induced from the natural inclusion
\[\mathcal{O}_{L}/\mathfrak{p}\hookrightarrow\mathcal{O}_{M}/\mathfrak{P}\hookrightarrow k\]
and the second map exists because of the identification \(\operatorname{Hom}(R,W(k))\xrightarrow{\sim}\operatorname{Hom}(R\otimes_{ \mathbb{Z}}\mathbb{Z}_{p},W(k))\) (a consequence of \(p\)-adic completeness, the assumption that \(R\otimes_{\mathbb{Z}}\mathbb{Z}_{p}\) is an etale \(\mathbb{Z}_{p}\)-algebra, and general properties of the Witt ring). This gives us the map \(\operatorname{Hom}(K,\mathbb{C})\longrightarrow\operatorname{Hom}(R,W(k))\). It is not too hard to verify that this map is a bijection, e.g. by writing down a chain of maps inverting all the arrows.
We are now allowed to index the direct sum decomposition
\[R\otimes_{\mathbb{Z}}W(k)\cong\bigoplus_{i=1}^{6}W(k)\]
with the ordering corresponding to the ordering \(\iota_{0},\iota_{1},\iota_{2},\overline{\iota}_{0},\overline{\iota}_{1}, \overline{\iota}_{2}\) of \(\operatorname{Hom}(K,\mathbb{C})\).
This gives a direct sum decomposition
\[M=\bigoplus_{i=1}^{6}M_{i}\]
as \(W(k)\)-modules such that \(R\) acts on \(M_{i}\) via multiplication by a scalar. More precisely, if \(\psi_{1},\dots,\psi_{6}\) are the maps \(R\to W(k)\) corresponding to \(\iota_{0},\iota_{1},\iota_{2},\overline{\iota}_{0},\overline{\iota}_{1}, \overline{\iota}_{2}\), then \(a\in R\) acts on \(M_{i}\) via multiplication by \(\psi_{i}(a)\).
Choose at first an arbitrary generator \(m_{1}\in M_{1}\). (We will later adjust \(m_{1}\).) Then for any \(a\in R\) we have
\[a\cdot Vm_{1}=V(a\cdot m_{1})=V(\psi_{1}(a)m_{1})=(\sigma^{-1}\circ\psi_{1})(a)Vm_{1}\]
where \(\sigma:W(k)\to W(k)\) is the lift of Frobenius. Now the assumption ii) implies that \(\sigma^{-1}\circ\psi_{1}=\psi_{2}\) and thus \(Vm_{1}\in M_{2}\); we set \(m_{2}=Vm_{1}\). We claim that \(m_{2}\) generates \(M_{2}\). Indeed, we look at the isomorphism
\[M/VM\cong\operatorname{H}^{1}(\overline{C},\mathcal{O}_{\overline{C}})\,.\]
The image must be generated by the image of \(M_{4},M_{5},M_{6}\), because the maps \(\psi_{4},\psi_{5},\psi_{6}\) correspond to the embeddings contained in the CM-type \(\Phi\), i.e. \(\overline{\iota}_{0},\overline{\iota}_{1},\overline{\iota}_{2}\). Therefore \(M_{2}\) is contained in \(VM\) and this proves the claim.
Similarly we can define \(m_{3}=Vm_{2}\) and \(m_{3}\) must generate \(M_{3}\). Now consider \(Vm_{3}\). We have \(Vm_{3}\in M_{4}\). But \(Vm_{3}\) does not generate \(M_{4}\) because \(M_{4}/pM_{4}\) injects into \(\operatorname{H}^{1}(\overline{C},\mathcal{O}_{\overline{C}})\) by the argument above. Therefore, \(Vm_{3}\in pM_{4}\) and we define \(m_{4}=\frac{1}{p}Vm_{3}\). Since this implies that \(Fm_{4}=m_{3}\), we see that \(m_{4}\) must be a generator of \(M_{4}\).
Along the same lines of reasoning we obtain generators \(m_{5}\), (resp. \(m_{6}\)) of \(M_{5}\) (resp. \(M_{6}\)) satisfying the relations \(Vm_{4}=pm_{5},Vm_{5}=pm_{6}\).
Then, \(Vm_{6}\) will also be a generator of \(M_{1}\), say \(Vm_{6}=um_{1}\) for some unit \(u\in W(k)^{\times}\). Now, since \(k\) is algebraically closed, one can show that there exists a unit \(\tilde{u}\in W(k)^{\times}\) such that \(\sigma^{-6}(\tilde{u})u=\tilde{u}\). After replacing \(m_{1}\) by \(\tilde{u}m_{1}\) and redefining \(m_{2},\dots,m_{6}\) by the relations above, we see that \(Vm_{6}=m_{1}\) holds true. This gives the desired description of the Dieudonne module \(M\).
Then it follows from that description of \(M\) that \(\overline{C}\) is supersingular2 and \(a(\operatorname{Jac}(\overline{C}))=1\).
Footnote 2: Alternatively one can use the Shimura-Taniyama formula to compute the Newton polygon, but in fact Deligne [7] proves that formula by giving the same description of the Dieudonne module
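As a quick numerical sanity check of the supersingularity claim (our own illustration, not part of the paper): if one encodes the \(V\)-action on the basis \(m_{1},\dots,m_{6}\) as a matrix and ignores the \(\sigma^{-1}\)-semilinearity, which does not change \(p\)-adic valuations, then \(V^{6}=p^{3}\cdot\mathrm{Id}\), so all Newton slopes equal \(1/2\).

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# Columns are the images of m_1, ..., m_6 under V, as listed above.
V = sp.zeros(6, 6)
V[1, 0] = 1   # V m_1 = m_2
V[2, 1] = 1   # V m_2 = m_3
V[3, 2] = p   # V m_3 = p m_4
V[4, 3] = p   # V m_4 = p m_5
V[5, 4] = p   # V m_5 = p m_6
V[0, 5] = 1   # V m_6 = m_1

print(V**6 - p**3 * sp.eye(6))   # the zero matrix
```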
We claim that \(\overline{P}\) is a touchpoint for the Cartier operator. Indeed, by [12, Corollary 5.11] there is an isomorphism
\[\frac{VM}{pM}\cong\operatorname{H}^{0}(\overline{C},\Omega^{1}_{\overline{C}})\]
that identifies \(V\) on the left with the Cartier operator on the right. Therefore, the filtration
\[\ker(\mathcal{C})\subset\ker(\mathcal{C}^{2})\subset\operatorname{H}^{0}( \overline{C},\Omega^{1}_{\overline{C}})\]
is given by
\[\operatorname{span}(m_{3})\subset\operatorname{span}(m_{2},m_{3})\subset \operatorname{span}(m_{1},m_{2},m_{3})\,.\]
On the other hand, let us look at the filtration \(V_{2}\subset V_{1}\subset V_{0}\) associated to \(\overline{P}\), i.e.
\[\operatorname{H}^{0}\left(\overline{C},\Omega_{\overline{C}}(-2\overline{P}-2\tau(\overline{P}))\right)\subset\operatorname{H}^{0}\left(\overline{C},\Omega_{\overline{C}}(-\overline{P}-\tau(\overline{P}))\right)\subset\operatorname{H}^{0}\left(\overline{C},\Omega_{\overline{C}}\right)\,.\]
Definition 5.1 implies that \(V_{2}\) contains the eigenspace where \(R\) acts via the map corresponding to the embedding \(\iota_{2}\) (a priori we have such an inclusion on the generic fiber, but it passes to the specialization). Therefore, we see that the class of \(m_{3}\) must be contained in \(V_{2}\) because \(m_{3}\) generates that eigenspace. Similarly we get \(m_{2}\in V_{1}\). This shows that \(\overline{P}\) is a touchpoint for the Cartier operator on \(\overline{C}\).
**Corollary 5.7**.: _For every prime number \(p\) such that_
\[p\equiv\pm 2\mod 7,\,p\equiv 3\mod 4\]
_the hyperelliptic locus and the supersingular locus intersect non-transversally in \(\mathfrak{A}_{3}\times\mathbb{F}_{p}\)._
Proof.: The curve \(C\) from Example 5.3 is defined over \(\mathbb{Q}\) and has good reduction at every prime different from \(2\), \(7\). Furthermore \(\operatorname{Jac}(C)\) has CM by \(K=\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1},i)\). Let \(R=\operatorname{End}(\operatorname{Jac}(C))\cap K\) be the CM-order. It follows from the proof of Proposition 5.4 that the index of \(R\) in \(\mathcal{O}_{K}\) must be a power of \(2\). In fact, with a bit more work one can prove that \(R=\mathcal{O}_{K}\), but we shall not need this.
Now Proposition 5.4 implies that \(C\) satisfies the assumptions of Theorem 5.6. Let \(p\) be a prime that satisfies the congruences \(p\equiv\pm 2\mod 7\), \(p\equiv 3\mod 4\). Then \(p\) is unramified in \(L\), which equals \(K\) in our case. Furthermore \(\operatorname{per}(\operatorname{Frob}_{\mathfrak{p}})=(123456)^{-1}\) for any prime \(\mathfrak{p}\) of \(L\) lying over \(p\). Thus with Theorem 5.6 and Theorem 4.5 we conclude that \(\mathfrak{H}_{3}\) and \(\mathcal{S}_{3}\) intersect non-transversely in the moduli point \([\overline{C}]\).
It is worthwhile to remark that the curve \(C\) from Example 5.3 also has good supersingular reduction at the primes satisfying
\[p\equiv\pm 3\mod 7,\,p\equiv 3\mod 4\,.\]
However, in this case the criterion of Theorem 4.5 is not satisfied because one has \(\operatorname{per}(\operatorname{Frob}_{\mathfrak{p}})=(123456)\) instead of \((123456)^{-1}\).
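As a purely arithmetic illustration (the snippet is ours), the first primes allowed by the two congruence conditions of Corollary 5.7, and by the second family of congruences in the remark above, are easy to list; both lists automatically avoid the bad primes 2 and 7.

```python
from sympy import primerange

corollary_5_7 = [p for p in primerange(3, 200)
                 if p % 7 in (2, 5) and p % 4 == 3]   # p = +-2 mod 7, p = 3 mod 4
second_family = [p for p in primerange(3, 200)
                 if p % 7 in (3, 4) and p % 4 == 3]   # p = +-3 mod 7, p = 3 mod 4
print(corollary_5_7)   # [19, 23, 47, 79, 103, ...]
print(second_family)   # [3, 11, 31, 59, 67, ...]
```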
### One example with \(a=3\)
The curve \(C:y^{2}=x^{8}-1\) over \(\mathbb{F}_{7}\) is supersingular and satisfies \(a(\operatorname{Jac}(C))=3\) (this can be checked by computing the Hasse-Witt matrix, for example). As a consequence of Lemma 3.5 the formal completion of \(\mathcal{S}_{3}\) at \([C]\) has \(p^{5}+p^{2}+1=16857\) irreducible components. One can show that \(p^{2}=49\) of these have the property that they intersect \(\mathfrak{H}_{3}\) non-transversely.
To prove this, it does not suffice to compute the Cartier-Manin matrix as one would do in the \(a=1\) case. One rather must know the matrix of
Frobenius on crystalline cohomology mod \(p^{2}\), i.e. \(H^{1}_{\mathrm{crys}}(C/W_{2})\). But, if \(C\) is defined over \(\mathbb{F}_{p^{2}}\), as in our example, it suffices to know the matrix of Frobenius acting on \(H^{1}_{\mathrm{dR}}\).
In any case, these matrices can be computed with Kedlaya's algorithm.
The details of the verification of the claims in this section are not hard, but tedious, and thus will be omitted.
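As a quick check of the first claim (again an illustration of ours, reusing the `cartier_manin` sketch from Section 4.2): for \(y^{2}=x^{8}-1\) over \(\mathbb{F}_{7}\) one has \(f^{3}=x^{24}-3x^{16}+3x^{8}-1\), whose coefficients in the degrees \(7i-j\) with \(1\leqslant i,j\leqslant 3\) all vanish, so the Cartier-Manin matrix is zero and hence \(a(\operatorname{Jac}(C))=3\). As explained above, this alone does not decide which formal components meet \(\mathfrak{H}_{3}\) non-transversally.

```python
f = [(-1) % 7] + [0] * 7 + [1]    # coefficients of x^8 - 1 over F_7, constant term first
print(cartier_manin(f, 7))        # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```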
2301.10725 | Direct observation of magnon BEC in an out-of-plane magnetized yttrium iron garnet film | G. A. Knyazev, A. N. Kuzmichev, P. E. Petrov, I. V. Savochkin, P. M. Vetoshko, V. I. Belotelov, Yu. M. Bunkov | 2023-01-24T16:20:49Z | http://arxiv.org/abs/2301.10725v1

# Direct observation of magnon BEC in an out-of-plane magnetized yttrium iron garnet film
###### Abstract
Bose-Einstein condensation occurs at an appropriate density of bosonic particles, depending on their mass and temperature. We were able to experimentally observe the transition from the spin wave regime to the magnon Bose-Einstein condensed state (mBEC) with increasing magnon density under microwave pumping. We used optical methods to record the spatial distribution of the magnon density and phase. For the first time, a coherent state of stationary magnons was demonstrated far from the region of their excitation.
Magnetism is, in principle, a quantum phenomenon, which is usually described in the semiclassical approximation. However, there are a number of phenomena to which the semiclassical treatment is not applicable. First among them is Bose-Einstein condensation of magnons, the elementary excitations of the magnet's ground state. It follows from quantum statistics that magnons should form a coherent quantum state (a Bose-Einstein condensate, mBEC) at a concentration above the critical value \(N_{c}\). The equilibrium magnon density is determined by the temperature and under stationary conditions is always below \(N_{c}\). However, the density of magnons can be significantly increased by exciting them with radio-frequency (RF) photons. This process corresponds to magnetic resonance, that is, to the deflection and precession of the magnetization in the quasiclassical model of magnetism. In this paper, we study the properties of magnons with \(\vec{k}=0\) in an out-of-plane magnetized YIG film at room temperature. Under these conditions, the homogeneous precession corresponds to the energy minimum and does not decay into spin waves [1]. The critical magnon concentration \(N_{c}\) for this case was calculated in [2] and corresponds to a deviation of the precessing magnetization by about \(3^{\circ}\).
In this article, we restrict ourselves to the mBEC of stationary magnons excited by resonant microwave pumping; its properties are similar to those of an atomic Bose condensate. The coherent state of traveling magnons observed in in-plane magnetized YIG films has completely different properties and is not considered here.
Bose condensation of stationary magnons was previously discovered in antiferromagnetic superfluid \({}^{3}\)He-B [3]. It leads to the formation of a long-lived induction signal, which decays orders of magnitude more slowly than expected from the inhomogeneity of the magnetic field. Spontaneous recovery of coherence after the decay of homogeneous precession [4], as well as a thousandfold narrowing of the resonance line [5], clearly indicated the formation of the mBEC state. The phenomenon of magnon supercurrent [6] is also a consequence of Bose condensation.
Although antiferromagnetic \({}^{3}\)He is a superfluid liquid, its superfluidity plays no role in the formation of the mBEC and the magnon supercurrent: superfluid properties do not enter any of the mBEC parameters. Thus, an mBEC similar to that obtained in \({}^{3}\)He can also be observed in solid magnets. In particular, the properties of magnons in \({}^{3}\)He have many analogies with those of magnons in a YIG film [7]. Magnons in an out-of-plane magnetized YIG film are characterized by repulsion, as in \({}^{3}\)He-B, which leads to an upward frequency shift when the magnetization deviates. Therefore, the mBEC can be similar to that in \({}^{3}\)He-B.
The main advantage of antiferromagnetic superfluid \({}^{3}\)He is the extremely long lifetime of magnons. The Gilbert damping constant is of the order of \(10^{-8}\), which makes it possible to observe magnon Bose condensation after the RF excitation is turned off. For YIG the Gilbert constant is about 3 orders of magnitude larger, which makes it problematic to observe the formation of the mBEC after the RF excitation is turned off.
In this article, we used another method for studying magnons, which also unambiguously confirms the formation of the mBEC in a YIG film. Using an optical setup, we observed the spatial distribution of the magnon state outside the excitation region. These experiments showed the formation of the mBEC at magnon concentrations above the critical value in the region where there is no RF excitation.
## II Experimental setup
The experiments were carried out on an elliptical YIG film 6 \(\mu\)m thick and 4.5 \(\times\) 1.5 mm in size. The geometry of the experiment is schematically shown in Fig. 1. The sample was grown by epitaxy on a gallium gadolinium
garnet substrate. The elliptical shape of the sample was chosen to reduce the possibility of secondary resonance mode formation. Magnetic resonance was excited by a narrow strip line 0.2 mm wide, oriented perpendicular to the main axis of the sample. It was located at a distance of 1 mm from one of the sides of the sample.
The states of magnons along the main axis of the sample were studied under local excitation of magnons by radio-frequency pumping. A beam of linearly polarized light was sent to the sample through a prism, and the position of the laser beam was scanned along the sample. Upon reflection, the optical polarization of the beam changes due to the Faraday rotation associated with the component of the magnetization M along the light beam. Therefore, the Faraday angle is sensitive to the magnetization dynamics, in particular to the deflection angle \(\Theta\) and the phase \(\phi\) of the precession. The reflected beam was directed to a balanced photodetector through a Wollaston prism, so the signals from the detectors carried information about the amplitude and phase of the precessing magnetization. To shift the detection to lower frequencies, we modulated the light at a frequency offset from the resonance frequency by about 12 kHz. This made it possible to record the parameters of the reflected signal using low-frequency detectors. After demodulating the signals, we obtained the amplitude and phase of the magnon precession at a given point of the sample and for a given magnetic field. By scanning the magnetic field and the illumination point on the sample, we obtained the amplitude and phase distribution of the magnons as functions of position and field. A detailed description of our optical setup was presented in [8], where the state of magnons in the excitation region was investigated.
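The amplitude and phase are extracted from the low-frequency beat signal essentially by quadrature (lock-in style) demodulation. The sketch below is a generic illustration of this step, not the authors' acquisition code; the function name, the sampling rate and the test signal are hypothetical, and only the roughly 12 kHz intermediate frequency is taken from the text.

```python
import numpy as np

def iq_demodulate(signal, fs, f_if=12e3):
    """Amplitude and phase of the beat note at the intermediate frequency f_if."""
    t = np.arange(signal.size) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_if * t))
    q = -2.0 * np.mean(signal * np.sin(2 * np.pi * f_if * t))   # sign fixes the phase convention
    return np.hypot(i, q), np.arctan2(q, i)

# Synthetic test: a 12 kHz beat note with known amplitude 0.7 and phase 0.3 rad.
fs = 1.0e6
t = np.arange(int(0.01 * fs)) / fs
test = 0.7 * np.cos(2 * np.pi * 12e3 * t + 0.3)
print(iq_demodulate(test, fs))   # approximately (0.7, 0.3)
```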
## III Experimental results
In this article, we demonstrate the experimental observation of the magnon state outside the excitation region. Experimental records of the spatial distribution of the amplitude and phase of the magnetization precession as a function of the magnetic field at different excitation powers are shown in Fig. 2.
Figure 2 (a,b) shows the signal amplitude in units of the magnetization deviation \(\Theta\) at a low excitation level of 0.05 mW (a) and at a high excitation level of 6 mW (b), as a function of the magnetic field shift and the position of the probing laser beam. The deviation angle \(\Theta\) was calibrated from the field shift from resonance. The excitation strip line was located between the positions of 3.5 and 3.7 mm. At low excitation, the resonance is clearly visible, with a maximum in the region of the strip line at a resonance field of about 2621 Oe and slightly below.
This shift from resonance takes place due to the foldover resonance effect, in which the resonance field shifts because the demagnetization field decreases when the magnetization is deflected; see [8; 9] for details. The spatial distribution of magnons changes dramatically at a higher level of excitation. Magnons fill the entire sample in the range of fields from 2621 to 2598 Oe. At lower fields, magnons do not fill the part of the sample below the strip line, but continue to fill the upper part of the sample down to 2562 Oe. This field shift is determined by magnon relaxation, which is proportional to the square of the deflection angle and to the distance from the excitation region to the edge of the sample. The filling with magnons disappears as soon as their flow from the excitation region ceases to compensate for the losses.
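As a rough illustration of how the deflection angle can be calibrated from the field shift, the sketch below assumes the simplest foldover model for an out-of-plane magnetized film, in which the resonance field at fixed frequency shifts by approximately \(4\pi M_{s}(1-\cos\Theta)\); the saturation magnetization value and this simplified model are assumptions used only for illustration (see [8; 9] for the actual analysis).

```python
import numpy as np

# Assumed foldover model: resonance-field shift dH = 4*pi*Ms*(1 - cos(Theta)).
four_pi_ms = 1750.0  # G, a typical room-temperature value for YIG (assumed)

def theta_from_shift(dh_oe):
    """Deflection angle (deg) implied by a downward field shift dh_oe (Oe)."""
    cos_theta = 1.0 - dh_oe / four_pi_ms
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

for dh in (2.0, 14.0, 59.0):   # illustrative shifts within the 2621-2562 Oe range
    print(f"dH = {dh:5.1f} Oe  ->  Theta ~ {theta_from_shift(dh):.1f} deg")
```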
Of great interest is the spatial distribution of the precession phase in these experiments, shown in Fig. 2 (c,d). We see that at low excitation power magnons propagate from the excitation region in the form of spin waves. The length of these waves changes with the shift of the magnetic field. Naturally, the magnetization precession frequency should coincide with the RF pump frequency. In the absence of an additional radio-frequency field, the frequency shift is provided by the gradient energy of the spin waves. Therefore, as the magnetic field decreases, the length of the spin waves also decreases. This effect is clearly seen in a computer micromagnetic simulation of the experimental conditions within the framework of the semiclassical Landau-Lifshitz theory.
This state changes completely with increasing RF pump energy, as shown in Fig. 2 (d). We see that a spatially homogeneous state with a coherent phase distribution arises outside the pump region. In this case, it should be emphasized that coherence is a property of magnons, since it is not supported by an external RF field. This state directly demonstrates magnon Bose condensation
Figure 1: The scheme of the experiment geometry. A beam of linearly polarized light illuminates the sample through a prism. The reflected beam is directed on the detectors. The polarization of the optical beam changes due to the Faraday rotation on the magnetization component M along the light beam. The received signal contains information about the angle of magnetization deflection \(\Theta\) and phase of precession \(\phi\).
at a high concentration of magnons. Indeed, Fig. 2(d) clearly shows that magnons fill the entire sample at a magnon concentration corresponding to the magnetization deviation angle of more than 3\({}^{\circ}\), as predicted in the work [2].
Let us consider the spatial distribution of the magnon amplitude and phase at a fixed field in more detail. In Fig. 3 (a) the spatial distribution of the signal amplitude is shown for external magnetic fields of 2607 Oe and 2615 Oe, marked by dashed lines in Fig. 2 (a), at an RF pump power of 0.05 mW. The critical magnon concentration is reached only in the RF pump region, while outside this region the magnon concentration is much lower and the magnon gas can be described in the semiclassical approximation. A very different distribution is shown in Fig. 3 (b) at an excitation power of 6 mW. In this case, the magnons fill the entire sample at approximately the same concentration, excluding the edge regions.
The spatial distribution of the magnon precession phase is shown in Fig. 3 (c,d). At a small excitation of 0.05 mW, a phase rotation is observed that depends on the distance from the excitation region and reaches 7\(\pi\) per mm in a field of 2615 Oe and 17\(\pi\) per mm in a field of 2607 Oe. We calculated the spatial distribution of the precession phase for the experimental conditions by micromagnetic modeling using the MuMax3 [10] program and obtained excellent agreement with the experimental results.
The phase distribution changes drastically at 6 mW excitation, as shown in Fig. 3 (d). In this case, a sharp turn of the precession phase near the excitation region corresponds to the flow of magnons out of the excitation region. For small magnon relaxation, this turn corresponds to about 180\({}^{\circ}\); it decreases with increasing relaxation. This experimental result requires further theoretical study. It is important to note that micromagnetic modeling with the MuMax3 program leads to the formation of waves at any excitation amplitude. We can conclude that the formation of a spatially homogeneous coherent precession at high excitation is a consequence of magnon Bose condensation, which naturally lies beyond the scope of the semiclassical theory. The MuMax3 program is based on the classical Landau-Lifshitz theory of magnetization precession and, of course, cannot simulate Bose condensation, which has a quantum nature.
Figure 2: (a)–(b) The spatial distribution of magnon density in units of magnetization deflection angle as a function of sweeping down magnetic field at 0.05 mW (a), and at 6 mW (b) of pumping energy. (c)–(d) The spatial distribution of the magnon phase as function of sweeping down magnetic field at 0.05 mW (c) and at 6 mW (d) pumping energy.
## IV Conclusion
Magnon Bose condensation and the associated magnon supercurrent in antiferromagnetic superfluid \({}^{3}\)He are well known quantum phenomena that have received worldwide recognition [11] and were awarded the F. London Memorial Prize. The formation of a Bose condensate of stationary magnons in solid-state magnets has caused much controversy. Several experimental results obtained earlier in out-of-plane magnetized YIG films were considered indirect evidence of magnon Bose condensation [9; 12]. In this article, magnon Bose condensation is demonstrated experimentally by direct optical observation of the coherent precession of the magnetization far beyond the excitation region. This result contradicts the semiclassical Landau-Lifshitz theory, which directly indicates the quantum nature of the effect. It opens up new perspectives for research in quantum physics, as well as for modern technological applications in magnonics, quantum communications, and quantum computing.
This work was supported by Rosatom in the framework of the Roadmap for Quantum computing (Contract No. 868-1.3-15/15-2021 dated October 5).
|
2305.13903 | Let's Think Frame by Frame with VIP: A Video Infilling and Prediction
Dataset for Evaluating Video Chain-of-Thought | Despite exciting recent results showing vision-language systems' capacity to
reason about images using natural language, their capacity for video reasoning
remains under-explored. We motivate framing video reasoning as the sequential
understanding of a small number of keyframes, thereby leveraging the power and
robustness of vision-language while alleviating the computational complexities
of processing videos. To evaluate this novel application, we introduce VIP, an
inference-time challenge dataset designed to explore models' reasoning
capabilities through video chain-of-thought. Inspired by visually descriptive
scene plays, we propose two formats for keyframe description: unstructured
dense captions and structured scene descriptions that identify the focus,
action, mood, objects, and setting (FAMOuS) of the keyframe. To evaluate video
reasoning, we propose two tasks: Video Infilling and Video Prediction, which
test abilities to generate multiple intermediate keyframes and predict future
keyframes, respectively. We benchmark GPT-4, GPT-3, and VICUNA on VIP,
demonstrate the performance gap in these complex video reasoning tasks, and
encourage future work to prioritize language models for efficient and
generalized video reasoning. | Vaishnavi Himakunthala, Andy Ouyang, Daniel Rose, Ryan He, Alex Mei, Yujie Lu, Chinmay Sonar, Michael Saxon, William Yang Wang | 2023-05-23T10:26:42Z | http://arxiv.org/abs/2305.13903v3 | # Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction
###### Abstract
Despite constituting \(65\%\) of all internet traffic in 2023, video content is underrepresented in generative AI research. Meanwhile, recent large language models (LLMs) have become increasingly integrated with capabilities in the visual modality. Integrating video with LLMs is a natural next step, so how can this gap be bridged? To advance video reasoning, we propose a new research direction of **VideoCOT** on video keyframes, which leverages the multimodal generative abilities of vision-language models to enhance video reasoning while reducing the computational complexity of processing hundreds or thousands of frames. We introduce **VIP1**, an inference-time dataset that can be used to evaluate VideoCOT, containing 1) a variety of real-life videos with keyframes and corresponding unstructured and structured scene descriptions, and 2) two new video reasoning tasks: video infilling and scene prediction. We benchmark various vision-language models on **VIP**, demonstrating the potential to use vision-language models and LLMs to enhance video chain of thought reasoning.
Footnote 1: The source code and dataset will be made available online.
## 1 Introduction
Large language models have seen considerable gains in few-shot performance on a wide array of benchmark reasoning tasks, employing multi-step explanatory contextual demonstration techniques such as chain of thought (CoT) (Wei et al., 2023) to achieve state-of-the-art performance. Vision-language models (VL Models) such as Flamingo and PaLM (Alayrac et al., 2022; Chowdhery et al., 2022) have furthered LLMs' reach by directly incorporating both image and text to perform tasks such as visually-guided open-ended text generation (Zhu et al., 2022), vision question-answering (Wang et al., 2022; Kim et al., 2021), and image captioning, (Li et al., 2022; Liu et al., 2023).
These vision-language models are often resource inefficient and lack multi-step, multimodal reasoning, such as identifying changes between images. One of the proposed solutions to this problem is visual chain of thought (Rose et al., 2023), which combines CoT prompting with vision-language guidance. However, visual chain of thought and general multimodal language models have not been extensively applied to the video domain.
Videos are widely used in the real world, helping us learn in a more engaging way than static text or images. Consequently, there is enormous potential in leveraging videos for computer learning, so video reasoning is essential for the next generation of artificial intelligence. Researchers have begun to advance video reasoning by training models for tasks like Video Question Answering (VideoQA) (Zeng et al., 2016; Tapaswi et al., 2016; Yu et al., 2019) and Video Summarization (Xu et al., 2016; Guadarrama et al., 2013). However, little focus has been paid to asking questions that require understanding multiple video frames. Just like how we learn by comparing what we see at different times, we believe that processing multiple frames is essential for understanding videos, which essentially are a story-like sequence of images.
Videos typically contain 24 frames per second and can range dramatically in length. Training a model to process all this information is computationally expensive for large input sizes, difficult to generalize, and likely would not have the same reasoning capabilities present in LLMs. Therefore, we propose integrating the robust few-shot inference of language models into video analysis using a video chain of thought. Drawing inspiration from chain of thought's "Let's think step by step," we promote complex video understanding by asking: "Let's think frame by frame."
To evaluate VideoCOT, we introduce Video Infilling and Prediction, **VIP**, an inference-time dataset containing real-life videos, extracted
keyframes, and scene descriptions for each respective keyframe. We propose a pipeline to extract the most important frames of a video and return long-form dense captions, which we call unstructured scene descriptions. Inspired by visually descriptive scene descriptions in plays, our VIP pipeline also creates FAMOuS scene descriptions, providing each keyframe's focus, action, mood, objects, and setting. These structured scene descriptions extract specific, important information from the unstructured scene descriptions, which language models can use as visually-descriptive textual context to evaluate video frames. In addition, VIP defines two tasks given video keyframes that can benefit from a visual chain of thought: 1) Video Infilling: generating the frames that logically occurred between two keyframes, and 2) Video Prediction: predicting the subsequent keyframes of a video. By testing multistep reasoning abilities on sparse keyframes, we hope to promote research into creating models that support real-world video understanding and video generation and are resource efficient. We benchmark existing models on VIP and find much room for developing video chain of thought. We provide qualitative examples of our pipeline and model capabilities, and we will soon release our VIP dataset.
We propose the following contributions:
* We develop a pipelined approach to extract keyframes and generate a long-form, unstructured scene description and FAMOuS (Focus, Action, Mood, Objects, and Setting) structured scene descriptions.
* We propose the Video Infilling and Prediction inference-time dataset to evaluate video chain of thought reasoning with two new tasks in frame generation.
* We demonstrate the performance of existing state-of-the-art models on these video reasoning tasks.
## 2 Related Work
**High-level visual understanding of videos.** To evaluate video understanding, datasets often use automatic speech recognition, subtitles, pre-written dialogue, plot summaries, etc. Lei et al. (2018, 2020a); Tapaswi et al. (2016); Miech et al. (2019). Many answers in these datasets rely heavily on this additional textual context (e.g., DeepStory notes that out of 14,944 questions in MovieQA, "only 6,462 questions can be answered using the scenes"). As a result, researchers have proposed datasets whose answers rely on both the input visuals and text Kim et al. (2017); Mun et al. (2017). However, these datasets do not represent real-life videos and rely on this additional textual context, which not all videos contain. There exist some datasets asking questions entirely based on input frames, but their questions are often limited to summaries of videos Xu et al. (2016); Guadarrama et al. (2013) or answerable by a single frame Yu et al. (2019); Zeng et al. (2016); Maharaj et al. (2016). Our dataset narrows understanding from full summaries yet broadens single-frame understanding to test reasoning about multiple frames.
Figure 1: Dense vs. Unstructured Scene Description of a video keyframe, following the FAMOuS format (Focus, Action, Mood, Objects, and Setting).
**Questions about multiple frames.** Several datasets have begun to address multiframe video understanding: [14] works with GIFs, Clever generates 5-second videos from a physics engine [23], and MarioQA utilizes short Mario gameplay videos and event logs [24]. They include descriptive, counting, predictive, and counterfactual questions. Additionally, VideoABC [14] proposes a novel abductive reasoning task for instructional videos: inferring the most likely sequence of keyframes and explaining why false hypotheses are wrong. Our tasks differ because we keep intermediate frames hidden to gauge generative reasoning. Finally, RaMViD [15] uses diffusion models to improve video prediction and infilling. However, they use the BAIR robot pushing dataset [1] for infilling and the Kinetics-600 dataset [1] for prediction, which only contains selected 10-second action clips of YouTube videos; neither represents the variability of real-life videos involving high-level understanding. VIP maintains the complex reasoning questions about multiple frames of all these datasets yet extends their reach to longer, real-life videos, allowing evaluation of higher-level reasoning about the real world.
**Multimodal language models.** Researchers are continually demonstrating impressive performance gains from multimodal language models (ChatGPT [1], OFA [21]). PaLM [10] impressively uses few-shot learning to surpass fine-tuned language models on a variety of NLP tasks. PaLM-E [13] and Flamingo [1] extend downstream performance capabilities into multimodal domains, training vision-language models with interleaved text and images. Researchers have also replicated these huge-scale vision-language models with smaller versions that still attain advanced performance ([24], Open-Flamingo [1], BLIP [15], Llava [16], and Otter [15]). The next step in advancing video understanding is incorporating vision-language models, which is why we propose VIP to evaluate their performance on video chain of thought reasoning.
**Textual Description for Video Understanding.** Most video understanding models encode only the question and video keyframes, and they are trained to return the desired answer (Extended End-to-End Memory Network [23], Deep Embedded Memory Networks [14], spatio-temporal VQA [14]). However, video researchers have begun to jointly encode input
Figure 2: Overview of our pipeline for extracting keyframes and generating scene descriptions, all of which we provide in the VIP dataset. We first use CLIP embeddings of the initial keyframes and the outputs of the object-detection models to prune unnecessary frames. We then extract the ground truth lists of objects from the video description as well as frame image captions to ground the generation of scene descriptions. For each frame, we instruct a language model to turn this grounding input, the detected objects, and dense captions into an unstructured scene description. We then prompt it to create a structured, FAMOuS scene description by extracting its focus, action, mood, objects, and setting.
text with videos or frames to create video-text embeddings (HowTo100M (Miech et al., 2019), VideoStory (Kim et al., 2018)), learn spatial and temporal relations between frames and captions (Merlot (Zellers et al., 2021)), and match videos with relevant natural language descriptions (Deep Embedded Memory Networks - PororoQA (Kim et al., 2017), YouTube2Text (Guadarrama et al., 2013)). VidIL (Wang et al., 2022b) uses few-shot learning with frames, frame captions, and visual tokens to prompt language models for video captioning, video question answering, video caption retrieval, and video future event prediction (Lei et al., 2020b) (selecting the more likely future event given two options). By contrast, VideoCOT introduces novel tasks that evaluate the generation of video frames themselves and creates more in-depth textual descriptions of keyframes. Video4096 (Bhattacharya et al., 2023) has also looked to use multimodal models and instructions to "verbalize" videos, finding that generated stories can enhance downstream video understanding. VideoCOT is different from Video4096 in the following ways: 1) We select general real-life videos, while Video4096 uses storytelling or advertising videos, 2) They create detailed stories about the entire video, while we create textual descriptions of keyframes, and 3) They use their stories for video storytelling and sentiment analysis, which concern the entire video, while we use our scene descriptions to reason about specific video segments.
## 3 Pipeline
To generate high-quality scene descriptions, we propose the following pipeline2.
Footnote 2: [https://sites.google.com/view/videocot/home](https://sites.google.com/view/videocot/home)
1. We perform **keyframe extraction and pruning** to extract the important frames of a video.
2. We then **extract information**, returning the objects in the frame, dense descriptions of the objects, and captions to ground what is happening in the keyframe.
3. Finally, we **generate scene descriptions** about keyframes, returning a dense, unstructured description as well as a structured description of the frame that specifies its focus, action, mood, objects, and setting (FAMOuS).
### Keyframe Extraction
We use videos from an online video repository3, which contains a wide selection of real-life videos as well as high-quality video descriptions that the user uploads.
Footnote 3: [http://jukimmedia.com/videos](http://jukimmedia.com/videos)
Given these videos, our first step towards leveraging LLMs for video understanding is carefully selecting the input keyframes. We use Katna, an open-source tool that automatically extracts video keyframes by considering LUV colorspace differences, brightness, entropy, contrast filtering, clustering of image histograms, and the variance of image Laplacians for blur detection. Though Katna outputs high-quality keyframes, requesting many of them results in multiple redundant frames, while selecting too few sometimes removes necessary keyframes. Our goal of enhancing video understanding while limiting computational complexity requires balancing the number of keyframes against faithfulness to the video.
We therefore extract a large number of keyframes and then _prune_ them by removing frames that are of low quality or most similar to one another. First, we remove blurry frames with low Laplacian blur scores, then remove frames where both Detic and GRiT detect few objects. Both models are sensitive to objects, so detecting only a few objects indicates a low-quality frame.
We then use CLIP to embed the frame images and the list of unique detected objects from Detic to account for pixel similarity and object invariance in the frame, respectively. GRiT returns many objects and dense captions, whose combined output is too large to embed. We take the average of the cosine similarity scores among consecutive keyframes, then prune the frames with the highest pairwise similarity until we reach the desired number of keyframes. As an additional check for necessary keyframes, we do not remove a keyframe containing a character unless one of the surrounding frames also contains that character.
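A minimal sketch of this pruning step is given below. It assumes pre-computed CLIP embeddings for each keyframe image and for its Detic object list; the equal weighting of the two similarity scores and the omission of the character check are simplifying assumptions, so the released pipeline may differ in detail.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def prune_keyframes(frames, image_embs, object_embs, target_count):
    """Repeatedly drop the later frame of the most similar consecutive pair
    until only `target_count` keyframes remain. `image_embs`/`object_embs` are
    assumed to be CLIP embeddings of each frame and of its detected-object list."""
    frames = list(frames)
    image_embs, object_embs = list(image_embs), list(object_embs)
    while len(frames) > target_count:
        # average of image and object-list similarity for consecutive pairs
        sims = [0.5 * (cosine(image_embs[i], image_embs[i + 1]) +
                       cosine(object_embs[i], object_embs[i + 1]))
                for i in range(len(frames) - 1)]
        i = int(np.argmax(sims))            # most redundant consecutive pair
        for seq in (frames, image_embs, object_embs):
            del seq[i + 1]                  # drop the later frame of the pair
    return frames
```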
### Extracting information from Keyframes
To generate descriptive scene descriptions, we extract as much visual information as possible from the keyframes. We return three main things from each keyframe: a list of objects and their locations, dense captions describing each object and their locations, and a caption about the scene.
To extract the list of objects, we use the state-of-the-art object detection model, Detic (Zhou et al., 2022). For each object we extract, we also instruct Detic to output its bounding box that contains the object and a score signifying the confidence of the prediction.
While Detic performs well with object detection, it doesn't provide much detail about the objects (i.e., Detic returns "chair" instead of "red chair on a clean marble floor"), which is important to generate high-quality scene descriptions. So, we use another model, GRiT (Wu et al., 2022), which returns dense captions describing each object in an image. GRiT also produces the bounding box and the confidence for each prediction. We use a combination of both outputs because we find that Detic has more accurate object detection and GRiT is far more descriptive.
Finally, we obtain the grounding details on what is happening in the scene using the image-captioning model BLIP (Li et al., 2022).
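The per-frame extraction step can be sketched as follows. The wrappers `run_detic`, `run_grit`, and `run_blip` are hypothetical stand-ins for the actual Detic, GRiT, and BLIP inference code (here they return dummy values so the sketch runs); only the structure of the record passed to the scene-description stage follows the text above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

BBox = Tuple[int, int, int, int]

# Hypothetical stand-ins for the real Detic / GRiT / BLIP inference code.
def run_detic(image) -> List[Tuple[str, BBox, float]]:
    return [("chair", (10, 40, 120, 200), 0.92)]          # (label, bbox, confidence)

def run_grit(image) -> List[Tuple[str, BBox, float]]:
    return [("red chair on a clean marble floor", (10, 40, 120, 200), 0.88)]

def run_blip(image) -> str:
    return "a person sitting on a chair in a bright room"

@dataclass
class FrameInfo:
    """Visual information extracted from one keyframe."""
    objects: List[Tuple[str, BBox, float]] = field(default_factory=list)        # Detic
    dense_captions: List[Tuple[str, BBox, float]] = field(default_factory=list) # GRiT
    caption: str = ""                                                           # BLIP

def extract_frame_info(image) -> FrameInfo:
    return FrameInfo(objects=run_detic(image),
                     dense_captions=run_grit(image),
                     caption=run_blip(image))
```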
### Generating Scene Descriptions
To generate scene descriptions, we utilize the output from the image captioning model (BLIP), the object detection model (Detic), and the dense captioning model for objects (GRiT). However, no model is perfect, and sometimes these models misclassify and hallucinate, affecting the quality of the scene descriptions. To address this issue, we extract the objects from the ground truth video description and use the confidence scores outputted by the models. Now we have the ground truth list of objects, the keyframe caption, the object detection output of GRiT and Detic, and their corresponding confidence scores. We prompt GPT-4 to synthesize all of this information while keeping in mind that the output from the models may not be completely accurate.
After this step, GPT-4 outputs a dense, descriptive caption, which provides far more detail than simple phrase-level captioning. However, some tasks may benefit from a structured description format that extracts specific information from the dense, unstructured scene description. To address these needs, we feed the dense, unstructured scene description into GPT-4 and extract its focus, action, mood, objects, and setting. These FAMOuS categories form VIP's structured scene descriptions.
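The two-stage prompting described above can be sketched as follows; `call_llm` is a hypothetical wrapper around a GPT-4 call, and the prompt wording is illustrative rather than the exact prompt used to build the dataset.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a GPT-4 chat-completion call."""
    raise NotImplementedError("replace with an actual API call")

FAMOUS_FIELDS = ["Focus", "Action", "Mood", "Objects", "Setting"]

def unstructured_description(ground_truth_objects, blip_caption, detic, grit):
    prompt = (
        "Write a dense, visually descriptive caption of a single video frame.\n"
        f"Ground-truth objects from the video description: {ground_truth_objects}\n"
        f"Frame caption (BLIP): {blip_caption}\n"
        f"Detected objects with confidences (Detic): {detic}\n"
        f"Dense object captions with confidences (GRiT): {grit}\n"
        "The model outputs above may contain errors; rely more on entries with "
        "high confidence and on the ground-truth object list."
    )
    return call_llm(prompt)

def structured_description(unstructured: str) -> str:
    prompt = (
        "Extract the following fields from the scene description, one per line, "
        f"in the form 'Field: ...': {', '.join(FAMOUS_FIELDS)}.\n\n{unstructured}"
    )
    return call_llm(prompt)
```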
However, we still cannot use the generated scene descriptions from GPT-4 as ground truth because we cannot verify their correctness, as GPT-4 is also prone to hallucinations. To address this, we use M-Turk, a crowdsourcing platform, and ask crowd workers to fix and validate the scene descriptions given the keyframe, the scene description, and a description of the video. We then feed each updated scene description into GPT-4 to correct the corresponding dense, unstructured scene description. To validate the unstructured scene descriptions, we ask M-Turk workers to vote on whether they make sense and, if not, to suggest how they should be fixed. We edit the unstructured scene descriptions accordingly until the workers verify that each description is accurate.
These scene descriptions augment the keyframes with detailed textual information to input to LLMs, enhancing VideoCOT reasoning.
## 4 Tasks
Within the VIP dataset, we propose two overarching tasks in video completion that necessarily require multiframe reasoning: 1) video infilling, which involves discerning changes and generating frames between any two given keyframes, and 2) scene prediction, which focuses on anticipating subsequent events given a sequence of keyframes. Video infilling and prediction of keyframes can be used in various downstream contexts that can benefit from video understanding and completion. We note that infilling or predicting scene descriptions themselves is useful because images can be generated or grounded by these vivid frame descriptions of images. Further, generated scene descriptions can promote subsequent video infilling or prediction in a "let's think frame by frame" manner.
In contrast to single images, videos are comprised of a multitude of frames, consequently demanding substantial computational resources for processing. However, it is worth noting that numerous frames within a video may be redundant, and only a select number of keyframes are essential for understanding the video. Therefore, the tasks we propose aim to gauge the efficacy of video chain of thought in promoting multiframe video understanding while circumventing the expensive analysis of every video frame.
For the video infilling and video prediction tasks, we represent the sequence of chronological keyframes as \(k_{1},...,k_{n}\), their respective unstructured scene descriptions as \(u_{1},...,u_{n}\), and their structured FAMOuS scene descriptions as \(s_{1},...,s_{n}\).
### Video Infilling
The video infilling task involves predicting the intermediate keyframes between two given frames that follow each other chronologically within a video sequence. Our input is keyframes \(k_{i}\) and \(k_{j}\) and their respective scene descriptions \(u_{i}\) and \(u_{j}\) or \(s_{i}\) and \(s_{j}\) (unstructured or structured), such that frame \(j\) occurs after frame \(i\). The task is to predict the scene descriptions \(u_{i+1},...,u_{j-1}\) or \(s_{i+1},...,s_{j-1}\) for the in-between keyframes \(k_{i+1},...,k_{j-1}\), depending on which type of scene description was used as input. This task requires models to capture a scene's temporal variations and transitions, including changes in visual elements, object positions, and contextual factors. The task's difficulty scales exponentially with an increasing number of missing in-between frames. By successfully predicting the intermediate keyframes, models demonstrate their ability to comprehend the dynamic evolution of scenes and identify critical points of change in a video sequence.
This task requires models to exhibit complex reasoning about the visual dynamics, temporal relationships, and contextual cues present in the video. Successful performance in the scene prediction task showcases the models' ability to infer and project the plausible progression of events, enabling them to anticipate and generate realistic future frames that align with the underlying video content.
### Video Prediction
The video prediction task aims to anticipate future frames in a video sequence. The input is a variable number of preceding frames, denoted as \(k_{t-m},...,k_{t}\), along with their respective unstructured or structured scene descriptions \(u_{t-m},...,u_{t}\) or \(s_{t-m},...,s_{t}\). Utilizing the preceding keyframes, the goal is to predict the next \(n\) keyframes \(k_{t+1},...,k_{t+n}\) or their scene descriptions
Figure 3: Video Infilling Task Example; the black text indicates the input to a generative model and the blue text indicates the output
-- \(u_{t+1},...,u_{t+n}\) or \(s_{t+1},...,s_{t+n}\). Much like the video infilling task, this task also scales in difficulty as we decrease the number of input keyframes and increase the number of output keyframe predictions.
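For concreteness, the sketch below shows one way an infilling or prediction query could be assembled from FAMOuS scene descriptions for few-shot prompting; the dictionary layout and prompt wording are illustrative assumptions, not the exact format used in our experiments.

```python
from typing import Dict, List

FamousDesc = Dict[str, str]   # keys: focus, action, mood, objects, setting

def famous_to_text(d: FamousDesc) -> str:
    return "\n".join(f"{k.capitalize()}: {d[k]}" for k in
                     ("focus", "action", "mood", "objects", "setting"))

def infilling_prompt(s_i: FamousDesc, s_j: FamousDesc, n_missing: int) -> str:
    return (f"Frame i:\n{famous_to_text(s_i)}\n\nFrame j:\n{famous_to_text(s_j)}\n\n"
            f"Describe the {n_missing} keyframes between frame i and frame j, "
            "thinking frame by frame.")

def prediction_prompt(context: List[FamousDesc], n_future: int) -> str:
    ctx = "\n\n".join(f"Frame {t}:\n{famous_to_text(s)}" for t, s in enumerate(context, 1))
    return f"{ctx}\n\nPredict the next {n_future} keyframes, thinking frame by frame."

# Illustrative example (values invented for demonstration only)
example = {"focus": "a man", "action": "opening a gift", "mood": "joyful",
           "objects": "gift box, ribbon", "setting": "living room"}
print(infilling_prompt(example, example, n_missing=2))
```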
## 5 Experiments
We outline qualitative examples of benchmarking multimodal language models on the VIP dataset. We will release the full dataset and a more thorough benchmarking of multimodal language models. Currently, we provide examples of the VIP tasks on the multimodal language model Otter (Li et al., 2023), and we simplify our tasks to infilling or predicting a single frame, respectively. We highlight examples of generating relevant scene descriptions, which can then be used to generate more accurate frames.
Even with these task simplifications, Otter struggles to generate accurate scene descriptions. This observation underscores the inherent complexity and challenges of reasoning over a larger number of keyframes. Because the task's difficulty can grow significantly, we deem it important to benchmark existing multimodal language models and promote research into video chain of thought.
### Video Infilling Qualitative Analysis
Shown in Figure 3, we evaluate Otter's performance on the video infilling task with three different input contexts -- passing in only keyframes, passing in keyframes and unstructured scene descriptions, and passing in keyframes and structured scene descriptions. In the latter two contexts, we prompt the model to return either an unstructured or structured scene description, respectively. When provided only keyframes, Otter fails to discern what happened between the two given keyframes accurately and only describes the information in the last frame. When providing our unstructured scene descriptions, Otter demonstrates improvement but still cannot detect the opening of the gift, demonstrating the complexity of the task. However, when provided with our structured scene descriptions, Otter can detect the opening of the gift in detail, suggesting that our scene descriptions can effectively leverage models' reasoning capabilities to promote video chain of thought reasoning.
### Video Prediction Qualitative Analysis
In Figure 4, a similar pattern emerges in the video prediction task. Whereas only inputting keyframes results in a feasible yet undesirable prediction, utilizing our scene descriptions substantially enhances
Figure 4: Video Prediction Task Example; the black text indicates the input to a generative model, and the blue text indicates the output
Otter's ability to predict the ideas in a subsequent keyframe accurately. Otter predicts a markedly improved future scene of the actual wedding proposal in the video, and it exhibits proficient reasoning capabilities by effectively speculating about a celebratory context. These findings emphasize the valuable role played by scene descriptions in facilitating Otter's reasoning ability in the video prediction task.
## 6 Conclusion
We present the Video Infilling and Prediction (VIP) dataset to evaluate the ability of vision-language models to perform a video chain of thought on the keyframes of a video. We also introduce a novel pipeline to extract keyframes and then generate corresponding structured and unstructured scene descriptions. Inputting our generated scene descriptions helps improve performance, though there is significant room for improvement in video chain of thought. We encourage using VIP as an inference-time dataset to evaluate video chain of thought with different models and strategies for effective in-context learning, ultimately to promote complex video understanding and generation.
|
2302.06417 | Analog, In-memory Compute Architectures for Artificial Intelligence | This paper presents an analysis of the fundamental limits on energy
efficiency in both digital and analog in-memory computing architectures, and
compares their performance to single instruction, single data (scalar) machines
specifically in the context of machine inference. The focus of the analysis is
on how efficiency scales with the size, arithmetic intensity, and bit precision
of the computation to be performed. It is shown that analog, in-memory
computing architectures can approach arbitrarily high energy efficiency as both
the problem size and processor size scales. | Patrick Bowen, Guy Regev, Nir Regev, Bruno Pedroni, Edward Hanson, Yiran Chen | 2023-01-13T21:04:16Z | http://arxiv.org/abs/2302.06417v1 | # Analog, In-memory Compute Architectures for Artificial Intelligence
###### Abstract
This paper presents an analysis of the fundamental limits on energy efficiency in both digital and analog in-memory computing architectures, and compares their performance to single instruction, single data (scalar) machines specifically in the context of machine inference. The focus of the analysis is on how efficiency scales with the size, arithmetic intensity, and bit precision of the computation to be performed. It is shown that analog, in-memory computing architectures can approach arbitrarily high energy efficiency as both the problem size and processor size scales.
## I Introduction
This work is focused on minimizing the energy required to evaluate neural networks, particularly in the linear layers which comprise the overwhelming majority of the computation. The linear operators that describe convolutional neural network layers can be often be characterized by three qualities: they are sparse, high in dimensionality, and high in arithmetic intensity, where arithmetic intensity is defined as the ratio between the number of basic operations (i.e. multiplications and additions) and the number of bytes read and written. This paper shows that, in the context of operators that are both high in dimensionality and arithmetic intensity, an analog in-memory computing device can drastically reduce the energy required to evaluate the operator compared to a von Neumann machine. Moreover, the degree of increased efficiency of the analog processor is related to the scale of the processor.
In a classical von Neumann machine, the energy required to evaluate an operator can be broken into two components: memory access energy and computational energy. Within a typical CPU, and depending on the workload, these components can consume the same order of magnitude of the total energy. Memory access related energy can easily outgrow computational energy consumption, particularly when used to evaluate sequential large linear operators like those used in neural network inference. The goal of this paper is to find high-level architectures that can reduce the energy consumption of neural network algorithms by orders of magnitude, which requires addressing both memory access energy and computational energy. Here we show that an in-memory compute accelerator architecture can reduce memory access energy when applied to an operator/algorithm with high arithmetic intensity, while an analog processor/accelerator can reduce computational energy when specialized for particular classes of linear operators. A processor architecture that takes advantage of both in-memory compute and is analog in nature can in principle reduce the overall computational energy consumption by orders of magnitude, with the amount of reduction depending on the scale and arithmetic intensity of the algorithm to be performed and the analog processor's specialization in performing a specific set of operators.
In-memory compute architectures were originally designed to speed up the processing of algorithms that are parallelizable and applied to large datasets. One of the earliest examples dates back to the 1960s with Westinghouse's Solomon project. The goal of that project was to accelerate the computer up to 1 GFLOPS by applying a single instruction to a large array of Arithmetic Logic Units (ALUs). This is perhaps the first instance of several closely related concepts: single instruction, multiple data (SIMD) machines, vector/array
processors, systolic arrays and in-memory/near-memory compute devices.
Today, exploiting parallelism in high-arithmetic-intensity algorithms using parallel hardware remains a well-known technique to accelerate a computation along the time dimension. More recently, however, vector/array processors have been utilized to decrease compute energy rather than compute time, which they accomplish by reducing the energy associated with memory accesses. Google's TPU is a good example of a systolic array being used as a near-memory compute device with digital processing elements [1; 2]. In sec. III, we explain how in-memory compute devices can reduce memory access energy in the case of linear operators with high arithmetic intensity.
Separately, analog computing has recently been proposed as an approach to reduce the computational energy consumption, again for large, linear operations. In sec. IV we present a general model of analog computation that focuses on how energy consumption scales with problem size and bit precision, and show that computational energy can be reduced by orders of magnitude by using an analog processor that is specialized to implement specific classes of operators. Reconfigurable analog processors are by nature in-memory compute devices, and so these classes of processors are shown to reduce overall computational energy by orders of magnitude for particular operators.
## II CPU energy consumption
We begin by finding the energy efficiency of a computer performing multiply-accumulate (MAC) operations, which are the core of linear operators used in deep learning. The total energy required to perform a linear operation can be decomposed into memory access energy and computational energy:
\[E_{tot}=N_{m}e_{m}+N_{op}e_{op}, \tag{1}\]
where \(N_{m}\) is the number of memory accesses, \(e_{m}\) is the average energy per access, \(N_{op}\) is the number of operations required to evaluate the overall operator, and \(e_{op}\) is the average energy per operation (e.g., add, multiply, etc). We define the computational efficiency as the number of operations per unit energy performed by the computer:
\[\eta\equiv N_{op}/E_{tot}=\frac{1}{(N_{m}/N_{op})e_{m}+e_{op}}. \tag{2}\]
In a simple CPU with a single instruction, single data (SISD) architecture in Flynn's taxonomy and a flat memory hierarchy, for each operation that is performed, a value is read from memory for the current partial sum, the operator weight, and the input activation. The three values are operated upon, and the result is written back to memory. Therefore, regardless of the actual size of the weights or activations, the number of memory accesses per operation will always be four (i.e. three reads and one write), and the number of computational operations (multiply and add) will be 2. This results in \(N_{m}=2N_{op}\) and a computational efficiency of
\[\eta=\frac{1}{2e_{m}+e_{op}}. \tag{3}\]
In modern CMOS devices, both \(e_{m}\) and \(e_{op}\) are on the order of magnitude of 1 pJ [3], as will later be shown in table IV. This places an approximate limit on the computational efficiency of most traditional architectures on the order of 0.1-1 TOPS/W, which is consistent with state of the art performance [4].
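For concreteness, eq. (3) can be evaluated with representative energies of about 1 pJ; the specific values in the sketch below are assumptions used only to reproduce the order-of-magnitude estimate quoted above.

```python
# Efficiency of a SISD machine, eq. (3): eta = 1 / (2*e_m + e_op).
e_m = 1e-12    # J per memory access (assumed, ~1 pJ)
e_op = 1e-12   # J per multiply or add (assumed, ~1 pJ)

eta = 1.0 / (2 * e_m + e_op)              # operations per joule
print(f"eta ~ {eta / 1e12:.2f} TOPS/W")   # ~0.33 TOPS/W, within the quoted 0.1-1 TOPS/W range
```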
## III Minimizing memory access energy with in-memory compute
One of the major downsides of SISD machines is that they can end up accessing the same memory element multiple times in the course of evaluating a large operator, which wastes memory access energy. This is ultimately reflected in the ratio \(N_{op}/N_{m}=1/2\) that is fixed by the nature of a SISD machine. Alternatively, one can imagine finding another hypothetical architecture that is arranged in some energetically optimal way to where all of the inputs are only read once from memory, and all outputs are only written once to memory in the course of the computation. If that were done, this would represent the minimum total access energy required to evaluate the linear operator. In other words, \(N_{m}\) would reach its minimum value, and the ratio \(N_{op}/N_{m}\) would be maximized.
While a particular processor might only be able to implement a certain \(N_{op}/N_{m}\) ratio, this ratio is also limited by the algorithm being performed, and is commonly referred to as the _arithmetic intensity_ of the algorithm:
\[a\equiv N_{op}/N_{m}. \tag{4}\]
An _in-memory compute_ device [5] as illustrated in fig. 1 can leverage the arithmetic intensity of an algorithm by reading a large set of both operator data and input vector data from memory at once and operating on all of the data together before writing the output back to memory. If the in-memory compute device is sufficiently large and complex, all of the necessary operations involving this data can be performed without any of the inputs being read a second time from memory in the future.
Returning to eq. (1), we set a lower bound on the amount of memory access energy that must be expended for the von Neumann machine to evaluate the operator in terms of the arithmetic intensity. This in turn leads to a limit on the computational efficiency:
\[\eta=\frac{1}{e_{m}/a+e_{op}} \tag{5}\]
The contribution to computational efficiency from memory access energy can therefore be brought arbitrarily low when implementing an operator with arbitrarily high arithmetic intensity. The reduction in the contribution from memory access energy with increasing arithmetic intensity in eq. (5) is reflective of the energy savings in systolic arrays and TPUs [1; 2].
We note that the kind of analysis presented in eq. (5) is analogous to roofline models of processors [6]; however, the emphasis here is on energy consumption, while the latter is focused on identifying bottlenecks in processor speed.
In order to sample what degree of advantage in-memory compute devices can bring, we examine a few examples of linear operators and present their arithmetic intensities. For a general matrix multiplication of a matrix of size \(L\times N\) times a matrix of dimension \(N\times M\) the total number of memory accesses is \(N_{m}=LN+NM+LM\), and the number of operations is \(N_{op}=2NML\), where additions and multiplications are treated as separate operations. The arithmetic intensity in this case is:
\[a=\frac{2NML}{LN+NM+LM}, \tag{6}\]
which approaches \(\infty\) as \(N,M,L\rightarrow\infty\) collectively.
For a convolution, the arithmetic intensity can similarly become arbitrarily large, since a convolution can be implemented as a matrix-matrix multiplication. This is typically done by rearranging the input data into a toeplitz matrix using what is known as an im2col() operation. The general algorithm of implementing convolution using matrix multiplication in a systolic array is shown in fig. 2, where \(n\times n\) is the size of one input channel, \(C_{i}\) is the number of input channels, \(k\times k\) is the size of one of the kernel channels, and \(C_{i+1}\) is the number of output channels (and, consequently, also the number of individual 3-D kernels). The toeplitz formed by replicating and rearranging the activation data results in an \((n-k+1)^{2}\times k^{2}C_{i}\) matrix. A convolution is performed by multiplying this with a \(k^{2}C_{i}\times C_{i+1}\) matrix containing the weights. Therefore, when implementing a convolution using matrix multiplication we generally have matrix dimensions,
\[L =(n-k+1)^{2}\approx n^{2} \tag{7a}\] \[N =k^{2}C_{i}\] (7b) \[M =C_{i+1}. \tag{7c}\]
which results in an arithmetic intensity,
\[a=\frac{2n^{2}k^{2}C_{i}C_{i+1}}{n^{2}k^{2}C_{i}+k^{2}C_{i}C_{i+1}+n^{2}C_{i+1 }}. \tag{8}\]
However, since the activation data was replicated approximately \(k^{2}\) times in order to form the input matrix, the arithmetic intensity is significantly reduced relative to a processor that natively implements convolution instead of general matrix multiplication. To see this, consider again the convolutional layer of an \(n\times n\) input image with \(C_{i}\) input channels, \(C_{i+1}\) output channels, and a \(k\times k\) kernel. The input vector size is \(N_{i}=n^{2}C_{i}\), and the number of kernel weights is \(K=k^{2}C_{i}C_{i+1}\). If only the necessary weight and activation data were required to be read, the arithmetic intensity of the \(i^{th}\) layer would become
\[a\approx\frac{2n^{2}k^{2}C_{i}C_{i+1}}{n^{2}(C_{i}+C_{i+1})+k^{2}C_{i}C_{i+1 }}. \tag{9}\]
In the limit where \(n^{2}>>k^{2}C_{i}\), this is roughly \(k^{2}\) higher arithmetic intensity than when convolution is implemented using matrix multiplication.
Whether convolution is implemented natively or using matrix-matrix multiplication, eq. (9) shows that, as \(n,k,C_{i}\rightarrow\infty\), arithmetic intensity becomes arbitrarily large, making the contribution from memory access energy in eq. (5) arbitrarily small. Indeed, in most modern convolutional neural networks, these parameters are large and yield high arithmetic intensity, as shown in table 1. Depending on the size of the memory banks (which determine memory access energy), and based on the reference numbers given in table 4 for SRAM access energy and digital MAC operation, an in-memory compute processor implementing an algorithm with high arithmetic intensity can be made to expend negligible memory access energy relative to the computational energy.
## IV Reducing computational energy with analog computing
Unfortunately, by Amdahl's law, even if the memory access energy is made arbitrarily small, the computational energy consumed by the logical units will limit the overall performance gains to be made. In order to improve the overall efficiency by orders of magnitude, both contributions need to be addressed.
Figure 1: Illustration of a digital compute-in-memory processor.
Recently, various types of analog computing, from electrical to optical, have been proposed as techniques to reduce computational energy consumption. Electronic analog computing typically centers around crossbar arrays of resistive memory (or ReRAM) [7; 8; 9]. Optical analog processors are commonly based on silicon photonics [10; 11; 12; 13]. Optical 4F systems have been explored since the 1980s as a higher dimensional form of compute [14; 15], and simple scattering off of optical surfaces is also being explored [16; 17; 18].
The argument for analog computing is fundamentally a scaling one: analog computing has particular advantages when applied to large, linear operators with low bit precision [19]. To see this, consider a general analog processor (shown in fig. 3(a)) that takes \(N\) numbers of \(B\)-bit precision input data, produces \(M\) numbers of \(B\)-bit precision output data, and is configured by \(K\) weights with \(B\)-bit precision which represent the matrix. The analog processor is first configured by converting the \(K\) weights using digital-to-analog converters (DACs) and applying these values to the modulators in the analog processor. Then the \(N\) inputs are read from memory, and DACs are used to apply \(N\) analog inputs to the processor. By the physics of the processor, this naturally results in
| **Network** | **# of layers** | **median \(n\)** | **median \(C_{i}\)** | **max \(N\)** | **avg. \(k\)** | **total \(K\)** | **median \(C_{i+1}\)** | **median \(a\)** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DenseNet201 | 200 | 62 | 128 | 1.6e+07 | 2.0 | 1.8e+07 | 128 | 292 |
| GoogLeNet | 59 | 61 | 480 | 3.9e+06 | 2.1 | 6.1e+06 | 128 | 200 |
| InceptionResNetV2 | 244 | 60 | 320 | 8.0e+06 | 1.9 | 8.0e+07 | 192 | 291 |
| InceptionV3 | 94 | 60 | 192 | 8.0e+06 | 2.4 | 3.7e+07 | 192 | 295 |
| ResNet152 | 155 | 63 | 256 | 1.6e+07 | 1.7 | 5.8e+07 | 256 | 390 |
| VGG16 | 13 | 249 | 256 | 6.4e+07 | 3.0 | 1.5e+07 | 256 | 2262 |
| VGG19 | 16 | 186 | 256 | 6.4e+07 | 3.0 | 2.0e+07 | 384 | 2527 |
| YOLOv3 | 75 | 62 | 256 | 3.2e+07 | 2.0 | 6.2e+07 | 256 | 504 |

Table 1: Summary of convolutional layer parameters of various well-known neural networks considering a 1-Mpixel (per channel) input image.
Figure 2: Algorithmic implementation of a convolution using matrix multiplication in a weight-stationary systolic array. The input data is converted into a toepliz matrix and fed into the systolic array, with each row delayed one time step behind the one above it.
analog outputs, which are converted back to the digital domain using analog-to-digital converters (ADCs). If the analog processor is somehow already configured, or never needs to be reconfigured, then the total energy consumed will be only that of the DACs for the inputs and ADCs for the outputs:
\[E_{op}\equiv N_{op}e_{op}=N(e_{dac,1}+e_{adc}), \tag{10}\]
where we have assumed \(N=M\) for simplicity. While
Figure 3: (a) System-level view analog, in-memory compute processors. The analog device is configured using DACs to either hold activations or weights, while the other is provided as input. (b) Detailed view of a ReRAM crossbar analog electronic in-memory compute processor. Each transistor is connected to a reconfigurable resistor, the conductance of which determines the effective weight of each element in the matrix. (c) Detailed view of a silicon photonic in-memory compute processor. Each transistor is connected to an electro-optic element that changes the scattering parameters through each intersection.
the right-hand-side of eq. (10) represents the computational energy consumed by the analog processor, the left-hand-side represents the equivalent number of digital operations performed (\(N_{op}\)) times the energy that each of those operations would have to take (\(e_{op}\)) in order for a digital computer to achieve the same efficiency as the analog computer. Since \(N_{op}=2N^{2}\) for matrix multiplication, if this operation were performed digitally, the expended computational energy would be proportional to the number of operations: \(E_{op}=2e_{op}N^{2}\). The conclusion is that _analog computing reduces matrix multiplication from \(\mathcal{O}(N^{2})\) in energy to \(\mathcal{O}(N)\) in energy_. This furthermore implies that the effective energy per operation of analog computing scales inversely to the size of the problem, i.e.
\[e_{op}\propto 1/N. \tag{11}\]
We note that in practice the scaling \(N\) is defined either by the size of the processor or the size of the problem, whichever is smaller.
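The scaling argument of eqs. (10) and (11) can be made concrete with a small numerical comparison; the per-conversion and per-MAC energies below are illustrative assumptions.

```python
# Energy to multiply an N-vector by an N x N matrix, assuming the analog
# processor is already configured (eq. (10)) versus a digital processor.
e_dac = 1e-12   # J per input DAC conversion (assumed)
e_adc = 1e-12   # J per output ADC conversion (assumed)
e_op  = 1e-12   # J per digital multiply or add (assumed)

for n in (128, 1024, 8192):
    e_analog  = n * (e_dac + e_adc)      # O(N): conversions only
    e_digital = 2 * n**2 * e_op          # O(N^2): one multiply and one add per matrix element
    print(f"N={n:5d}:  analog/digital energy = {e_analog / e_digital:.4f}")   # ~1/N
```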
### Vector-Matrix Multiplication
For most problems involving neural networks, the analog processors that can be created are not large enough to store the entire neural network. In this case, the reconfiguring of the weights in the analog processor itself can destroy the \(\mathcal{O}(N)\) scaling advantage. To see this, consider the multiplication of a vector of length \(N\) with a matrix of dimensions \(N\times M\). In this case, we have,
\[N_{op}e_{op}=2Ne_{dac,1}+2MNe_{dac,2}+2Me_{adc}. \tag{12}\]
We have also separated the DAC energies \(e_{dac,1}\) and \(e_{dac,2}\) since different physical mechanisms and loads are sometimes used to configure an analog computer versus feed it with analog inputs. Here, \(e_{dac,1}\) is used to represent the energy required per input, while \(e_{dac,2}\) is used to represent the energy required per reconfiguration.
Typically, in analog computing technologies, the analog in-memory compute device can only store either positive-definite numbers (as in the example of memristors) or fully complex numbers (as in the case of coupled Mach-Zehnder interferometers). If only positive numbers can be created, then the entire calculation must be done twice and the difference of the results taken in order to take into account both positive and negative matrix values. On the other hand, when complex values are allowed, as in the case of silicon photonic MZIs, there are two voltages (and hence two DAC operations) required to configure each coupled MZI modulator. Additionally, for coherent optical measurements, an interference technique must be used to recover the positive and negative field components from the photodetectors, which can only measure the norm square of the field. Hence, regardless of the analog compute scheme, each term in eq. (12) must practically be multiplied by a factor of two in order to handle both positive and negative values.
Applying eq. (12) to vector-matrix multiplication, we obtain:
\[e_{op}=e_{dac,1}/M+e_{dac,2}+e_{adc}/N, \tag{13}\]
in which case the middle term is proportional neither to \(1/N\) nor \(1/M\).
### Matrix-Matrix Multiplication
The aforementioned situation is relieved in the case of matrix-matrix multiplication. In this case the configuration of the analog computer itself is reused for every row of the input matrix, restoring the energy cost per operation to be inversely proportional to the problem scaling. In the case of an \(L\times N\) matrix times an \(N\times M\) matrix, we have
\[e_{op}=e_{dac,1}/M+e_{dac,2}/L+e_{adc}/N \tag{14}\]
since \(N_{op}=2NML\) in this case. Since each of the three separate contributions to the energy consumption is decreased by a factor proportional to the three different dimensions associated with the matrices being multiplied, the effective energy per operation decreases as the problem scale increases. In the case of a finite-sized analog processor, the last two contributions will ultimately be limited by the two dimensions (number of inputs and outputs) of the analog processor itself.
At this point, a distinction needs to be made between the size of the matrices involved in the neural net architecture and the physical dimensions of the analog processor. We label the matrix dimensions with primes, i.e. \(M^{\prime}\), \(N^{\prime}\), and \(L^{\prime}\), and label the physical dimensions of the processor with hats: \(\hat{M}\), \(\hat{N}\). The actual factors by which energy is saved (i.e. \(M\) and \(N\) in eq. (14)) are given by the smaller of these two numbers:
\[M =\min\{\hat{M},M^{\prime}\} \tag{15a}\] \[N =\min\{\hat{N},N^{\prime}\}. \tag{15b}\]
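A short sketch of eqs. (14) and (15), including the clamping of the amortization factors to the physical processor dimensions, is given below; the energy values and processor size are illustrative assumptions.

```python
def effective_e_op(m_prime, n_prime, l_prime, m_hat, n_hat,
                   e_dac1=1e-12, e_dac2=1e-12, e_adc=1e-12):
    """Effective energy per operation for matrix-matrix multiplication, eq. (14),
    with amortization factors limited by the processor size, eq. (15)."""
    m = min(m_hat, m_prime)
    n = min(n_hat, n_prime)
    return e_dac1 / m + e_dac2 / l_prime + e_adc / n

# Illustrative case: (L' x N') x (N' x M') with a 256 x 256 analog core (assumed).
print(effective_e_op(m_prime=256, n_prime=1152, l_prime=3844, m_hat=256, n_hat=256))
# Vector-matrix multiplication corresponds to L' = 1: the reconfiguration term
# e_dac2 is then not amortized at all, recovering eq. (13).
print(effective_e_op(m_prime=256, n_prime=1152, l_prime=1, m_hat=256, n_hat=256))
```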
### Convolution
As in the case of digital processors, analog processors can also implement convolution using matrix-matrix multiplication. The mapping of the kernel and activation data to matrix dimensions remains the same, i.e.
\[L^{\prime} =(n-k+1)^{2}\approx n^{2} \tag{16a}\] \[N^{\prime} =k^{2}C_{i}\] (16b) \[M^{\prime} =C_{i+1} \tag{16c}\]
when a weight-stationary scheme is implemented. These numbers are permuted for an activation-stationary scheme. As with digital processors, one of the unfortunate aspects of representing convolution as pure matrix multiplication is
that the input activations get duplicated \(k^{2}\) times, which means \(k^{2}\) more DAC operations (and possibly memory accesses as well) than in a processor that natively implements convolution rather than general matrix multiplication. The consequence of this is that \(M^{\prime}\) is by far the smallest of the dimensions in eq. (16), and therefore analog processors that implement convolution as matrix multiplication get the least amortization over their input DACs in eq. (14). The median values of \(L^{\prime}\), \(N^{\prime}\), and \(M^{\prime}\) for various neural networks are presented in table 2.
## V Operator-Specialized Analog Processors
Thus far, we have seen that 1) the contribution of memory access energy to compute efficiency can be brought arbitrarily low by implementing networks with large arithmetic intensity on specialized processors, and 2) analog processors can further reduce computational energy consumption when performing matrix multiplication. The reduction in computational energy is proportional to the size of the matrix the analog processor can handle.
One of the inherent disadvantages of planar, matrix-multiplication-based processors in performing convolutions is that the matrix formed from the input has dimensions \((n-k+1)^{2}\times k^{2}C_{i}\), which is a factor of \(k^{2}\) larger than the actual activation data. When the convolution is performed digitally this is of little consequence because the number of MACs required is the same for this matrix multiplication as it is for convolution: \((n-k+1)^{2}k^{2}C_{i}\). However, when the matrix multiplication is performed with an analog processor, using a matrix with \(k^{2}\) more rows than necessary requires \(k^{2}\) more DAC operations than should be theoretically necessary. Even worse, unless some additional logic is used to set up the matrix between the SRAM and the processor (which also consumes energy), it will require \(k^{2}\) more memory reads than are in principle necessary, thus significantly increasing the memory access energy. Furthermore, since the numbers of channels in adjacent layers are often comparable (the output channels of one layer become the input channels of the next), the weight data loaded into the analog processor, which has dimensions \(k^{2}C_{i}\times C_{i+1}\), is highly rectangular; this makes \(N\) large relative to \(M\), which in turn increases the contribution of the input-data DACs to the energy consumption per operation.
In contrast to analog processors designed for general matrix multiplication, there are classes of analog processors which are specialized to implementing convolutions. One technique for implementing such a processor is to restrict it to the family of operators that share a particular set of eigenvectors. While any linear operator may be expressed as a matrix, the matrix \(X\) may be factored into the product of three matrices using eigen-decomposition:
\[X=U\Lambda U^{T}, \tag{17}\]
where \(U\) is a unitary (i.e. lossless) matrix of the eigenvectors of \(X\), and \(\Lambda\) is a purely diagonal matrix of the eigenvalues of \(X\). The eigenvectors of a convolution are waves, and so when \(X\) is a matrix representing a convolution, the eigenvector matrix \(U\) represents a Fourier transform, while \(U^{T}\) represents an inverse Fourier transform.
One technique of creating an _operator-specialized processor_ is to statically implement the matrices \(U\) and \(U^{T}\), and only dynamically reconfigure the eigenvalues \(\Lambda\). In this case, in order to change linear operators from one to another only the diagonal entries of \(\Lambda\) need to be changed. In other words, if the matrix \(X\) is of size \(m\times m\), changing the matrix to another convolution matrix only requires the modulation of \(m\) weights in the analog processor instead of \(m^{2}\) weights. In the particular case where \(X\) represents a convolution, these eigenvalues are the Fourier transform of the kernel data. By tuning this set of \(m\) elements, the matrix \(X\) that is implemented by the analog processor can span the range of linear operators with the eigenvectors given by \(U\).
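As a quick numerical sanity check of this eigen-decomposition picture (assuming circular boundary conditions, which is what the Fourier basis diagonalizes exactly), the following sketch verifies that applying the diagonal eigenvalue matrix in the Fourier domain reproduces a direct circular convolution.

```python
# Sketch: a (circular) convolution is diagonal in the Fourier basis, so
# changing the kernel only changes the diagonal eigenvalues of eq. (17).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # input data
h = rng.standard_normal(64)   # kernel, zero-padded to the input size

# Direct circular convolution (the action of the matrix X on x)
direct = np.array([sum(h[m] * x[(i - m) % 64] for m in range(64))
                   for i in range(64)])

# Same operator expressed through the Fourier basis: only the diagonal
# entries (the FFT of the kernel) depend on the kernel data.
eigenvalues = np.fft.fft(h)
via_fourier = np.fft.ifft(eigenvalues * np.fft.fft(x)).real

assert np.allclose(direct, via_fourier)
```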
Eigen-decomposition is possible for planar analog processors, and has in fact been demonstrated in silicon photonic processors [11; 13]. However, there is an alternative to silicon photonics for implementing a convolution-specialized processor, called an _optical 4F system_, which has a particular set of advantages relative to planar convolution processors.
In planar analog processors, data is inserted into the processor in a one dimensional array, and the data is processed as it propagates along the second dimension. Unlike planar processors, an optical 4F system is a volumetric processor, so data is represented in a two dimensional array, while the computation happens as light propagates in the third dimension. While this does bring dramatically higher information density and computational density, the most significant difference is that it allows the processor to scale to numbers of inputs that are entirely impractical for planar processors. Since the efficiency of analog compute was shown in eq. (11) to scale proportionally to the dimensions of the analog processor (in the limit of infinite arithmetic intensity), optical 4F systems can in theory reach computational efficiencies orders of
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Network** & **\# of layers** & \(L^{\prime}\) & \(N^{\prime}\) & \(M^{\prime}\) \\ \hline DenseNet201 & 200 & 3844 & 1152 & 128 \\ \hline GoogLeNet & 59 & 3721 & 528 & 128 \\ \hline InceptionResNetV2 & 244 & 3600 & 432 & 192 \\ \hline InceptionV3 & 94 & 3600 & 768 & 192 \\ \hline ResNet152 & 155 & 3969 & 1024 & 256 \\ \hline VGG16 & 13 & 62001 & 2304 & 256 \\ \hline VGG19 & 16 & 38688 & 2304 & 384 \\ \hline YOLOv3 & 75 & 3844 & 1024 & 256 \\ \hline \end{tabular}
\end{table}
Table 2: Median values of \(L^{\prime}\), \(N^{\prime}\), and \(M^{\prime}\) as per eq. (16) for the convolutional layers of various well-known neural networks. The values were obtained considering a 1-Mpixel (per channel) input image.
magnitude higher than planar processors.
An example of an optical 4F system processor is shown in fig. 4. It is composed of two spatial light modulators (SLMs), which might be based on either liquid crystal cells or dynamic metasurfaces. These are placed before and after a lens, one focal length away from either side. A lens naturally performs a Fourier transform between these two planes, so that the light transmitted through the first SLM is Fourier-transformed upon passing through the lens. The first SLM therefore provides the input data, and the first lens represents multiplication by the unitary Fourier matrix \(U\). The second SLM is loaded with the Fourier transform of the kernel data, and the light transmitted through it is therefore the product of the Fourier transform of the input data with the Fourier transform of the kernel data. The second SLM therefore represents the multiplication by the diagonal eigenvalue matrix \(\Lambda\).
A second lens is then placed after the second SLM, one focal length away, which represents multiplication by the second unitary (eigenvector) matrix. Finally, a detector is placed a second focal length from the second lens, and the light impinging on the detector is therefore the convolution of the input data with the kernel data. The detector itself is sensitive only to the intensity (i.e. the norm square) of the incident field. However, the complex value of the field can nonetheless be recovered using interferometric methods. Alternatively, as others have pointed out, the nonlinear measurement performed by the optical absorption of semiconductors can also be used naturally as the nonlinear activation of the neurons.
As shown in fig. 4, more than one input channel can be processed in parallel if the kernel data is appropriately padded before the Fourier transform is taken and the data is applied to the second SLM. This allows greater SLM utilization when small kernels are being used.
Unfortunately, from a compute systems perspective, traditional optical 4F systems have a fatal flaw: the output data from the convolution is measured four focal lengths away from the input data, which presumably
Figure 4: Illustration of a transmission-mode optical 4F system performing convolutions with parallelized input channels. The input activation data can be tiled on the object plane, while the input filters can be tiled with appropriate padding before the Fourier transform is taken and the data is applied to the second SLM in the Fourier plane. In this arrangement one complete output channel is produced per measurement.
must be physically implemented in its own chip. Since this convolution operation only represents the connections between two layers of neurons, in order to implement a deep neural network with more than two layers of neurons the output data from the detector chip must be brought back somehow to the input spatial light modulator. Communicating this massive amount of data off-chip would entail large energy costs, negating all of the advantages brought by the large-scale analog compute.
However, an optical 4F system might be folded using reflection-mode SLMs as shown in fig. 5 in order to consolidate the first SLM and the CMOS image sensor side-by-side into a single chip, using only a single lens. In this architecture all significant data transfer between the two chips happens optically instead of electronically. On either side of the lens are two chips, each split into two halves: an SLM (or metasurface) and a CMOS image sensor. Both chips are placed one focal length away from either side of the lens such that, whenever light passes between the two chips, a Fourier transform is taken by the lens.
This system computes convolutions in two phases: a loading phase and a compute phase. The first, loading phase is shown in fig. 5(a), where the purpose is to take the Fourier transform of the activation data and load it into the second metasurface. A set of input feature maps is written to the input SLM in the first chip, which is illuminated. The Fourier transform of the reflected light is delivered to the CMOS image sensor (CIS) in the second chip, and this data is electronically transferred over to the second SLM within the same chip using ADC and DAC operations. As with the in-transmission unfolded 4F system in fig. 4, in-reflection 4F systems like the one in fig. 5 can be used to take the convolution of multiple input channels in parallel. The final result of this phase is therefore that the SLM in the second chip is configured with the Fourier transform of the activation data.
In the second, compute phase, the input kernel weight data is applied to the first SLM. This is then illuminated at a slightly oblique angle so that the reflected light impinges upon the SLM in the second chip. When this light is reflected, the lens takes another Fourier transform, and the light impinging on the CIS in the first chip is the convolution of the input feature-map data with the kernel data.
If the input data requires \(n^{2}C_{i}\) total pixels, loading the optical Fourier transform of the activation data will cost
\[E_{fft}=n^{2}C_{i}(2e_{adc}+4e_{dac}) \tag{18}\]
energy. One DAC operation per pixel is required to write the input data to the first metasurface, while two ADC operations and two DAC operations are required in order to reconstruct the complex field data from the intensity data and then apply it to the second SLM.
Since input channels can be processed in parallel and the computation then looped over output channels, the second phase
Figure 5: Illustration of a reflection-mode optical 4F system (folded into a 2F overall length) processing a full convolutional layer with all input and output channels in two phases: (a) phase one, where an optical Fourier transform of the input activation data is taken and loaded into the Fourier-plane SLM, (b) phase two, where the input channels are tiled onto the object-plane SLM and the convolutions of all input channels are measured in parallel. The process is repeated for each output channel.
involves \(2K=2k^{2}C_{i}C_{i+1}\) DAC operations and \(2n^{2}C_{i+1}\) ADC operations in the CIS to recover the field.
\[E_{conv}=2k^{2}C_{i}C_{i+1}e_{dac}+2n^{2}C_{i+1}e_{adc} \tag{19}\]
Therefore the total energy associated with the analog compute of this layer is \(E_{fft}+E_{conv}\),
\[E_{op}=2n^{2}(C_{i}+C_{i+1})e_{adc}+2C_{i}(2n^{2}+k^{2}C_{i+1})e_{dac}. \tag{20}\]
The total number of operations performed is \(N_{op}=2n^{2}k^{2}C_{i}C_{i+1}\). Therefore the efficiency of the approach is,
\[\eta=\frac{1}{e_{m}/a+e_{adc}/\left(\frac{k^{2}C_{i}C_{i+1}}{C_{i}+C_{i+1}} \right)+2e_{dac}/k^{2}C_{i+1}+e_{dac}/n^{2}}. \tag{21}\]
This expression holds in the limit that the metasurfaces are large enough to handle all of the activation or weight data.
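The algebra connecting eqs. (18)–(21) can be verified symbolically; the following sketch (which uses \(C_{o}\) as shorthand for \(C_{i+1}\)) is included only as a consistency check.

```python
# Symbolic check that eqs. (18) + (19) reduce to eqs. (20) and (21).
import sympy as sp

n, k, Ci, Co, e_adc, e_dac, e_m, a = sp.symbols(
    'n k C_i C_o e_adc e_dac e_m a', positive=True)   # C_o stands for C_{i+1}

E_fft = n**2 * Ci * (2*e_adc + 4*e_dac)                        # eq. (18)
E_conv = 2*k**2*Ci*Co*e_dac + 2*n**2*Co*e_adc                  # eq. (19)
E_op = 2*n**2*(Ci + Co)*e_adc + 2*Ci*(2*n**2 + k**2*Co)*e_dac  # eq. (20)
assert sp.simplify(E_fft + E_conv - E_op) == 0

N_op = 2*n**2*k**2*Ci*Co
eta = 1/(e_m/a + e_adc/(k**2*Ci*Co/(Ci + Co))
         + 2*e_dac/(k**2*Co) + e_dac/n**2)                     # eq. (21)
assert sp.simplify(E_op/N_op - (1/eta - e_m/a)) == 0
```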
In order to take into account the finite size of the metasurfaces, which may not be large enough to fit all of the activation data from all channels at once, we first find the number of input channels that can practically be handled at once. For a metasurface of dimension \(n_{x}\times n_{y}\equiv\hat{N}\), the number of input channels that can be included at once, \(C^{\prime}\), is,
\[C^{\prime}=\lfloor\hat{N}/n^{2}\rfloor. \tag{22}\]
Using this in place of the actual number of software defined input channels we can derive the factors by which energy is saved in the optical 4F system in the case that \(C^{\prime}\geq 1\),
\[L =n^{2} \tag{23a}\] \[N =\frac{k^{2}C^{\prime}C_{i+1}}{(C^{\prime}+C_{i+1})}\] (23b) \[M =k^{2}C_{i+1}/2. \tag{23c}\]
In terms of these parameters, the efficiency of the optical 4F system is given in the usual way,
\[e_{op}=e_{dac}/M+e_{dac}/L+e_{adc}/N. \tag{24}\]
For an optical 4F system, the median values of \(L\), \(N\), and \(M\) as per eq. (23) for various neural networks are presented in table 3.
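To give a sense of the magnitudes involved, the sketch below evaluates eqs. (22)–(24) for the layer of table 5 on a 4-Mpixel metasurface. The effective DAC energy (DAC plus line load plus laser, following table 4) and the 1.55 pJ/byte SRAM term used in section VII are assumptions of this example, and the resulting numbers are illustrative estimates rather than measured efficiencies.

```python
# Illustrative evaluation of eqs. (22)-(24) for the table 5 layer on a
# 4-Mpixel metasurface. Energy values are assumptions taken from table 4
# (effective e_dac = DAC + line load + laser) and section VII (SRAM).
n, k, c_in, c_out = 512, 3, 128, 128           # table 5
n_hat = 4_000_000                              # 4-Mpixel SLM (assumption)
e_dac = 0.01 + 0.04 + 0.01                     # pJ: DAC + e_load (2.5 um) + e_opt
e_adc = 0.25                                   # pJ
e_m, a = 1.55, 230                             # pJ/byte SRAM, arithmetic intensity

c_par = n_hat // n**2                          # eq. (22): channels handled at once
L = n**2                                       # eq. (23a)
N = k**2 * c_par * c_out / (c_par + c_out)     # eq. (23b)
M = k**2 * c_out / 2                           # eq. (23c)

e_op = e_dac / M + e_dac / L + e_adc / N       # eq. (24), analog terms only
print(e_op)                                    # ~2.2e-3 pJ per MAC
print(1.0 / (e_op + e_m / a))                  # ~110 TOPS/W with the SRAM term
```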
## VI Analytic results
The formulas given in eqs. (3), (5), (14) and (24) can be used to estimate the efficiency of evaluating a given CNN layer on any one of these four compute platforms. They depend on the energy values for memory access, DAC/ADC operations, and digital multiplication. Estimates for many of these quantities are given in table 4, and formulas for deriving the loads needed to estimate DAC energies for various analog compute platforms are also given in the appendix.
Each of these values depends on the CMOS technology node, but scaling laws can be used to interpolate between technology nodes [22]. We compare the various compute platforms by considering a CNN layer with the parameters given in table 5, and the resulting efficiencies are plotted as a function of technology node in fig. 6.
While all processors improve with technology node, there is roughly an order of magnitude difference between digital in-memory compute processors and silicon photonic processors, and yet another order of magnitude difference to be expected between silicon photonic processors
\begin{table}
\begin{tabular}{|c|c|c|} \hline Input Channels & \(C_{i}\) & 128 \\ \hline Output Channels & \(C_{i+1}\) & 128 \\ \hline Filter size & \(k\) & 3 \\ \hline Input size & \(n\) & 512 \\ \hline Arithmetic intensity & \(a\) & 230 \\ \hline \end{tabular}
\end{table}
Table 5: Convolution parameters used to estimate efficiencies of various processors in fig. 6. The arithmetic intensity follows from the other parameters by eq. (9).
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Network** & **\# of layers** & \(L\) & \(N\) & \(M\) \\ \hline DenseNet201 & 200 & 3844 & 272 & 136 \\ \hline GoogLeNet & 59 & 3721 & 128 & 64 \\ \hline InceptionResNetV2 & 244 & 3600 & 224 & 112 \\ \hline InceptionV3 & 94 & 3600 & 240 & 120 \\ \hline ResNet152 & 155 & 3969 & 1024 & 512 \\ \hline VGG16 & 13 & 62001 & 2304 & 1152 \\ \hline VGG19 & 16 & 38688 & 3456 & 1728 \\ \hline YOLOv3 & 75 & 3844 & 512 & 256 \\ \hline \end{tabular}
\end{table}
Table 3: Median values of \(L\), \(N\), and \(M\) for the convolutional layers of various well-known neural networks considering an optical 4F system as computational substrate. The values were obtained considering a 1-Mpixel (per channel) input image and an infinitely large metasurface (i.e. \(C^{\prime}\rightarrow\infty\)).
\begin{table}
\begin{tabular}{|c|c|} \hline \(e_{m}\) (96kB SRAM)[3] & 4.3pJ \\ \hline \(e_{mac}\)[3] & 0.23pJ \\ \hline \(e_{adc}\)[20] & 0.25pJ \\ \hline \(e_{dac}\)[21] & 0.01pJ \\ \hline \(e_{opt}\) [eq. (A8)] & 0.01pJ \\ \hline \(e_{load}\) for 4\(\mu m\) pitch, \(N=256\) [eq. (A6)] & 0.08pJ \\ \hline \(e_{load}\) for 250\(\mu m\) pitch, \(N=40\) [eq. (A6)] & 0.8pJ \\ \hline \(e_{load}\) for 2.5\(\mu m\) pitch, \(N=2048\) [eq. (A6)] & 0.04pJ \\ \hline \end{tabular}
\end{table}
Table 4: Energy per operation for various operations of digital and analog computers. These assume a technology node of 45nm, a voltage of 0.9V, and 8-bit values per operation. The example of memory access energy assumes a bank size of 96kB, since this is the bank size used to construct the TPU SRAM bank.
and optical 4F systems. While this difference is clearly algorithm-dependent, the underlying hardware for analog compute systems must be large enough to be able to exploit the potential algorithmic advantages, which is what is enabled by moving from a two-dimensional silicon photonic processor to a fundamentally three-dimensional processor akin to an optical 4F system.
The breakdown of improvements into memory and computational energy reductions is shown in fig. 7, which shows the contribution to the energy per operation from memory and computational elements separately for each processor type. Exploiting high arithmetic intensity with in-memory compute accounts for most of the improvement from CPUs to the other platforms, by first reducing memory energy well below computational energy. The analog processors in turn have reduced computational energy, with less computational energy on a per-operation basis for analog processors with more inputs.
It is worth noting that the efficiencies reported in fig. 6 for the digital in-memory processor are significantly higher than those reported for the Google TPU, which achieved 0.3-2 TOPS/W depending on the CNN architecture for a chip manufactured at a 28-nm node. The in-memory compute digital processor modeled here has the same architectural parameters as the TPU: a 256 by 256 systolic array, and 24 MiB of SRAM divided into 256, 96-KB banks. Here we predict that number should be roughly 5 TOPS/W,
Figure 6: Efficiencies from analytic models of various compute architectures as a function of technology node.
Figure 7: Contributions of energy consumption per operation for various processor types. DIM is digital in-memory, SP is silicon photonic, and O4F is optical 4F system architectures. The CNN layer parameters are in table 5, and assumptions about architectural details are given in the text. The technology node is assumed to be 32nm for all processor types.
which is a significantly higher efficiency than reported in the literature [1]. However, we note that this estimation simplifies the energy costs associated with the digital multiplication and storing and transporting data in and between each processing element in the systolic array.
The silicon photonics processor modelled in figs. 6 and 7 assumed an array size of 40 by 40, which is typical for most processors reported in the literature [10; 11; 12; 13], since the various modulator technologies typically require the array to have pitches in the 100-400 \(\mathrm{\SIUnitSymbolMicro m}\) range. The computational energy consumption is highly limited by the optical modulator technology, which currently stands at around 7 pJ/byte, as discussed in section A.1. We assume in our model that this will be improved to 0.5 pJ over time, but even with this assumed advantage it is clear in fig. 6 that silicon photonics will have a difficult time maintaining an efficiency advantage over digital compute in memory technologies unless it is possible to scale up the processor sizes. We also assume a 24-MiB SRAM for the silicon photonics processor, divided into 40, 600-KB SRAM banks, following the TPU architecture.
The optical 4F system is based on the architecture in fig. 5, with 4-Mpx SLMs and a 24-MiB SRAM divided into 2048, 12-KB banks, again following the TPU architecture. The SLM pitch for DAC loads involved in active matrix addressing of the SLMs was assumed to be 2.5 \(\mathrm{\SIUnitSymbolMicro m}\), which results in a line capacitance of 0.9 fF and a load energy of 40 fJ as shown in table 4. The optical energy per pixel is based on 1550-nm light, and contributes 10 fJ/pixel per operation as shown in table 4. The large array sizes enabled by realistic SLM dimensions are able to reduce computational energy consumption even below the memory consumption in fig. 7.
## VII Computational results
Thus far, we have provided simple analytic formulas that estimate the efficiency of various AI inference platforms on the basis of how they scale. These formulas are approximations with several limitations, the biggest of which is that they do not take into account situations where the matrices involved are too large for either the capacity of the in-memory compute device or its inputs. In that circumstance the problem needs to be broken up into several smaller matrix multiplications. In order to get around this limitation we developed cycle-accurate models of a systolic array and of an in-reflection optical 4F system, and tested those models when evaluating various CNNs for a given input image size. The more accurate computational results are then compared with the analytic models from the previous sections.
### Systolic array efficiency estimation
For analyzing the energy efficiency of a systolic array, we considered an architecture similar to that of the Google TPU [1], with a weight-stationary systolic array of 256 x 256 tiles. Each of the 256 ports of the array has access to an individual 96-KB SRAM block, totaling 24 MiB of buffer memory for storing activations (i.e. inputs/outputs of a convolutional layer). The weights are stored in DRAM and accessed according to the convolutional layer being executed. The activations and weights are 8-bit fixed point.
In terms of energy costs, we used as reference the SRAM and MAC energy values for a 45-nm process at 0.9 V from [3]: SRAM read/write of 1.25 pJ/byte (8-KB memory) and an 8-bit MAC operation of 0.23 pJ. To align with the SRAM block size of 96 KB in the TPU, the 8-KB SRAM energy cost was scaled in size by a factor of \(\sqrt{96\mathrm{K}/8\mathrm{K}}=3.46\) in accordance with eq. (A2), resulting in 4.33 pJ/byte. Associated with each MAC operation, we also included the energy costs of the load and of the memory read/write inside each array tile (to store/propagate the 8-bit input and 32-bit accumulation = 40 bits). A load energy cost of 2.82 fJ/bit was computed using eq. (A6), where the distance between array tiles was approximated based on the 256 x 256 array area occupancy (24%) of the entire TPU chip (331 \(\mathrm{mm}^{2}\)), resulting in a distance of 34.8 \(\mathrm{\SIUnitSymbolMicro m}\) between tiles. The internal array memory energy cost was obtained by scaling the 8-KB SRAM block to 40 bits, resulting in 1.25 pJ/byte \(\times\sqrt{5/8\mathrm{K}}=31.25\) fJ/byte.
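The parameter arithmetic above can be reproduced in a few lines; the following sketch recomputes the scaled SRAM, inter-tile load, and per-tile storage energies from the quantities quoted in this paragraph.

```python
# Reproducing the energy-parameter arithmetic of this subsection (45-nm,
# 0.9-V values from [3]); all inputs are the estimates quoted in the text.
import math

# SRAM: scale 1.25 pJ/byte for an 8-KB bank to a 96-KB bank, eq. (A2)
e_sram = 1.25 * math.sqrt(96e3 / 8e3)                    # ~4.33 pJ/byte

# Wire load between systolic-array tiles, eq. (A6)
array_area_mm2 = 331.0 * 0.24                            # 24% of the 331-mm^2 chip
tile_pitch_um = math.sqrt(array_area_mm2 / (256 * 256)) * 1e3   # ~34.8 um
e_load_fJ = 0.5 * 0.2 * tile_pitch_um * 0.9**2           # 0.2 fF/um line -> ~2.8 fJ

# Per-tile storage (40 bits = 5 bytes) scaled down from the 8-KB bank
e_tile_mem_fJ = 1.25e3 * math.sqrt(5 / 8e3)              # ~31 fJ/byte

print(e_sram, tile_pitch_um, e_load_fJ, e_tile_mem_fJ)
```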
Lastly, using the techniques presented in [22], we scaled all the energy values (except for the load, since it is not directly process-dependent) from the 45-nm process to the appropriate technology nodes, ranging from 180 \(\mathrm{nm}\) down to 7 \(\mathrm{nm}\). The results are presented in fig. 8. Both the analytic expression and the cycle-accurate model follow the same trend, with a slight divergence as the technology node is reduced. This can be accounted for by the fact that \(e_{load}\) does not depend on the technology node, and its cost starts to become a dominating factor in the overall energy cost as the other energy sources diminish with decreasing node size.
### Optical computer efficiency estimation
For the optical 4F system we considered 4-Mpixel SLMs, along with the same 24-MiB SRAM as in the systolic array analysis. With this, the SRAM is partitioned into 2048 equal parts (one per metasurface row), resulting in a size-scaled SRAM read/write energy of 1.55 pJ/byte. The DAC, ADC and laser energies were obtained using the values in table 4 considering a 2.5-\(\mathrm{\SIUnitSymbolMicro m}\) pitch.
A comparison between the analytic expression and a cycle-accurate model of the optical 4F system is presented in fig. 9. The figure provides an overall curve for the efficiency, with significant gain when constructing the
device with smaller technology nodes. The main differences which explain the divergence between the analytic and cycle-accurate models include:
* The cycle-accurate model considers the exact number of metasurface executions to account for output detector ADC read operations, output memory accesses, and total laser energy consumed.
* Equation (23) considers that the dimensions of the output are the same as that of the input (i.e. _m=n_), which naturally does not account for strides bigger than 1.
* The value of \(e_{dac}\) in eq. (24) is composed of \(e_{dac,1}+e_{load}+e_{opt}\), resulting in an energy cost based on the number of active pixels in the metasurface. However, the cycle-accurate model more precisely estimates the energy costs by separating the pixel-wise energy (\(e_{dac,1}+e_{load}\)) from the metasurface size-dependent laser energy (\(e_{opt}\)).
### Optical computer energy cost distribution
The cycle-accurate model for the optical 4F system can provide a detailed summary of the energy cost distribution based on four different system components: DAC, ADC, SRAM, and laser. These results for VGG19 and YOLOv3 across different technology nodes are presented in fig. 10, with the values specified in picojoules per MAC operation.
Naturally, as the node size reduces, ADC and SRAM energy costs decrease. On the other hand, the DAC energy includes the dominating \(e_{load}\) in its composition, and since the latter is technology node-independent, we see very little reduction in the overall DAC energy cost throughout the different nodes. Just as with \(e_{load}\), the laser energy \(e_{opt}\) does not change with technology node and is, thus, constant.
Comparing the energy cost distributions between VGG19 (left) and YOLOv3 (right), it is curious to note that a network with a much larger arithmetic intensity as in the case of VGG19 (refer to table 1) presents a higher SRAM energy per MAC operation. This can be explained by the fact that the cycle-accurate model takes into account the sizes of the SLMs and the inputs, making the VGG19 network slightly less efficient in terms of placement of the input image pixels onto the metasurface due to it presenting (on average per layer) larger input images with more channels. This results in more metasurface executions - and, consequently, more output activation buffering (SRAM read/write) - to complete the convolutions in the network. If we consider an infinitely large metasurface, then this artifact naturally goes away and VGG19 becomes more efficient than YOLOv3 in terms of SRAM energy per MAC operation.
## VIII Conclusions
In-memory compute and analog compute techniques are both effective techniques to address different contributions to total processor energy cost. While in-memory compute is able to reduce memory access energy per operation in the context of a high arithmetic intensity algorithm, analog compute is able to reduce the computational energy itself in proportion to the scale of the analog processor. Convolutional neural networks are a perfect application for such analog, in-memory compute architectures since they have high arithmetic intensity, large linear operators, and typically require low bit precision for forward propagation.
Figure 8: Efficiency comparison between a cycle-accurate model and the analytic expression given by eq. (8) and the values in table 1. Both models are running YOLOv3 (1-Mpixel input image) using a \(256\times 256\) weight-stationary systolic array and a 24-MiB SRAM (as in the Google TPUv1).
Figure 9: Comparison of eq. (24) with a cycle-accurate model of the optical 4F processor running YOLOv3 (1-Mpixel input image) using 4-Mpixel SLMs and a 24-MiB SRAM.
To provide some intuition regarding how much energy efficiency can be improved using one or both of these techniques, we have provided simple analytic formulas estimating the efficiency for a range of processor types, including digital in-memory processors like a systolic array, analog in-memory compute processors, and optical 4F systems, which are a class of analog in-memory processor specialized to convolutions. These analytic formulas, when applied to the average neural network parameters provided in tables 1 and 2, show good agreement with the cycle-accurate models of the TPU and optical 4F architectures in figs. 8 and 9.
As shown in fig. 6, all of these approaches perform orders of magnitude better than CPUs, at any modern technology node. The largest improvement is due to the reduction in memory access energy per operation for in-memory compute processors, as shown in fig. 7. However, this technique is so effective that computational energy becomes the dominant contribution, which is then improved by analog computing. Since analog computing's energy advantage is proportional to the scale of the analog processor, optical 4F systems have a particular advantage in that they can be scaled large enough to reduce computational energy per operation below the minimum memory energy required for an in-memory compute processor when evaluating a modern CNN algorithm.
## Appendix A Processor Energy Model Parameters
In this section, we provide derivations for the typical energies per operation associated with memory access, digital MAC, ADC, and DAC, which are all necessary in order to properly compare the various computing schemes discussed in this paper.
The energy consumed by a digital MAC operation will scale as the number of gates involved in the logical unit: indeed the lower bound of a digital MAC is set by the Landauer limit, which is proportional to the number of gates. For a serial-parallel multiplier, the number of gates \(G\) is \(G=6B^{2}\), and for other multiplier implementations, the area or gate count is still proportional to \(B^{2}\)[23], where \(B\) is the number of bits of the operand. A full adder has an additional nine gates per bit, so we can write
\[e_{mac}=\gamma_{mac}(6B^{2}+9B)kT \tag{10}\]
where \(k\) is Boltzmann's constant, \(T\) is temperature, and \(\gamma_{mac}\) is a dimensionless constant. Landauer's limit specifies that the energy per MAC is bounded on the lower end by \(\gamma_{mac}>\ln(2)\).[24] Typically, \(\gamma_{mac}\approx 122,500\) for a 45-nm process [3], so current digital multipliers have several orders of magnitude improvement that could in theory be achieved.
All accelerators will need internal memory for both neural network parameters and intermediate variables (unless an analog processor is built with a large enough capacity to store the entire network which is currently impractical). Digital electronic SRAM banks have an energy per operation that scales as the length of the bit and word lines used to address and write data to the SRAM, since most of the power is consumed in charging and discharging the effective capacitors formed by these lines. In general then, the energy per memory access can be written as,[3]
\[e_{m}=e_{m0}\sqrt{N_{m}} \tag{11}\]
where \(N_{m}\) is the size of the memory bank, and \(e_{m0}\) is a constant with units of energy. The scaling presented here is not reflective of the lower bound of energy consumption according to the Landauer limit, since currently it's the charging and discharging of capacitive lines that drives the energy consumption of SRAM rather than switching gates, which is why it scales according to the root of the array size. In the limit of a single bit cell, one might compare \(e_{m0}\) with the Landauer limit by setting \(e_{m0}=\gamma_{m}kT\). The resulting \(\gamma_{m}\) is many more orders of magnitude away from the Landauer limit than even digital MACs: \(\gamma_{m}\approx 3\times 10^{6}\) for a 45-nm CMOS process,
Figure 10: Energy cost distribution for the cycle-accurate model of the 4F optical system running VGG19 (left) and YOLOv3 (right).
which corresponds to an \(e_{m0}\approx 5\) fJ. It can be argued that both the sheer value of \(e_{m0}\) compared to the Landauer limit and the fact that power consumption in the capacitance of the addressing lines in SRAM leads to the energy scaling proportional to the root of the array size are broadly the source of computing's most severe energy problem.[3] Fortunately, in the case of specialized processors implementing operations with high arithmetic intensity like convolutions, we are able to significantly mitigate this problem.[5]
For analog computation, ADC energy depends exponentially on bit precision, most fundamentally because it requires sufficient signal to noise ratio to distinguish the levels. When these levels are defined in terms of linear voltage steps, the ADC energy per sample is[20; 25]
\[e_{adc}=\gamma_{adc}kT2^{2B}, \tag{10}\]
where \(k\) is Boltzmann's constant, \(T\) is the temperature, and \(\gamma_{adc}\) is a dimensionless constant. It has been argued[20] that \(\gamma_{adc}\) is bounded on the lower end at \(\gamma_{adc}>3\) by thermal noise, and the same work presents an empirical survey showing that the state-of-the-art value for on-chip ADCs is \(\gamma_{adc}\approx 1404\) for a 65-nm process, which scales to about 927 at 45 nm.
DACs scale in the same manner as ADCs:
\[e_{dac}=\gamma_{dac}kT2^{2B} \tag{11}\]
and for similar reasons. However, the state-of-the-art value for \(\gamma_{dac}\) is \(\gamma_{dac}\approx 39\).[21]
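For reference, the following sketch evaluates the MAC, ADC, and DAC energy expressions above with the quoted \(\gamma\) values at \(B=8\) and room temperature; the results recover the 45-nm entries of table 4 to within rounding.

```python
# Energy per operation implied by the gamma constants quoted in this appendix,
# for B = 8 bits at T = 300 K (compare with the 45-nm entries of table 4).
kT = 1.380649e-23 * 300          # J
B = 8

e_mac = 122_500 * (6 * B**2 + 9 * B) * kT     # ~0.23 pJ
e_adc = 927 * kT * 2**(2 * B)                 # ~0.25 pJ
e_dac = 39 * kT * 2**(2 * B)                  # ~0.01 pJ

print(e_mac * 1e12, e_adc * 1e12, e_dac * 1e12)   # values in pJ
```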
However, the expression in eq. (11) only takes into account the power burned in the DAC circuitry itself, and not the power consumed driving the analog processor load. For example, the load of the bitline associated with a ReRAM processor in fig. 3(b) will be very different from the load associated with the variable optical attenuator (VOA) in the optical analog processor in fig. 3(b). An optical processor will have an additional energy contribution from the optical laser power itself, which can be considered effectively part of the load energy. Therefore we can write,
\[e_{dac,i}=\gamma_{dac}kT2^{2B}+e_{load,i} \tag{12}\]
for both \(e_{dac,1}\) and \(e_{dac,2}\). In the following subsections these quantities are estimated for both analog, memristive processors and silicon photonic processors.
However, we note that for physically large arrays the load can often be dominated by the capacitance of the row and column addressing lines. The formula for the energy dissipation due to the capacitance of the bitlines and wordlines is,
\[e_{load,i}=(1/2)\mathcal{C}LV^{2} \tag{13}\]
where \(\mathcal{C}\) is the capacitance per unit length of the line, and \(L\) is the line length. For reference, a typical CMOS copper trace has a capacitance of around 0.2 fF/\(\mu m\)[26], so for a process with 0.9 volts they typically consume 0.08 fJ/\(\mu m\) per operation.
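A minimal numerical example of this load estimate, assuming the line length is simply the pitch times the number of elements, is given below.

```python
# Line-load energy per eq. (A6): a 0.2 fF/um copper trace driven at 0.9 V.
cap_fF_per_um = 0.2
e_per_um_fJ = 0.5 * cap_fF_per_um * 0.9**2        # ~0.08 fJ/um, as quoted above

# Example: a 256-element line at 4-um pitch (first e_load entry of table 4),
# assuming line length = pitch x number of elements.
print(e_per_um_fJ * 4 * 256 * 1e-3)               # ~0.08 pJ per line charge
```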
### Silicon Photonics Analog Processors
For an optical computer, there is both an optical and an electrical component to the load for the driving of the inputs:
\[e_{load,1}=e_{elec}+e_{opt}. \tag{14}\]
The electrical component will involve driving some kind of electro-optic modulator, and the energy per operation will depend on the capacitance of that component in the usual way. In the context of silicon photonics, this might be a variable optical attenuator (VOA) on the data input, while a Mach-Zehnder interferometer, MEMS modulator, or phase-change modulator is often used to store the weight data in the array. Some of the lower-energy approaches to electro-optic modulators are plasmonic resonators; the lowest energy per modulation recorded to date for plasmonic modulators is around \(e_{elec}\approx 9\) pJ.[27; 28] This is comparable to electro-optic modulators made of doped silicon micro-ring resonators tuned via carrier plasma dispersion, which have been demonstrated at roughly 0.9pJ/bit, or 7pJ/B.[29] It may be possible to design optical modulators in the future with lower energy per sample than these figures.[30]
The optical contribution to the load itself will depend exponentially on bit precision since the dominant source of optical noise is shot noise. Therefore for the optical component we can write,
\[e_{opt}=\frac{\hbar\omega}{\eta_{opt}}2^{2B}\equiv\gamma_{opt}kT2^{2B} \tag{15}\]
where \(\hbar\) is the reduced Planck constant, \(\omega\) is the angular frequency of the light, and \(\eta_{opt}\) is the efficiency of the optical system and photodetector. Conveniently, the optical power consumption also scales as \(2^{2B}\) since the dominant source of noise is typically shot noise. To provide numbers for context, for 1550-nm light and an optical efficiency of 80%, we have \(\gamma_{opt}\approx 39\), which corresponds to \(e_{opt}\approx 10\) fJ. In light of the energy per sample associated with current electro-optic technology, the optical contribution to the energy is negligible.
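The following sketch evaluates this shot-noise-limited optical energy for the 1550-nm, 80%-efficiency example quoted above.

```python
# Shot-noise-limited optical energy per sample for 1550-nm light, 80%
# optical/detector efficiency, B = 8 bits, T = 300 K.
h = 6.62607e-34            # J s
c = 2.9979e8               # m/s
kT = 1.380649e-23 * 300    # J

photon_energy = h * c / 1550e-9            # ~1.28e-19 J
gamma_opt = photon_energy / (0.8 * kT)     # ~39, matching the text
e_opt = gamma_opt * kT * 2**(2 * 8)        # ~10 fJ per sample

print(gamma_opt, e_opt * 1e15)             # energy printed in fJ
```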
The load associated with the reconfiguration of the weights, \(e_{dac,2}\), only has an electrical component, which will involve both the electro-optic modulators and the electrical bitlines used to address the array. Ultra-low energy electro-optic modulators typically also have small dimensions in order to minimize the capacitance, on the order of a few microns, which leads to an additional energy consumption of a few femtojoules per element in the length of the array. This is also negligible compared to the energy associated with the electro-optic modulator itself. Therefore, both \(e_{dac,1}\) and \(e_{dac,2}\) are dominated by the electro-optic modulator energy.
### Memristive Analog Processors
In the ReRAM processor, the load has two contributions: the capacitance associated with the conductive lines in the array, and the dissipation of charge in the memristors. The pitch of ReRAM arrays tends to be limited by the size of the transistor placed at each node, which means the array bitlines and wordlines are relatively short and have low capacitance. Nonetheless, energy consumption in large arrays can still be dominated by the capacitance, which is given by eq. (A6).
On the other hand, in a ReRAM array the energy per operation consumed by the memristors themselves can also be quite high, since the energy is proportional both to the size of the array and to their average conductance, and the conductance is bounded below by the quantum conductance \(G_{0}=2e^{2}/h\), where \(e\) is the charge of an electron and \(h\) is Planck's constant. The conductance of these elements is therefore limited to the range \(G=G_{0}\) to \(G=G_{0}2^{B}\) for \(B\)-bit precision elements.
Memristors are highly nonlinear elements, so the input data is usually supplied with pulse width modulation instead of changing the voltage. Therefore the energy consumed by the entire array can be written as a sum over all the memristors
\[\langle E_{ReRAM}\rangle=\delta t\sum_{i=1}^{M}\sum_{j=1}^{N}\langle G_{ij} \rangle\langle V_{j}^{2}\rangle. \tag{12}\]
where \(\delta t\) is the sampling period. Using the nominal values of the conductances and voltages for each memristor, we can simplify the equation to
\[\langle E_{ReRAM}\rangle=\delta tMN\langle G\rangle V_{rms}^{2}. \tag{13}\]
In one action of the array the number of MAC operations is \(MN\), so the average energy per operation consumed by the array is actually a constant and _is not reduced by scaling up the array_ in the case of a ReRAM array:
\[e_{ReRAM}\equiv\frac{\langle E_{ReRAM}\rangle}{MN}=\langle G\rangle V_{rms}^ {2}\delta t \tag{14}\]
As noted above, the conductance of memristors is only well behaved above the quantum conductance. Assuming a uniform distribution, the average value will be half the dynamic range, and therefore \(\langle G\rangle=2^{B-1}G_{0}\).
The energy is proportional to the square of the voltage, so we assume this is limited to maintain \(B\) effective bits of precision relative to the Johnson-Nyquist thermal noise limit \(V_{noise}\) in the memristors. For a clock period of \(\delta t\), the thermal noise is,
\[V_{noise}^{2}=\frac{4kT}{G_{0}\delta t}. \tag{15}\]
since the maximal noise is given by the minimum conductance. Setting \(V_{rms}^{2}=(3/2)2^{2B}V_{noise}^{2}\) as the minimal required voltage to maintain \(B\) bits of accuracy, the minimum energy absorbed by the memristor array per operation is,
\[e_{ReRAM}=3kT2^{3B}. \tag{16}\]
While this is the ideal solution, in practice there is a minimum voltage that can be applied that is typically much higher than the thermal noise limit, and is on the order of \(V_{rms}\approx 70\) mV. Using this estimate and a sampling period of \(\delta t=1\) ns, the energy per operation due to the memristors is \(e_{ReRAM}\approx 0.05\) pJ, which is about five times lower than the energy per operation in commercial memristor arrays, but nonetheless places an upper bound on the efficiency at \(\eta\approx 20\) TOPS/W.
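The estimate above is easy to reproduce; the sketch below evaluates the memristor-array energy per operation with \(B=8\), the 70-mV drive voltage, and a 1-ns sampling period.

```python
# Memristor-array energy per MAC: <G> = 2^(B-1) G_0, ~70 mV rms drive, 1-ns period.
G0 = 2 * (1.602176634e-19**2) / 6.62607015e-34   # quantum conductance, ~77.5 uS
B = 8
G_avg = 2**(B - 1) * G0                          # ~9.9 mS
v_rms = 70e-3                                    # V
dt = 1e-9                                        # s

e_reram = G_avg * v_rms**2 * dt                  # ~0.05 pJ per MAC
print(e_reram * 1e12, 1.0 / (e_reram * 1e12))    # pJ per MAC, ~20 TOPS/W bound
```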
|
2301.07259 | The Solecki Dichotomy and the Posner-Robinson Theorem are Almost
Equivalent | The Solecki dichotomy in descriptive set theory and the Posner-Robinson
theorem in computability theory bear a superficial resemblance to each other
and can sometimes be used to prove the same results, but do not have any
obvious direct relationship. We show that in fact there is such a relationship
by formulating slightly weakened versions of the two theorems and showing that,
when combined with determinacy principles, each one yields a short proof of the
other. This relationship also holds for generalizations of the Solecki
dichotomy and the Posner-Robinson theorem to higher levels of the
Borel/hyperarithmetic hierarchy. | Patrick Lutz | 2023-01-18T01:41:32Z | http://arxiv.org/abs/2301.07259v1 | # The Solecki Dichotomy and the Posner-Robinson Theorem are Almost Equivalent
###### Abstract.
The Solecki dichotomy in descriptive set theory and the Posner-Robinson theorem in computability theory bear a superficial resemblance to each other and can sometimes be used to prove the same results, but do not have any obvious direct relationship. We show that in fact there is such a relationship by formulating slightly weakened versions of the two theorems and showing that, when combined with determinacy principles, each one yields a short proof of the other. This relationship also holds for generalizations of the Solecki dichotomy and the Posner-Robinson theorem to higher levels of the Borel/hyperarithmetic hierarchy.
## 1. Introduction
This paper is about the relationship between two theorems: the Solecki dichotomy from descriptive set theory, which says that every Borel function on the reals is either a countable union of continuous functions or at least as complicated as the Turing jump [12], and the Posner-Robinson theorem in computability theory, which says that every real is either computable or looks like \(0^{\prime}\) relative to some oracle [13]. We will give formal statements of both theorems later.
Superficially, these theorems are very similar. Recall that every continuous function on the reals is computable relative to some oracle. So, allowing for some poetic license, we might summarize both theorems as saying that every object of some sort is either computable or at least as complicated as the jump.
However, it is not apparent whether this similarity is more than superficial. In the Solecki dichotomy, the objects under consideration are second-order--functions from the real numbers to the real numbers--while in the Posner-Robinson theorem they are first order--individual real numbers. Note that this difference is distinct from the observation that the Solecki dichotomy is a "bold-face" statement while the Posner-Robinson theorem is a "light-face" one. Additionally, the superficial analogy seems to suggest that the Solecki dichotomy should simply say that every function is either continuous (rather than a countable union of continuous functions) or at least as complicated as the jump, but this is false.
One indication that there might be something mathematically significant behind this similarity can be found in work by Kihara. First, Kihara observed [6] that the Solecki dichotomy could be used to prove a special case of Martin's conjecture that had previously been proved by Slaman and Steel in [15] using the Posner-Robinson theorem. Second, work by Gregoriades, Kihara and Ng [3] used a version of the Posner-Robinson theorem to prove results related to the decomposability conjecture that had previously been proved using the Solecki dichotomy [12, 11].
The goal of this paper is to show that this is no accident--there is a meaningful technical relationship between the Solecki dichotomy and the Posner-Robinson theorem. In particular, we will formulate slightly weakened versions of the Solecki dichotomy and the Posner Robinson theorem1 and show that each one can be used to give a short proof of the other2. The fact that this is possible, along with the details of the proofs, support the view that the Solecki dichotomy is morally (though not literally) a bold-face version of the Posner-Robinson theorem.
There are also generalizations of the Solecki dichotomy and the Posner-Robinson theorem to higher levels of the Borel/hyperarithmetic hierarchy and all of our results go through for these generalizations, with the proofs more or less unchanged. We discuss this further in Section 4.
In the remainder of the introduction, we will introduce the Solecki dichotomy and the Posner-Robinson theorem, as well as the weakened versions that we will use in this paper. We will also briefly discuss determinacy principles, which provide the main technical tool that we will use in our proofs.
### The Solecki dichotomy
Informally, the Solecki dichotomy says that every sufficiently definable function from reals to reals is either a countable union of continuous functions or at least as complicated as the Turing jump3. To state it formally, we must first state precisely what we mean by "a countable union of continuous functions" and "at least as complicated as the Turing jump."
Footnote 3: Actually, most published statements of the Solecki dichotomy use a function called “Pawlikowski’s function” in place of the Turing jump, but it is not hard to see that these two versions of the theorem are equivalent
**Definition 1.1**.: A function \(f\colon\omega^{\omega}\to\omega^{\omega}\) is _\(\sigma\)-continuous_ if there is a partition \(\{A_{n}\}_{n\in\omega}\) of \(\omega^{\omega}\) into countably many pieces such that for each \(n\), \(f\upharpoonright_{A_{n}}\) is continuous with respect to the subspace topology on \(A_{n}\).
Note that there is a small subtlety here: just because \(f\upharpoonright_{A_{n}}\) is continuous with respect to the subspace topology on \(A_{n}\) does not mean that \(f\upharpoonright_{A_{n}}\) can be extended to a continuous function defined on all of \(\omega^{\omega}\). We will also refer to a partial function which is continuous with respect to the subspace topology on its domain as a _partial continuous function_.
**Definition 1.2**.: A function \(f\colon\omega^{\omega}\to\omega^{\omega}\)_topologically embeds_ into a function \(g\colon\omega^{\omega}\to\omega^{\omega}\) if there are topological embeddings \(\varphi,\psi\colon\omega^{\omega}\to\omega^{\omega}\) such that \(\psi\circ f=g\circ\varphi\). In other words, the following diagram commutes.
**Definition 1.3**.: The _Turing jump_ is the function \(J\colon\omega^{\omega}\to\omega^{\omega}\) defined by
\[J(x)(n):=\begin{cases}0&\text{if }\Phi_{n}^{x}(n)\uparrow\\ m+1&\text{if }\Phi_{n}^{x}(n)\downarrow\text{ in exactly }m\text{ steps}.\end{cases}\]
Note that our official definition of the Turing jump is slightly different from the standard one, in which \(J(x)(n)\) only indicates whether or not \(\Phi_{n}^{x}(n)\) converges, not how many steps it takes to converge. This is necessary for Theorem 1.4, but it doesn't matter anywhere else in this paper--in other words, after the statement of Theorem 1.4, the entire remainder of the paper can be read as if we had defined \(J\) in the usual way instead of the definition given above.
**Theorem 1.4** (Solecki dichotomy).: _For every Borel function \(f\colon\omega^{\omega}\to\omega^{\omega}\), either \(f\) is \(\sigma\)-continuous or the Turing jump topologically embeds into \(f\)._
Theorem 1.4 was first proved by Solecki in [16] in the special case where \(f\) is of Baire class 1. It was extended to all Borel functions by Zapletal in [17] and to all analytic functions by Pawlikowski and Sabok in [12]. It is also known to hold for all functions under AD[17].
We will now state two weaker versions of the Solecki dichotomy, obtained by replacing topological embeddability with weaker notions of reducibility between functions.
**Definition 1.5**.: A function \(f\colon\omega^{\omega}\to\omega^{\omega}\) is _reducible4_ to a function \(g\colon\omega^{\omega}\to\omega^{\omega}\), written \(f\leqslant g\), if there are partial continuous functions \(\varphi,\psi\colon\omega^{\omega}\to\omega^{\omega}\) such that for all \(x\in\omega^{\omega}\), \(f(x)=\psi(g(\varphi(x)))\). In other words, the following diagram commutes
Footnote 4: This notion of reducibility has also been called _strong continuous Weihrauch reducibility_[1] and _continuous reducibility_ (by Carroy [2]).
Note that this definition implies that \(\varphi\) is actually total and that \(\operatorname{range}(g\circ\varphi)\subseteq\operatorname{dom}(\psi)\).
**Theorem 1.6** (Solecki dichotomy, version 2).: _For every Borel function \(f\colon\omega^{\omega}\to\omega^{\omega}\), either \(f\) is \(\sigma\)-continuous or \(J\leqslant f\)._
Note that if \(f\) topologically embeds into \(g\) then \(f\) is also reducible to \(g\): if \(\varphi\) and \(\psi\) are topological embeddings such that \(\psi\circ f=g\circ\varphi\) then by definition, \(\psi^{-1}\) is a partial continuous function and \(f=\psi^{-1}\circ g\circ\varphi\). Hence version 2 of the Solecki dichotomy above really is a weakened version of the original Solecki dichotomy.
Before going further, let's try to understand this notion of reducibility a little better. Suppose that \(f\) is reducible to \(g\) via partial continuous functions \(\varphi\) and \(\psi\)--i.e. that \(f=\psi\circ g\circ\varphi\). Then the task of evaluating \(f\) at a given point can be achieved by evaluating \(g\) at a single point, together with some continuous pre- and post-processing using \(\varphi\) and \(\psi\), respectively.
This way of understanding reducibility suggests another, slightly weaker, notion: instead of only allowing \(\psi\) to use \(g(\varphi(x))\) in the post-processing step, why not allow it to use the original input, \(x\), as well? Using this weakened notion of reducibility yields our final version of the Solecki dichotomy, which is the version we will use for the rest of the paper.
**Definition 1.7**.: A function \(f\colon\omega^{\omega}\to\omega^{\omega}\) is _weakly reducible5_ to a function \(g\colon\omega^{\omega}\to\omega^{\omega}\), written \(f\leqslant_{w}g\), if there are partial continuous functions \(\varphi\colon\omega^{\omega}\to\omega^{\omega}\) and \(\psi\colon\omega^{\omega}\times\omega^{\omega}\to\omega^{\omega}\) such that for all \(x\in\omega^{\omega}\), \(f(x)=\psi(g(\varphi(x)),x)\).
Footnote 5: Also known as _continuous Weihrauch reducible_.
**Theorem 1.8** (Solecki dichotomy, version 3).: _For every Borel function \(f\colon\omega^{\omega}\to\omega^{\omega}\), either \(f\) is \(\sigma\)-continuous or \(J\leqslant_{w}f\)._
### The Posner-Robinson theorem
Informally, the Posner-Robinson theorem says that every real \(x\) is either computable or "looks like" \(0^{\prime}\) relative to some real \(y\). Formally stated, it reads as follows.
**Theorem 1.9** (Posner-Robinson theorem).: _For every real \(x\), either \(x\) is computable or there is some real \(y\) such that \(x\oplus y\geqslant_{T}y^{\prime}\)._
This theorem was first proved by Posner and Robinson in [13] and extended by Jockusch and Shore [4] and Shore and Slaman [14]. As usual, there is also a relativized version of this theorem.
**Theorem 1.10** (Posner-Robinson theorem, relativized version).: _For every real \(z\) and every real \(x\), either \(x\leqslant_{T}z\) or there is some real \(y\) such that \(x\oplus y\oplus z\geqslant_{T}(y\oplus z)^{\prime}\)._
We can weaken this relativized version of the Posner-Robinson theorem by requiring the conclusion to hold not for every \(z\), but only for all \(z\) in some cone of Turing degrees.
**Definition 1.11**.: A _cone of Turing degrees_ is a set of the form \(\{x\in\omega^{\omega}\mid x\geqslant_{T}y\}\) for some fixed real \(y\), called the _base_ of the cone.
**Theorem 1.12** (Posner-Robinson theorem, cone version).: _There is a cone of Turing degrees, \(C\), such that for every \(z\in C\) and every real \(x\), either \(x\leqslant_{T}z\) or there is some real \(y\) such that \(x\oplus y\oplus z\geqslant_{T}(y\oplus z)^{\prime}\)._
### Determinacy principles
As we mentioned earlier, determinacy principles are the main technical tool in our proofs. In this section, we will state the main theorems about determinacy that we need, as well as a useful corollary of these theorems.
Recall that determinacy principles involve games of the following form: two players, called player 1 and player 2, alternate playing natural numbers. Each player can see all previous moves by the other player. At the end, they have jointly formed a sequence \(x\in\omega^{\omega}\). To determine the winner, we have
some fixed set \(A\subseteq\omega^{\omega}\), called the _payoff set_. Player 1 wins if \(x\in A\) and otherwise player 2 wins. The game with payoff set \(A\) is sometimes denoted \(G(A)\).
In principle, it is possible that for a fixed payoff set \(A\), neither player has a winning strategy. When one of the two players does have a winning strategy, the game \(G(A)\) is said to be _determined_. Determinacy principles assert that when \(A\) is sufficiently definable, \(G(A)\) must be determined. For example, Martin proved that whenever \(A\) is Borel, \(G(A)\) is determined [9].
**Theorem 1.13** (Borel determinacy).: _For every Borel set \(A\subseteq\omega^{\omega}\), \(G(A)\) is determined._
Determinacy principles for sets which are not Borel are typically not provable in \(\mathsf{ZFC}\). However, they are usually provable from large cardinal principles. For example, Martin proved that if there is a measurable cardinal then all games with \(\mathbf{\Pi}^{1}_{1}\) payoff sets are determined [10].
**Theorem 1.14** (Analytic determinacy).: _Assume that there is a measurable cardinal. Then for every \(\mathbf{\Pi}^{1}_{1}\) set \(A\subseteq\omega^{\omega}\), \(G(A)\) is determined._
There is also an axiom, known as the _Axiom of Determinacy_ and abbreviated \(\mathsf{AD}\), which states that for _all_ sets \(A\subseteq\omega^{\omega}\), \(G(A)\) is determined. This axiom is incompatible with the axiom of choice, but it is consistent with \(\mathsf{ZF}\)[5].
**Theorem 1.15**.: _Assuming that \(\mathsf{ZF}+\)"there are infinitely many Woodin cardinals" is consistent, so is \(\mathsf{ZF}+\mathsf{AD}\)._
Martin has also proved a corollary of determinacy which is often useful in computability theory and which we will use below. To state it we need a few more definitions.
**Definition 1.16**.: A set \(A\subseteq\omega^{\omega}\) is _cofinal in the Turing degrees_ (or sometimes just _cofinal_) if for every \(x\in\omega^{\omega}\) there is some \(y\in A\) such that \(y\geqslant_{T}x\).
Note that if a set \(A\subseteq\omega^{\omega}\) does not contain a cone of Turing degrees then its complement must be cofinal (and vice-versa).
**Definition 1.17**.: A _pointed perfect tree_ is a tree \(T\) on \(\omega\) such that
1. \(T\) has no dead ends--i.e. every node has at least one child.
2. \([T]\) has no isolated paths--i.e. every node has incomparable descendants.
3. Every path \(x\in[T]\) computes \(T\).
It is not too hard to show that if \(T\) is a pointed perfect tree then \([T]\) is cofinal in the Turing degrees. Martin showed that determinacy implies a partial converse of this: if \(A\) is cofinal then there is some pointed perfect tree \(T\) such that \([T]\subseteq A\)[8, 15]. Moreover, the amount of determinacy required to prove this matches the complexity of \(A\).
**Theorem 1.18**.: _Suppose \(A\subseteq\omega^{\omega}\) is cofinal in the Turing degrees. Then:_
* _If_ \(A\) _is Borel,_ \(A\) _contains_ \([T]\) _for some pointed perfect tree_ \(T\)_._
* _If_ \(\mathbf{\Pi}^{1}_{1}\)_-determinacy holds and_ \(A\) _is_ \(\mathbf{\Pi}^{1}_{1}\) _then_ \(A\) _contains_ \([T]\) _for some pointed perfect tree_ \(T\)_._
* _If_ \(\mathsf{AD}\) _holds and_ \(A\) _is any set then_ \(A\) _contains_ \([T]\) _for some pointed perfect tree_ \(T\)_._
There is a simple observation that yields a useful strengthening of this theorem. Namely, suppose \(\{A_{n}\}_{n\in\omega}\) is a countable sequence of subsets of \(\omega^{\omega}\) whose union is cofinal in the Turing degrees. Then there must be some \(n\) such that \(A_{n}\) is cofinal in the Turing degrees. Thus we have the following theorem.
**Theorem 1.19**.: _Suppose \(\langle A_{n}\rangle_{n\in\omega}\) is a countable sequence such that \(\bigcup_{n\in\omega}A_{n}\) is cofinal in the Turing degrees. Then:_
* _If_ \(\mathbf{\Pi}^{1}_{1}\)_-determinacy holds and each_ \(A_{n}\) _is_ \(\mathbf{\Pi}^{1}_{1}\) _then there is some_ \(n\) _and pointed perfect tree_ \(T\) _such that_ \(A_{n}\) _contains_ \([T]\)_._
## 2. Posner-Robinson \(\implies\) Solecki
In this section, we will assume a version of the Posner-Robinson theorem (specifically Theorem 1.12) and use it to prove a version of the Solecki dichotomy (specifically Theorem 1.8). Here's a brief outline of the proof. First, for any functions \(f,g\colon\omega^{\omega}\to\omega^{\omega}\), we will introduce a game, \(G(f,g)\), and show that player 2 has a winning strategy in this game if and only if \(f\leq_{w}g\). We will then show that in the special case of the game \(G(J,f)\), player 1 has a winning strategy if and only if \(f\) is \(\sigma\)-continuous. It is in this step that we will make use of the Posner-Robinson theorem. Finally, it will be clear from the definition of \(G(f,g)\) that as long as \(f\) and \(g\) are both Borel functions then the payoff set of \(G(f,g)\) is also Borel. Thus by Borel determinacy, if \(f\colon\omega^{\omega}\to\omega^{\omega}\) is Borel then either player 1 wins \(G(J,f)\) or player 2 does. In the first case, \(f\) is \(\sigma\)-continuous and in the second case, \(J\leq_{w}f\) and so we have reached the dichotomy in the statement of Theorem 1.8.
The game \(G(f,g)\) is played as follows: player 2 first plays a code \(e\in\omega\) for a three-place Turing functional6. For the rest of the game, player 1 plays a real \(x\) and player 2 plays two reals, \(y\) and \(z\). On every turn, player 1 will play one more digit of the real \(x\) and player 2 will play one more digit of the real \(z\) and either one or zero more digits of the real \(y\). Player 2 wins if they eventually play all digits of \(y\) (in other words, player 2 can delay arbitrarily long between playing one digit of \(y\) and the next but if they delay forever then they forfeit the game) and \(f(x)=\Phi_{e}(g(y),x,z)\). Otherwise, player 1 wins. The game can be pictured as follows.
Footnote 6: Here we use the phrase _Turing functional_ to indicate a partial computable function on real numbers.
\begin{tabular}{c|c c c c c c c} player 1 & \(x_{0}\) & \(x_{1}\) & \(\dots\) & \(x_{n}\) & \(\dots\) & \(x=x_{0}x_{1}x_{2}\dots\) \\ \hline player 2 & \(e\) & \(y_{0},z_{0}\) & \(z_{1}\) & \(\dots\) & \(y_{1},z_{n}\) & \(\dots\) & \(y=y_{0}y_{1}y_{2}\dots\), & \(z=z_{0}z_{1}z_{2}\dots\) \\ \end{tabular}
We can understand this game as follows. First recall that \(f\leq_{w}g\) means that there are partial continuous functions \(\varphi\) and \(\psi\) such that for all \(x\), \(f(x)=\psi(g(\varphi(x)),x)\). Further, recall that a continuous function is just a computable function relative to some oracle. Hence for some code \(e\in\omega\) for a Turing functional and some \(z\in\omega^{\omega}\), \(\psi(x,y)=\Phi_{e}(x,y,z)\).
In the game \(G(f,g)\), player 2 is trying to convince player 1 that \(f\leq_{w}g\). The natural number \(e\) and real \(z\) played by player 2 should be thought of as specifying the continuous function \(\psi\). Player 1's moves consist of a challenge input \(x\) for \(f\) and the real \(y\) played by player 2 corresponds to \(\varphi(x)\). The winning condition for player 2--that player 2 plays infinitely many digits of \(y\) and that \(f(x)=\Phi_{e}(g(y),x,z)\)--corresponds to the reduction procedure implicitly specified by player 2 working successfully on input \(x\).
**Lemma 2.1**.: _Player 2 wins \(G(f,g)\) if and only if \(f\leq_{w}g\)._
Proof.: (\(\implies\)) Suppose that player 2 wins \(G(f,g)\) via the strategy \(\tau\). Then the following two procedures describe partial continuous functions \(\omega^{\omega}\to\omega^{\omega}\) and \(\omega^{\omega}\times\omega^{\omega}\to\omega^{\omega}\), respectively.
1. Given \(x\) as input: play the game \(G(f,g)\), using the digits of \(x\) as player 1's moves and using \(\tau\) to generate player 2's moves. Output the first of the two reals played by player 2 (the real referred to as \(y\) in the description of the game above).
2. Given \(w\) and \(x\) as input: play the game \(G(f,g)\) using the digits of \(x\) as player 1's moves and using \(\tau\) to generate player 2's moves. Let \(e\) be the number played by \(\tau\) on the first move of the game and let \(z\) be the second of the two reals played by player 2. Output \(\Phi_{e}(w,x,z)\).
Let \(\varphi\) and \(\psi\), respectively, denote these two continuous functions. Then the fact that \(\tau\) is a winning strategy for player 2 in the game \(G(f,g)\) ensures that for all \(x\), \(f(x)=\psi(g(\varphi(x)),x)\).
(\(\iff\)) Suppose that \(f\leq_{w}g\) via partial continuous functions \(\varphi\) and \(\psi\). Let \(e\in\omega\) and \(z\in\omega^{\omega}\) be such that \(\psi\) is computed by the \(e^{\text{th}}\) Turing functional with oracle \(z\). Then the following is a winning strategy for player 2 in the game \(G(f,g)\). On the first turn, play the number \(e\). On each subsequent turn, play one more digit of \(z\). Also on each of these turns, if player 1 has played enough digits of \(x\) to determine one more digit of \(\varphi(x)\), play that as well.
We now turn to the case where player 1 wins and the game is of the form \(G(J,f)\) for some \(f\). We will show that in this case, \(f\) must be \(\sigma\)-continuous. To prove this, we first need the following observation about \(\sigma\)-continuity.
**Observation 2.2**.: _A function \(f\colon\omega^{\omega}\to\omega^{\omega}\) is \(\sigma\)-continuous if and only if there is some real \(z\) such that for all \(x\), \(f(x)\leqslant_{T}x\oplus z\)._
Proof.: If \(f\) is \(\sigma\)-continuous then it can be written as a countable union of partial continuous functions. Each partial continuous function on \(\omega^{\omega}\) is a partial computable function relative to some oracle, so we can find a countable sequence of codes for Turing functionals \(\{e_{n}\}_{n\in\omega}\) and oracles \(\{z_{n}\}_{n\in\omega}\) such that for each \(x\), \(f(x)=\Phi_{e_{n}}(x,z_{n})\) for some \(n\). So if we take \(z=\bigoplus_{n\in\omega}z_{n}\) then for each \(x\), \(f(x)\leqslant_{T}x\oplus z\).
Conversely, suppose there is some \(z\) such that for all \(x\), \(f(x)\leqslant_{T}x\oplus z\). For each \(n\), define \(A_{n}=\{x\mid f(x)=\Phi_{n}(x,z)\}\). Then \(\{A_{n}\}_{n\in\omega}\) is a countable sequence of sets whose union covers \(\omega^{\omega}\). Also, for each \(n\), \(f\!\upharpoonright_{A_{n}}\) is computable relative to \(z\) via \(\Phi_{n}\) and hence continuous. So \(f\) is \(\sigma\)-continuous.
**Lemma 2.3**.: _Player 1 wins \(G(J,f)\) if and only if \(f\) is \(\sigma\)-continuous._
Proof.: (\(\implies\)) Suppose that player 1 wins \(G(J,f)\) via the strategy \(\sigma\). Let \(w\) be the base of a cone for which the conclusion of Theorem 1.12 applies (i.e. such that the Posner-Robinson theorem holds relative to every real which computes \(w\)).
We claim that for every \(y\), \(f(y)\leqslant_{T}y\oplus\sigma\oplus w\) and hence \(f\) is \(\sigma\)-continuous by Observation 2.2. To prove this, we will show that if not then \(\sigma\) is not actually a winning strategy for player 1. So suppose that there is some \(y\) such that \(f(y)\nleqslant_{T}y\oplus\sigma\oplus w\). Since the Posner-Robinson theorem holds relative to \(y\oplus\sigma\oplus w\), we can find some real \(v\) such that \(f(y)\oplus v\oplus y\oplus\sigma\oplus w\geqslant_{T}(v\oplus y\oplus\sigma\oplus w)^{\prime}\).
We will now explain how to win while playing as player 2 in \(G(J,f)\) against the strategy \(\sigma\). We will play as follows: first we play some number \(e\in\omega\), which we will explain how to choose later. Then we ignore player 1's moves entirely and play the reals \(y\) and \(z=v\oplus y\oplus\sigma\oplus w\). Note that from \(z\) we can compute player 1's moves since \(z\) computes both player 1's strategy \(\sigma\) and all of player 2's moves. In other words, if \(x\) is the real played by player 1 then \(x\leqslant_{T}z\). Hence by our choice of \(z\) we have
\[x^{\prime}\leqslant_{T}z^{\prime}\leqslant_{T}f(y)\oplus z.\]
This is almost what we want, but there is one problem: for player 2 to win, we need not just that \(f(y)\oplus z\) computes \(x^{\prime}\), but that it does so via the Turing functional specified by player 2. And the only problem with this is that the Turing functional which computes \(x^{\prime}\) from \(f(y)\) and \(z\) depends on the code \(e\) played by player 2. However, we can get around this by using the recursion theorem.
More precisely, note that while the computation of \(x^{\prime}\) from \(f(y)\oplus z\) depends on the value of \(e\) (because the value of \(x\) itself depends on \(e\)), the dependence is uniform in \(e\)7. In other words, there is some \(a\in\omega\) such that for all \(e\),
Footnote 7: This is because the computation of \(z^{\prime}\) from \(f(y)\oplus z\) does not depend on \(e\) and the computation of \(x\) from \(z\), and hence \(x^{\prime}\) from \(z^{\prime}\) is uniform in \(e\).
\[\Phi_{a}(f(y),z,e)=(\sigma*(e,y,z))^{\prime}\]
where by \((\sigma*(e,y,z))\) we mean the real played by the strategy \(\sigma\) in response to player 2 playing \(e\) along with the reals \(y\) and \(z\). Thus by the recursion theorem, we can find some \(e\) such that
\[\Phi_{e}(f(y),z)=\Phi_{a}(f(y),z,e)=(\sigma*(e,y,z))^{\prime}.\]
Thus by playing this \(e\) as our first move as player 2 (and then playing \(y\) and \(z\)), we can win against \(\sigma\).
(\(\iff\)) Suppose that \(f\) is \(\sigma\)-continuous. Thus by our observation, there is some \(w\) such that for all \(x\), \(f(x)\leqslant_{T}x\oplus w\). Consider the following strategy for player 1 in the game \(G(J,f)\): alternate playing digits of \(w\) and copying the moves played by player 2. We can picture this strategy as follows.
\begin{tabular}{c|c c c c c} player 1 & \(w_{0}\) & \(\langle y_{0},z_{0}\rangle\) & \(w_{1}\) & \(\langle z_{1}\rangle\) & \(\ldots\) \\ \hline player 2 & \(e\) & \(y_{0},z_{0}\) & \(z_{1}\) & \(y_{1},z_{2}\) & \(\ldots\) \\ \end{tabular}
We claim this is a winning strategy for player 1. To see why, suppose player 1 follows this strategy and that player 2 plays \(e\in\omega\) and \(y,z\in\omega^{\omega}\). Then the real \(x\) played by player 1 will compute \(w\oplus y\oplus z\). Player 2 can only win if \(\Phi_{e}(f(y),x,z)=J(x)\), but this is impossible since we have
\[f(y)\leqslant_{T}y\oplus w\leqslant_{T}x\]
and hence \(\Phi_{e}(f(y),x,z)\leqslant_{T}x\), but \(J(x)\nleqslant_{T}x\).
We can now finish our proof of the Solecki dichotomy from the Posner-Robinson theorem.
Proof of Theorem 1.8 from Theorem 1.12.: Let \(f\colon\omega^{\omega}\to\omega^{\omega}\) be a Borel function and consider the game \(G(J,f)\). By Borel determinacy, either player 1 or player 2 has a winning strategy for this game. In the former case, \(f\) is \(\sigma\)-continuous by Lemma 2.3. In the latter case, \(J\leqslant_{w}f\) by Lemma 2.1.
The results of this section raise an obvious question. Namely, is it possible to use the Posner-Robinson theorem to prove the full Solecki dichotomy (i.e. either Theorem 1.4 or Theorem 1.6)? Let us mention one possible route to such a proof. In this section, we described a game, \(G(f,g)\), which can be used to characterize weak reducibility of \(f\) to \(g\) and this game was the key to our proof of Theorem 1.8. It seems plausible that finding a game which characterizes reducibility of \(f\) to \(g\), rather than weak reducibility, would yield a proof of Theorem 1.6. Finding such a game might also be of independent interest.
**Question 2.4**.: _Can the Posner-Robinson theorem together with Borel determinacy be used to prove either Theorem 1.4 or Theorem 1.6?_
**Question 2.5**.: _Is there a game characterizing reducibility in the same way that the game \(G(f,g)\) described above characterizes weak reducibility?_
## 3. Solecki \(\implies\) Posner-Robinson
In this section we will assume a version of the Solecki dichotomy (specifically Theorem 1.8) and use it to prove a version of the Posner-Robinson theorem (specifically Theorem 1.12). We will do so by proving the contrapositive: we will show that if Theorem 1.12 fails then so does Theorem 1.8. Also, as we mentioned in the introduction, our proof is carried out assuming \(\mathbf{\Pi}^{1}_{1}\)-determinacy, a statement which is not provable in \(\mathsf{ZFC}\), but which is provable from \(\mathsf{ZFC}\) plus the existence of a measurable cardinal.
The core idea of the proof is very simple--it essentially consists of the observation that if \(f\) is a function which takes each real \(x\) to a witness of the failure of the Posner-Robinson theorem relative to \(x\) then \(f\) does not satisfy the conclusion of the Solecki dichotomy. However, this simple idea is complicated by the need to make sure \(f\) is Borel (otherwise we cannot invoke Theorem 1.8). Most of the details of the proof will be devoted to overcoming this obstacle.
We will now go into the details of the proof. Suppose that Theorem 1.12 fails. In other words, the set
\[A=\{x\in\omega^{\omega}\mid\text{the Posner-Robinson theorem holds relative to }x\}\]
does not contain any cone of Turing degrees. Hence its complement, the set
\[B=\{x\in\omega^{\omega}\mid\text{the Posner-Robinson theorem fails relative to }x\},\]
is cofinal in the Turing degrees.
Now suppose that we can find a function \(f\colon B\to\omega^{\omega}\) such that for each \(x\) in \(B\), \(f(x)\) is a witness to the failure of the Posner-Robinson theorem relative to \(x\)--i.e. \(f(x)\nleqslant_{T}x\) and there is no \(y\) such that \(f(x)\oplus y\oplus x\geqslant_{T}(y\oplus x)^{\prime}\). Extend \(f\) to a total function on \(\omega^{\omega}\) by setting \(f(x)=0\) for all \(x\notin B\). Note that this modified version of \(f\) has the following two properties.
1. For cofinally many \(x\), \(f(x)\nleqslant_{T}x\).
2. For all \(x\), there is no \(y\) such that \(f(x)\oplus y\oplus x\geqslant_{T}(y\oplus x)^{\prime}\). When \(x\in B\) this is by assumption and when \(x\notin B\) this is because \(f(x)\) is computable.
The next two lemmas show that these properties imply that \(f\) is a counterexample to the Solecki dichotomy.
**Lemma 3.1**.: _Suppose that \(f\colon\omega^{\omega}\to\omega^{\omega}\) is a function such that for a set of \(x\) which is cofinal in the Turing degrees, \(f(x)\nleq_{T}x\). Then \(f\) is not \(\sigma\)-continuous._
Proof.: For contradiction, assume \(f\) is \(\sigma\)-continuous. By Observation 2.2, there must be some \(z\) such that for every \(x\), \(f(x)\leq_{T}x\oplus z\). By assumption, we can find some \(x\geq_{T}z\) such that \(f(x)\nleq_{T}x\). But since \(x\equiv_{T}x\oplus z\), this implies \(f(x)\nleq_{T}x\oplus z\), which contradicts our choice of \(z\).
**Lemma 3.2**.: _Suppose that \(f\colon\omega^{\omega}\to\omega^{\omega}\) is a function such that for all \(x\), there is no \(y\) such that \(f(x)\oplus y\oplus x\geq_{T}(y\oplus x)^{\prime}\). Then \(J\nleq_{w}f\)._
Proof.: For contradiction, assume that \(J\leq_{w}f\). Thus there are partial continuous functions \(\varphi\) and \(\psi\) such that for all \(x\), \(J(x)=\psi(f(\varphi(x)),x)\). Let \(z\) be an oracle relative to which \(\psi\) and \(\varphi\) are computable. Let \(x=\varphi(z)\). By assumption, there is no \(y\) such that \(f(x)\oplus y\oplus x\geq_{T}(y\oplus x)^{\prime}\). We claim this is contradicted by taking \(y=z\).
To see why, note that since \(\varphi\) and \(\psi\) are computable relative to \(z\), we have
\[x=\varphi(z)\leq_{T}z\qquad\text{and}\qquad\psi(f(x),z)\leq_{T}f(x)\oplus z\]
and by our choice of \(\varphi\) and \(\psi\) we have
\[z^{\prime}=\psi(f(\varphi(z)),z)=\psi(f(x),z).\]
Hence we have
\[(z\oplus x)^{\prime}\equiv_{T}z^{\prime}=\psi(f(x),z)\leq_{T}f(x)\oplus z\leq _{T}f(x)\oplus z\oplus x.\]
which yields the contradiction.
We are now left with the problem of finding some \(f\colon B\to\omega^{\omega}\) with the properties described above. Of course, it is easy to find such an \(f\) using the Axiom of Choice, but there is no reason to believe that a function chosen in this way will be Borel and hence we cannot apply Theorem 1.8. Instead of using choice, we could try to appeal to a uniformization theorem from descriptive set theory. However, the relation that we need to uniformize, namely
\[\{(x,y)\mid y\nleq_{T}x\text{ and }\forall z\,(y\oplus z\oplus x\ngeq_{T}(z\oplus x)^{\prime})\},\]
is \(\Pi^{1}_{1}\) and thus too complicated for any of the standard uniformization theorems to give us a Borel--or even analytic--uniformizing function.
We will now see how to find some \(f\) with the necessary properties which is Borel (in fact, Baire class 1). The key step is the following lemma.
**Lemma 3.3**.: _Suppose that for cofinally many \(x\), the Posner-Robinson theorem fails relative to \(x\). Then for cofinally many \(x\), there is some \(y\leq_{T}x^{\prime}\) which witnesses this failure--i.e. such that \(y\nleq_{T}x\) and there is no \(z\) such that \(y\oplus z\oplus x\geq_{T}(z\oplus x)^{\prime}\)._
Proof.: Let \(w\in\omega^{\omega}\) be arbitrary. We need to show that there is some \(x\geq_{T}w\) such that the Posner-Robinson theorem fails relative to \(x\) and such that this failure is witnessed by some \(y\leq_{T}x^{\prime}\). By increasing \(w\) if necessary (and invoking our assumption that the Posner-Robinson theorem fails cofinally), we may assume the Posner-Robinson theorem fails relative to \(w\). Let \(y\) be a witness to the failure of the Posner-Robinson theorem relative to \(w\).
The key observation is that it is sufficient to find some \(x\geq_{T}w\) such that \(x^{\prime}\) computes \(y\) but \(x\) does not. Suppose we can find such an \(x\). We claim that the Posner-Robinson theorem fails relative to \(x\) and that this failure is witnessed by \(y\). If not, then we can find some \(z\) such that \(y\oplus z\oplus x\geq_{T}(z\oplus x)^{\prime}\). But since \(x\geq_{T}w\), this gives us
\[y\oplus z\oplus x\oplus w\equiv_{T}y\oplus z\oplus x\geq_{T}(z\oplus x)^{ \prime}\equiv_{T}(z\oplus x\oplus w)^{\prime}\]
and hence \(y\) is _not_ a witness to the failure of the Posner-Robinson theorem relative to \(w\).
We will now explain how to construct \(x\). In fact, we will actually construct a real \(x_{0}\) such that \((x_{0}\oplus w)^{\prime}\) computes \(y\) but \(x_{0}\oplus w\) does not and then set \(x=x_{0}\oplus w\). The construction is similar to the proof of Friedberg jump inversion. We will construct \(x_{0}\) by finite initial segments. On step \(e\) of the construction, we will make sure that \(\Phi_{e}^{x_{0}\oplus w}\) does not correctly compute \(y\) and then code one more digit of \(y\).
Suppose we are on step \(e\) of the construction and the initial segment of \(x_{0}\) that we have built so far is \(\sigma_{e}\). There are two cases to consider.
1. There is some \(n\in\omega\) and strings \(\tau_{0},\tau_{1}\) such that \(\Phi_{e}^{\sigma_{e}^{\frown}\tau_{0}\oplus w}(n)\downarrow\neq\Phi_{e}^{ \sigma_{e}^{\frown}\tau_{1}\oplus w}(n)\downarrow\). In this case, one of these two values must disagree with \(y(n)\). Let \(\langle n,\tau_{0},\tau_{1}\rangle\) be the first such triple discovered in some \(w\)-computable search and set \(\sigma_{e+1}=\sigma_{e}^{\frown}\tau_{i}^{\frown}\langle y(e)\rangle\) where \(\tau_{i}\) is equal to whichever of \(\tau_{0},\tau_{1}\) causes \(\Phi_{e}^{x_{0}\oplus w}(n)\) to disagree with \(y(n)\).
2. For every \(n\in\omega\), there is at most one value of \(\Phi_{e}^{\sigma_{e}^{\frown}\tau\oplus w}(n)\) obtainable over all strings \(\tau\). Then by standard arguments, if \(x_{0}\) extends \(\sigma_{e}\) then either \(\Phi_{e}^{x_{0}\oplus w}\) is not total or it computes a real which is computable from \(w\) alone. In either case, \(\Phi_{e}^{x_{0}\oplus w}\) cannot be equal to \(y\) so we may simply set \(\sigma_{e+1}=\sigma_{e}^{\frown}\langle y(e)\rangle\).
It is clear from the construction that \(x_{0}\oplus w\) does not compute \(y\). To see that \((x_{0}\oplus w)^{\prime}\) computes \(y\), simply note that \((x_{0}\oplus w)^{\prime}\) can figure out what happened at each step of the construction described above (i.e. it can check which of the two cases held at each step and, in the first case, recover the triple \(\langle n,\tau_{0},\tau_{1}\rangle\)) and can thus recover the digits of \(y\) coded during the construction.
We can now prove Theorem 1.12. As we mentioned above, our proof uses \(\mathbf{\Pi}_{1}^{1}\)-determinacy.
Proof of Theorem 1.12 from Theorem 1.8.: Suppose Theorem 1.12 fails. Then as we saw above, the set
\[B=\{x\in\omega^{\omega}\mid\text{the Posner-Robinson theorem fails relative to }x\}\]
is cofinal in the Turing degrees and hence by Lemma 3.3, the following set is also cofinal
\[\begin{split} C=\{x\in\omega^{\omega}\mid\exists y\leqslant_{T} x^{\prime}\,(\text{$y$ witnesses the failure of the}\\ \text{Posner-Robinson theorem relative to }x)\}.\end{split}\]
Now for each \(e\in\omega\), define
\[\begin{split} C_{e}=\{x\in\omega^{\omega}\mid\Phi_{e}(x^{\prime} )&\text{ is total and witnesses the failure of the}\\ \text{Posner-Robinson theorem relative to }x\}\end{split}\]
and note that \(C=\bigcup_{e\in\omega}C_{e}\). Hence by Theorem 1.19, there is some pointed perfect tree \(T\) and \(e\in\omega\) such that \([T]\subseteq C_{e}\).
Define \(f\colon\omega^{\omega}\to\omega^{\omega}\) by
\[f(x)=\begin{cases}\Phi_{e}(x^{\prime})&\text{if }x\in[T]\\ 0&\text{else.}\end{cases}\]
Let's now make some observations about \(f\).
1. \(f\) is clearly Borel--in fact, it is actually Baire class 1.
2. For every \(x\in[T]\), \(f(x)\) is a witness to the failure of the Posner-Robinson theorem relative to \(x\).
3. In particular, for any \(x\in[T]\), \(f(x)\nleqslant_{T}x\). Since \([T]\) is a cofinal set in the Turing degrees, \(f\) satisfies the hypothesis of Lemma 3.1.
4. For any \(x\in\omega^{\omega}\), there is no \(y\) such that \(f(x)\oplus y\oplus x\geqslant_{T}(y\oplus x)^{\prime}\). If \(x\in[T]\) then this is because \(f(x)\) is a witness to the failure of the Posner-Robinson theorem relative to \(x\). If \(x\notin[T]\) then this is because \(f(x)\) is computable. Hence \(f\) satisfies the hypothesis of Lemma 3.2.
Thus by Lemmas 3.1 and 3.2, \(f\) is a counterexample to Theorem 1.8.
## 4. Generalizations
Recently, Marks and Montalban have generalized the Solecki dichotomy (specifically, Theorem 1.6) to higher levels of the Borel hierarchy [7]. To state their result, we must introduce a few more definitions. First, we must generalize \(\sigma\)-continuity. Recall that for any countable ordinal \(\alpha\), a function \(f\) is \(\mathbf{\Sigma}^{0}_{\alpha}\)_-measurable_ if for every open set \(U\), \(f^{-1}(U)\) is in \(\mathbf{\Sigma}^{0}_{\alpha}\).
**Definition 4.1**.: For any countable ordinal \(\alpha\),
* \(\mathsf{Dec}_{\alpha}\) denotes the set of functions \(f\colon\omega^{\omega}\to\omega^{\omega}\) for which there is a partition \(\{A_{n}\}_{n\in\omega}\) of \(\omega^{\omega}\) into countably many pieces such that for each \(n\), \(f\mathbin{\upharpoonright}_{A_{n}}\) is \(\mathbf{\Sigma}^{0}_{\alpha}\)-measurable with respect to the subspace topology on \(A_{n}\).
* \(\mathsf{Dec}_{<\alpha}\) denotes the set of functions \(f\colon\omega^{\omega}\to\omega^{\omega}\) for which there is a partition \(\{A_{n}\}_{n\in\omega}\) of \(\omega^{\omega}\) into countably many pieces such that for each \(n\), \(f\mathbin{\upharpoonright}_{A_{n}}\) is \(\mathbf{\Sigma}^{0}_{\beta}\)-measurable for some \(\beta<\alpha\) (note that \(\beta\) may depend on \(n\)).
Note that \(\mathbf{\Sigma}^{0}_{1}\)-measurable is the same as continuous, so a function \(f\) is \(\sigma\)-continuous if and only if it is in \(\mathsf{Dec}_{1}=\mathsf{Dec}_{<2}\).
As a warning to readers, the notation in this area is not yet standardized and our notation does not quite match previously used notation. In particular, \(\mathsf{Dec}_{\alpha}\) is sometimes denoted \(\mathsf{Dec}(\mathbf{\Sigma}^{0}_{\alpha})\) (see e.g. [3]). We have chosen the notation \(\mathsf{Dec}_{\alpha}\) to be consistent with our chosen notation for \(\mathsf{Dec}_{<\alpha}\), for which there does not seem to currently be any standard notation but which is necessary to correctly express Marks and Montalban's generalization of the Solecki dichotomy at limit levels of the Borel hierarchy.
For each countable ordinal \(\alpha\geq 1\), we will use \(J_{\alpha}\colon\omega^{\omega}\to\omega^{\omega}\) to denote the \(\alpha^{\text{th}}\) Turing jump--i.e. \(J_{\alpha}(x)=x^{(\alpha)}\). Note that technically \(J_{\alpha}\) depends on a choice of presentation for \(\alpha\). However, the versions of \(J_{\alpha}\) that are obtained by choosing different presentations for \(\alpha\) are all reducible to each other (in the sense of Definition 1.5) and so this subtlety does not matter for us.
We can now state the appropriate generalization of Theorem 1.6 due to Marks and Montalban.
**Theorem 4.2** (Generalized Solecki dichotomy).: _For every countable ordinal \(\alpha\geq 1\) and every Borel function \(f\colon\omega^{\omega}\to\omega^{\omega}\), either \(f\) is in \(\mathsf{Dec}_{<(1+\alpha)}\) or \(J_{\alpha}\leq f\)._
There is also a generalization of the Posner-Robinson theorem to higher levels of the hyperarithmetic hierarchy, due to Slaman and Shore [14].
**Theorem 4.3** (Generalized Posner-Robinson theorem).: _For all computable ordinals \(\alpha\) and all reals \(x\), either \(x\leq_{T}0^{(\beta)}\) for some \(\beta<\alpha\) or there is some real \(y\) such that \(x\oplus y\geq_{T}y^{(\alpha)}\)._
As usual, there is also a relativized version.
**Theorem 4.4**.: _For all reals \(z\), all ordinals \(\alpha\) which are computable relative to \(z\) and all reals \(x\), either \(x\leq_{T}z^{(\beta)}\) for some \(\beta<\alpha\) or there is some real \(y\) such that \(x\oplus y\oplus z\geq_{T}(y\oplus z)^{(\alpha)}\)._
The main results of this paper also hold for these generalizations of the Solecki dichotomy and the Posner-Robinson theorem. In particular, we can introduce weakened versions of Theorems 4.2 and 4.4:
**Theorem 4.5**.: _For every countable ordinal \(\alpha\geq 1\) and every Borel function \(f\colon\omega^{\omega}\to\omega^{\omega}\), either \(f\) is in \(\mathsf{Dec}_{<(1+\alpha)}\) or \(J_{\alpha}\leq_{w}f\)._
**Theorem 4.6**.: _For every countable ordinal \(\alpha\), there is some cone of Turing degrees, \(C\), such that for all \(z\in C\) and all reals \(x\), \(\alpha\) is computable relative to \(z\) and either \(x\leq_{T}z^{(\beta)}\) for some \(\beta<\alpha\) or there is some real \(y\) such that \(x\oplus y\oplus z\geq_{T}(y\oplus z)^{(\alpha)}\)._
The proofs in sections 2 and 3 work, with almost no modifications, to show that the two theorems above are equivalent.
## Acknowledgments
Thanks to Andrew Marks and Ted Slaman for useful conversations and advice. |
2303.17633 | Interplay of many-body interactions and quasiperiodic disorder in the
all-band-flat diamond chain | We study the effects of quasiperiodic Aubry-Andr\'e (AA) disorder and
interactions on a one-dimensional all-band-flat (ABF) diamond chain. We
consider the application of disorder in two ways: a symmetric one, where the
same disorder is applied to the top and bottom sites of a unit cell, and an
antisymmetric one, where the disorder applied to the top and bottom sites are
of equal magnitude but with opposite signs. The single-particle wave-packet
dynamics for the clean system and when the disorder is applied symmetrically
show quantum caging; in the antisymmetric case, the wave-packet spreads over
the entire lattice. These results agree with our previous work, where compact
localization was observed in the case of the clean system and for symmetrically
disordered diamond lattices. In the presence of nearest-neighbour interactions,
nonergodic phases are observed in the case of a clean system and symmetrical
disorder; at higher disorder strengths, we find an MBL-like phase in the
symmetric case. However, many-body non-equilibrium dynamics of the system from
carefully engineered initial states exhibit quantum caging. In the
antisymmetric case, a nonergodic mixed phase, a thermal phase and an MBL-like
phase, respectively, are observed at low, intermediate and high disorder
strengths. We observe an absence of caging and initial state dependence (except
at the intermediate disorder strength) in the study of non-equilibrium
dynamics. | Aamna Ahmed, Nilanjan Roy, Auditya Sharma | 2023-03-30T18:00:07Z | http://arxiv.org/abs/2303.17633v1 | # Interplay of many-body interactions and quasiperiodic disorder in the all-band-flat diamond chain
###### Abstract
We study the effects of quasiperiodic Aubry-Andre (AA) disorder and interactions on a one-dimensional all-band-flat (ABF) diamond chain. We consider the application of disorder in two ways: a symmetric one, where the same disorder is applied to the top and bottom sites of a unit cell, and an antisymmetric one, where the disorder applied to the top and bottom sites are of equal magnitude but with opposite signs. The single-particle wave-packet dynamics for the clean system and when the disorder is applied symmetrically show quantum caging; in the antisymmetric case, the wave-packet spreads over the entire lattice. These results agree with our previous work, where compact localization was observed in the case of the clean system and for symmetrically disordered diamond lattices. In the presence of nearest-neighbour interactions, nonergodic phases are observed in the case of a clean system and symmetrical disorder; at higher disorder strengths, we find an MBL-like phase in the symmetric case. However, many-body non-equilibrium dynamics of the system from carefully engineered initial states exhibit quantum caging. In the antisymmetric case, a nonergodic mixed phase, a thermal phase and an MBL-like phases, respectively, are observed at low, intermediate and high disorder strengths. We observe an absence of caging and initial state dependence (except at the intermediate disorder strength) in the study of non-equilibrium dynamics.
## I Introduction
Flat band (FB) systems, which are characterized by highly degenerate energy levels and support _compact localized eigenstates_ (CLS) [1; 2], have been a subject of great interest over the last decade [3; 4; 5; 6; 7], although the concept is older and the term Aharonov-Bohm (AB) caging [8; 9; 10; 11] has been used for it. Compact localized states span strictly over a few unit cells, with zero probability amplitude elsewhere, in contrast to Anderson localization [12], where the 'spread' of a state dies down exponentially. While Anderson localization observed in non-interacting disordered systems is now a mature topic with a large body of literature around it, the localization characteristics of quantum systems in the presence of both disorder and interactions [13; 14; 15; 16; 17; 43] are an actively evolving area of research. A prominent example is the phenomenon of _many-body localization_ (MBL) [18; 19; 20; 21; 22; 23; 24], where the system fails to thermalize even in the presence of interactions. Translationally invariant single-particle flat band networks coupled with many-body interactions have also recently gained a lot of attention [25; 26; 27; 28; 29; 30]. These models exhibit nonergodic behaviour with a lack of particle transport at any interaction strength, exhibiting _many-body flat band localization_ (MBFBL) [26; 30]. This naturally motivates the study of flat band systems subjected to both disorder and interactions [31; 32; 33].
In one of our previous works [31], we systematically investigated the effects of turning on interactions in the presence of uniform disorder on the all-band-flat (ABF) diamond chain. This model shows a nonergodic mixed phase at low disorder strength, separated from the MBL phase at high disorder strength by a thermal phase at intermediate disorder strength. The addition of disorder to flat-band systems is known to yield exotic behavior [34; 35; 36]. In our recent work [36], we investigated the effect of a quasiperiodic Aubry-Andre (\(AA\)) on-site disorder [37; 38] on the ABF diamond chain. We found that the symmetry of the applied external potential plays a crucial role. With a symmetric disorder, it is possible to completely destroy the degeneracy and still preserve the compact localization of the eigenstates [39]. However, when the disorder is applied in an antisymmetric fashion, both the degeneracy and compact localization are destroyed and a robust _flat-band-based multifractality_(FBM) [40; 41; 42] is observed in an extensive region of the phase diagram. In the present work, we study the effects of interactions on the ABF diamond chain both in the absence and presence of quasiperiodic disorder.
We begin by exploring single-particle dynamics, which shows quantum caging in the long time limit for both the zero-disorder and symmetric disorder cases. However, when the disorder is applied in an antisymmetric manner since the compact localization of the eigenstates is destroyed [36], the time-evolved state also displays a spreading over all the lattice sites. We next investigate the properties of the clean system when interactions are turned on. The system manifests nonergodic phases at all interaction strengths in the zero-disorder case. However, from a study of non-equilibrium dynamics, we conclude that for some specially engineered initial states, many-body systems exhibit caging behaviour independent of the strength of the interaction.
In the simultaneous presence of disorder and interactions, the symmetry of the applied disorder is again crucial. A symmetric disorder coupled with interactions yields nonergodic phases in the low and intermediate disorder regimes and MBL-like behaviour in the high disorder regime. We find that the dynamics is dependent on the initial state; in particular, we observe quantum
caging for specific engineered initial states. The antisymmetric application of disorder leads to a mixed nonergodic phase at low disorder strength, a thermal phase at intermediate disorder strength, and an MBL-like phase at high disorder strength. The mixed phase obtained at low disorder strength is attributed to the presence of multifractal states in the single particle limit. Although we find initial state dependence in the non-equilibrium dynamics (except for intermediate disorder strengths, which yield a thermal phase), no quantum caging behaviour is seen.
This paper is organized as follows. In Section II, we discuss the details of the model. In Section III, we discuss the effects of \(AA\) disorder on the single-particle dynamics in the disorder-free, symmetric and antisymmetric cases. Section IV discusses the effects of interactions on the clean ABF diamond chain. Section V explores the symmetric application of quasiperiodic \(AA\) disorder on the interacting system. Section VI discusses the interplay of antisymmetric application of disorder and interactions. We then summarize our results in Section VII.
## II Model
We study the ABF diamond lattice, where the \(k^{\rm th}\) unit cell consists of three sites \(\alpha_{k}=\left\{u_{k},d_{k},c_{k}\right\}\) (see Fig. 1). The fermionic creation operators acting at the \(u\) (up), \(c\) (center), and \(d\) (down) sites respectively in the \(k^{\rm th}\) unit cell are \(\hat{u}_{k}^{\dagger},\hat{c}_{k}^{\dagger}\), and \(\hat{d}_{k}^{\dagger}\) and the Hamiltonian is:
\[\hat{H}=\hat{H}_{\rm hop}+\hat{H}_{\rm os}+\hat{H}_{\rm int}, \tag{1}\]
where
\[\hat{H}_{\rm hop}= -J\sum_{k=1}^{N/3}\left(-\hat{u}_{k}^{\dagger}\hat{c}_{k}+\hat{d} _{k}^{\dagger}\hat{c}_{k}+\hat{c}_{k}^{\dagger}\hat{u}_{k+1}+\hat{c}_{k}^{ \dagger}\hat{d}_{k+1}+\text{H.c.}\right)\] \[\hat{H}_{\rm os}= \sum_{k=1}^{N/3}\left(\zeta_{k}^{u}\hat{u}_{k}^{\dagger}\hat{u}_{k }+\zeta_{k}^{c}\hat{c}_{k}^{\dagger}\hat{c}_{k}+\zeta_{k}^{d}\hat{d}_{k}^{ \dagger}\hat{d}_{k}\right)\] \[\hat{H}_{\rm int}= V\sum_{k=1}^{N/3}\left(\hat{u}_{k}^{\dagger}\hat{u}_{k}\hat{c}_{k}^{ \dagger}\hat{c}_{k}+\hat{d}_{k}^{\dagger}\hat{d}_{k}\hat{c}_{k}^{\dagger}\hat{ c}_{k}+\hat{c}_{k}^{\dagger}\hat{c}_{k}\hat{u}_{k+1}^{\dagger}\hat{u}_{k+1}\right.\] \[+\left.\hat{c}_{k}^{\dagger}\hat{c}_{k}\hat{d}_{k+1}^{\dagger} \hat{d}_{k+1}\right). \tag{2}\]
The total number of lattice sites is denoted by \(N\), which should be a multiple of 3 owing to the unit cell structure of the periodic lattice. The hopping amplitude is \(J\), which is taken to be 1 for simplicity, and \(V\) is the strength of the nearest neighbour interaction. For each site of the \(k^{\rm th}\) unit cell, we include independent on-site Aubry-Andre potentials
\[\zeta_{k}^{\alpha}=\lambda_{\alpha}\cos(2\pi kb+\theta_{p}), \tag{3}\]
where the strength of the potential is \(\lambda_{\alpha}\) and the quasi-periodicity parameter \(b\) is taken to be the golden mean \((\sqrt{5}-1)/2\). The arbitrary global phase \(\theta_{p}\) is chosen randomly from a uniform random distribution \([0,2\pi]\). Here we consider two types of correlations between the on-site energies on the up '\(u\)' and down '\(d\)' sites: a symmetric configuration in which \(\zeta_{k}^{u}=\zeta_{k}^{d}\) and an antisymmetric configuration in which \(\zeta_{k}^{u}=-\zeta_{k}^{d}\).
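As an illustrative aside, the single-particle part of Eqs. (1)-(3), restricted to the two disorder configurations considered here, can be assembled numerically along the following lines. This is a minimal sketch, not code from the original work: the function name, the 0-indexed site ordering \(u_{k}\to 3k\), \(d_{k}\to 3k+1\), \(c_{k}\to 3k+2\), and the use of periodic boundary conditions are our own choices.

```python
import numpy as np

def abf_diamond_hamiltonian(n_cells, lam=0.0, config="symmetric",
                            J=1.0, b=(np.sqrt(5) - 1) / 2, theta=0.0):
    """Single-particle part of Eq. (1) (H_hop + H_os) for the ABF diamond chain.

    config = "symmetric" sets zeta_u = zeta_d, "antisymmetric" sets zeta_u = -zeta_d;
    zeta_c = 0 in both cases.  Periodic boundary conditions are assumed.
    """
    N = 3 * n_cells
    H = np.zeros((N, N))
    for k in range(n_cells):
        u, d, c = 3 * k, 3 * k + 1, 3 * k + 2
        un = 3 * ((k + 1) % n_cells)      # u-site of the next unit cell
        dn = un + 1                       # d-site of the next unit cell
        # H_hop = -J(-u_k^+ c_k + d_k^+ c_k + c_k^+ u_{k+1} + c_k^+ d_{k+1} + h.c.)
        H[u, c] = H[c, u] = +J
        H[d, c] = H[c, d] = -J
        H[c, un] = H[un, c] = -J
        H[c, dn] = H[dn, c] = -J
        # on-site Aubry-Andre potential, Eq. (3); the unit-cell index starts at 1
        zeta = lam * np.cos(2 * np.pi * (k + 1) * b + theta)
        H[u, u] = zeta
        H[d, d] = zeta if config == "symmetric" else -zeta
    return H
```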
In the clean non-interacting limit, the ABF diamond chain possesses three flat bands at energies \(\pm 2,0\) and no dispersive band. Consequently, the system is a good insulator, possessing only compact localized eigenstates. The system is highly degenerate, with the CLS occupying two unit cells. The other states corresponding to each flat band can be obtained by translating by an integer multiple of unit cells along the lattice. In the presence of symmetric disorder, remarkably, the eigenstates continue to be compactly localized in the original basis [36] although the translation symmetry and, thus, the flat band structure are broken. On the other hand, when the potential is applied in an antisymmetric manner, we find neither degeneracy nor compact localization [36], but a novel kind of _flat-band-based multifractality_.
Figure 2: Clean system: The particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), with \(m\) denoting the site index, for a single-particle initially at the (a) \(c-\)site and (c) \(d-\)site of the \(100^{\rm th}\) unit cell for system size \(N=600\) (200 unit cells) in the ABF diamond lattice. The return probability \(R(t)\) as a function of time \(t\) for a particle initially at the (b) \(c-\)site and (d) \(d-\)site of the \(100^{\rm th}\) unit cell.
Figure 1: Schematic representation of the diamond lattice with the \(u\) (up), \(d\) (down) and \(c\) (centre) sites of a representative unit cell confined by the black dashed lines. Nearest neighbour interaction \(V\) is represented by wiggly blue lines.
## III Single-particle dynamics
In this section, we explore the single-particle properties with the help of non-equilibrium dynamics of the particle density and the return probability. We will see that these results are consistent with the static properties of the eigenstates we obtained in our earlier study [36]. We study the dynamics by considering two initial states, one where the single-particle occupies the lattice site '\(c\)' of the \(k^{th}\) unit cell (\(\left|\psi_{in}\right\rangle=\left|c_{k}\right\rangle\)) and the other where it occupies the lattice site '\(d\)' of the \(k^{th}\) unit cell (\(\left|\psi_{in}\right\rangle=\left|d_{k}\right\rangle\)). We choose \(k=N/6\) so as to focus on the sites of the central unit cell - the total number of unit cells is \(N/3\). Once the initial state is fixed, we obtain the time evolved state at time \(t\) using the relation \(\left|\psi(t)\right\rangle=\sum_{m=1}^{N}\psi_{m}(t)\left|m\right\rangle=e^{- iHt}\left|\psi_{in}\right\rangle\) where \(m\) denotes the site index that runs over all the \(N\) sites i.e. \(1,2\ldots m,\ldots N\). Also, selecting the '\(u\)' site yields similar results as the '\(d\)' site case; thus, we do not show the results here.
### Zero disorder case
We first study the system in the clean limit. We begin by investigating the evolution of the particle density where \(p_{m}(t)=|\psi_{m}(t)|^{2}\) is the probability of site \(m\) being occupied at time \(t\). When the initial state is chosen to be \(|\psi_{in}\rangle=|c_{k}\rangle\), the particle remains compactly localized in two unit cells at all instances of time (see Fig. 2(a)). On the other hand, with the initial state taken to be \(|\psi_{in}\rangle=|d_{k}\rangle\), we observe that the particle becomes compactly localized in three unit cells at all instances of time (see Fig. 2(c)). Also, the number of unit cells in which the particle is compactly localized is robust with increasing system size.
We next calculate the return probability, which is defined as:
\[R(t)=|\left\langle\psi_{in}\right|\left.\psi(t)\right\rangle|^{2}. \tag{4}\]
It is the probability of finding the particle in the initial state after a time \(t\). In the disorder-free limit, we have plotted the return probability starting from both the initial states in the long time limit \(t=10^{9}\) in Fig. 2(b) and Fig. 2(d). The spectrum is highly degenerate in the disorder-free limit, yielding three energy levels, i.e. \(E=\pm 2,0\). Since the return probability is related to the level spacing of the energy levels, \(R(t)\) shows oscillatory behaviour [44; 45]. We conclude that the dynamics of the clean system is dependent on the initial state.
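For completeness, the quench protocol and the return probability of Eq. (4) can be sketched as follows. This is only an illustration: it reuses the hypothetical `abf_diamond_hamiltonian` helper introduced above and diagonalizes \(H\) once instead of exponentiating it at every time step.

```python
import numpy as np

def return_probability(H, site, times):
    """R(t) = |<psi_in| exp(-iHt) |psi_in>|^2 for a particle launched on `site`."""
    vals, vecs = np.linalg.eigh(H)              # H is real symmetric
    psi0 = np.zeros(H.shape[0]); psi0[site] = 1.0
    amps = vecs.conj().T @ psi0                 # overlaps with the eigenstates
    R = np.empty(len(times))
    for i, t in enumerate(times):
        psi_t = vecs @ (np.exp(-1j * vals * t) * amps)
        R[i] = np.abs(np.vdot(psi0, psi_t)) ** 2
    return R

# e.g. a particle on the c-site of the central unit cell of a clean chain:
# H = abf_diamond_hamiltonian(n_cells=200, lam=0.0)
# R = return_probability(H, site=3 * 100 + 2, times=np.logspace(-2, 9, 400))
```

The particle density follows in the same way from \(p_{m}(t)=|\psi_{m}(t)|^{2}\) of the evolved vector.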
### Symmetric disorder case
We next consider the introduction of disorder in the symmetric configuration:
\[\zeta_{k}^{u}=\zeta_{k}^{d}\qquad\text{and}\qquad\zeta_{k}^{c}=0. \tag{5}\]
Figure 3: In the symmetric case, the particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), with \(m\) denoting the site index, for a single particle initially at the \(c-\)site of the \(100^{\text{th}}\) unit cell with increasing disorder strength (a) \(\lambda=0.01\), (b) \(\lambda=2\) and (c) \(\lambda=100\) and (d) evolution of the return probability \(R\). For a single particle initially at the \(d-\)site of the \(100^{\text{th}}\) unit cell, the particle density as a function of time \(t\) for (e) \(\lambda=0.01\), (f) \(\lambda=2\) and (g) \(\lambda=100\) and (h) evolution of the return probability \(R\). Here system size is \(N=600\), and the number of disorder realizations is \(100\).
We have previously observed [36] that in the single particle limit, although the degeneracy of all the flat bands is lifted, the eigenstates are found to be compactly localized in two unit cells at all strengths of disorder.
For the initial state \(|\psi_{in}\rangle=|c_{k}\rangle\), from the evolution of the particle density, it can be observed that the state is compactly localized over two unit cells in the low, intermediate and high disorder regimes (see Figs. 3(a)-3(c)). However, at higher disorder \(\lambda=100\), the site on which the particle is initially localized shows a large occupation probability at all times, as indicated by the central white patch in Fig.3(c). Also, in the long time limit, the return probability has finite magnitude \(\approx 0.5\) for \(\lambda=0.01,2\) and a magnitude close to unity for \(\lambda=100\), as shown in Fig. 3(d). We then study the dynamics for the initial state \(|\psi_{in}\rangle=|d_{k}\rangle\). From the evolution of the particle density, we observe that at all strengths of disorder, the state is compactly localized over three unit cells (see Figs. 3(e)-3(g)). The return probability in the long time limit has a finite value at all disorder strengths, as shown in Fig. 3(h).
On the introduction of a symmetric disorder, the spectrum becomes dispersive, although the eigenstates are compactly localized. We obtain non-degenerate energy levels, whose magnitude depends on the disorder strength. As the return probability involves the contribution of various energy levels through the time evolution operator \(U(t)=e^{-iHt}\), its periodicity is affected by the various energy levels and the initial state.
There is a second way in which the symmetric disorder can be introduced wherein only the \(c\) sites are perturbed:
\[\zeta_{k}^{u}=\zeta_{k}^{d}=0\quad\mathrm{and}\quad\zeta_{k}^{c}\neq 0. \tag{6}\]
In this case, we know [36] that while the degeneracy is broken for the upper and lower bands, the flat band at \(E=0\) remains robust even at higher disorder strengths. We have checked that the single-particle dynamics within this scenario yields qualitatively similar results as discussed above.
### Antisymmetric disorder case
We next consider the application of the \(AA\) potential in an antisymmetric manner, defined by
\[\zeta_{k}^{u}=-\zeta_{k}^{d}=\lambda\cos(2\pi kb+\theta_{p})\quad\mathrm{and}\quad\zeta_{k}^{c}=0. \tag{7}\]
In the single-particle limit, we observed [36] that the tiniest of perturbations lifted the degeneracy, and the eigenstates were no longer compactly localized. We also reported the existence of a central band with extended nonergodic (multifractal) eigenstates separated from the Anderson localized states by a fractal mobility edge \(|E|<4/\lambda\)[46].
Figure 4: In the antisymmetric case, the particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), with \(m\) denoting the site index, for a single particle initially at the \(c-\)site of the \(100^{\mathrm{th}}\) unit cell with increasing disorder strength (a) \(\lambda=0.01\), (b) \(\lambda=2\) and (c) \(\lambda=100\) and (d) particle density at \(t=10^{9}\) for various strengths of disorder. For a single particle initially at the \(d-\)site of the \(100^{\mathrm{th}}\) unit cell, the particle density as a function of time \(t\) for, (e) \(\lambda=0.01\), (f) \(\lambda=2\) and (g) \(\lambda=100\) and (h) particle density at \(t=10^{9}\) for various strengths of disorder. Here system size is \(N=600\), and averaging over \(100\) disorder realizations have been considered.
From the evolution of the particle density, we observe that for the initial state \(\ket{\psi_{in}}=\ket{c_{k}}\) the wavefunction spreads over the entire lattice with time \(t\) at all strengths of the disorder (see Figs. 4(a)-4(c)). The same can be observed in the long time limit \(t=10^{9}\) (see Fig. 4(d)), with occupation probability \(p_{m}^{\infty}\), spreading non-uniformly over the entire space at all strengths of disorder, which is a signature of the multifractal states, as observed in the phase diagram in the static case [36]. The results are qualitatively the same for the dynamics associated with the other initial state \(\ket{\psi_{in}}=\ket{d_{k}}\) as shown in Figs. 4(e)-4(h). In both cases, in the higher disorder regime \(\lambda=100\), we observe that the site on which the particle is initially localized shows a large occupation probability in the long time limit (see Figs. 4(d),(h)). For low and intermediate disorder, we have also checked (results not shown here) that the return probability \(R(t)\) in the long time limit is of the order of \(O(10^{-2})\) due to the contribution of a large fraction of multifractal eigenstates. At higher disorder strengths, it is \(\approx 0.4\) for \(\ket{\psi_{in}}=\ket{c_{k}}\) and \(\approx 1\) for \(\ket{\psi_{in}}=\ket{d_{k}}\) owing to the presence of a large fraction of localized eigenstates.
## IV Interacting disorder-free system
In this section, we study the effects of the interaction \(V\) on the ABF diamond lattice in the zero disorder limit. We investigate the properties of the eigenstates with the help of the many-particle inverse participation ratio (MIPR) and the one-particle density matrix (OPDM). We also explore the dynamics of the particle density, entanglement entropy and return probability. For a system size \(N\) with \(N_{p}\) being the particle number, the dimension of the Hilbert space is \(D=\binom{N}{N_{p}}\) and the filling fraction is represented by \(\nu=\frac{N_{p}}{N}\). Using exact diagonalization, we obtain the many-body energy spectra \(E_{i}\) and the normalized eigenstates \(\ket{\psi}_{i}\), where \(i=1,2,\ldots,D\).
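A minimal exact-diagonalization sketch of this setup is given below. The helper names, the Jordan-Wigner sign convention and the `pairs` argument (meant to collect the \(u\)-\(c\) and \(d\)-\(c\) bonds entering \(\hat{H}_{\rm int}\) of Eq. (2)) are our own illustrative choices; `H1` can be the single-particle matrix sketched in Section II.

```python
import numpy as np
from itertools import combinations

def fock_basis(N, Np):
    """All occupation bit-tuples of N sites holding Np spinless fermions."""
    basis = []
    for occ in combinations(range(N), Np):
        n = [0] * N
        for s in occ:
            n[s] = 1
        basis.append(tuple(n))
    return basis, {state: i for i, state in enumerate(basis)}

def many_body_hamiltonian(H1, pairs, V, N, Np):
    """Eq. (1) restricted to the Np-particle sector.

    H1    : N x N single-particle matrix (hopping + on-site energies)
    pairs : nearest-neighbour site pairs (i, j) entering H_int
    """
    basis, index = fock_basis(N, Np)
    D = len(basis)
    H = np.zeros((D, D))
    for a, n in enumerate(basis):
        # diagonal part: on-site energies and density-density interaction
        H[a, a] += sum(H1[i, i] for i in range(N) if n[i])
        H[a, a] += V * sum(1 for (i, j) in pairs if n[i] and n[j])
        # off-diagonal part: hopping a_i^dag a_j with Jordan-Wigner string signs
        for i in range(N):
            for j in range(N):
                if i == j or H1[i, j] == 0 or n[j] == 0 or n[i] == 1:
                    continue
                sign = (-1) ** sum(n[min(i, j) + 1:max(i, j)])
                m = list(n); m[j] = 0; m[i] = 1
                H[index[tuple(m)], a] += sign * H1[i, j]
    return H, basis, index
```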
Expanding a normalized eigenstate \(\ket{\Psi}\) in the particle number constrained space as \(\ket{\Psi}=\sum_{i=1}^{D}C_{i}\ket{i}\), we compute the many-particle inverse participation ratio (MIPR):
\[\text{MIPR}=\sum_{i=1}^{D}\left|C_{i}\right|^{4}. \tag{8}\]
For a perfectly delocalized eigenstate \(\text{MIPR}=O(1)/D\), while for an extremely localized eigenstate, \(\text{MIPR}=O(1)\). Here we study the scaling of MIPR with \(D\), using the relation \(\text{MIPR}\propto\frac{1}{D^{\gamma}}\). \(\gamma\) is close to \(0\) in the MBL phase, while in a perfectly delocalized many-body phase \(\gamma=1\) and in the nonergodic many-body phase \(0<\gamma<1\)[47].
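For illustration, MIPR and the exponent \(\gamma\) could be extracted along the following lines, assuming eigenvectors obtained from the many-body Hamiltonian sketched above.

```python
import numpy as np

def mipr(eigvec):
    """Many-particle IPR of Eq. (8): sum_i |C_i|^4 over the Fock-basis amplitudes."""
    p = np.abs(eigvec) ** 2
    return float(np.sum(p ** 2))

def scaling_exponent(dims, miprs):
    """gamma from MIPR ~ D^(-gamma), fitted over several Hilbert-space dimensions."""
    slope, _ = np.polyfit(np.log(dims), np.log(miprs), 1)
    return -slope
```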
In Fig. 5, we fix the filling fraction \(\nu=1/3\) and extract \(\gamma\) by increasing the system size \(N\). Using the relation \(\varepsilon_{i}=\frac{E_{i}-E_{1}}{E_{D}-E_{1}}\), where \(E_{1}\) and \(E_{D}\) are the ground state and maximum energy levels, respectively, the energy levels are rescaled to lie within the range \(0\leq\varepsilon_{i}\leq 1\). We then study MIPR averaged over the states in the energy windows, which are specified as \([\varepsilon-0.01,\varepsilon+0.01]\), where \(\varepsilon=0.1,0.2,\ldots,0.9\) at \(V=1\). From the energy-resolved study, we observe that \(0.57\leqslant\gamma_{\varepsilon}\leqslant 0.68\) over the entire energy spectrum, indicating the existence of a nonergodic phase.
The localization characteristics of a many-body system can also be explored with the help of the one-particle density matrix (OPDM) [48; 49; 50]. The OPDM \(\rho_{\text{o}}\) for any many-body eigenstate \(\ket{\Psi}\) is defined as:
\[\left(\rho_{\text{o}}\right)_{ij}=\left\langle\Psi\left|a_{i}^{\dagger}a_{j} \right|\Psi\right\rangle, \tag{9}\]
Figure 5: MIPR averaged over the eigenstates in the energy window \([\varepsilon-0.01,\varepsilon+0.01]\) with \(1/D\) where \(\varepsilon=0.1,0.2,\ldots,0.9\), for a fixed interaction strength \(V=1\). Here system sizes considered are \(N=9,12\) and \(15\), and the filling fraction is fixed as \(\nu=1/3\).
Figure 6: Occupation spectrum \(\langle n_{\alpha}\rangle\) with scaled index \(\alpha/N\) at fixed interaction strengths (a) \(V=0.1\), (b) \(V=1\), and (c) \(V=10\), for different system sizes \(N=9,12,15\) and filling fraction \(\nu=1/3\). (d) The average OPDM entropy \(S_{o}\) with increasing interaction strength \(V\). Dashed lines denote the maximal value of \(S_{o}\). Averaging has been performed over the eigenstates in the energy window \(\varepsilon=[0.54,0.57]\).
where we have renamed the fermion operators at the various sites of the different unit cells as \((u_{1},d_{1},c_{1},u_{2},d_{2},c_{2},\ldots u_{k},d_{k},c_{k})=(a_{1},a_{2},a_{3} \ldots a_{N})\) where \(a_{i}^{\dagger}(a_{i})\) creates(annihilates) a fermion on-site \(i\) which runs from \(i=1,2,\ldots,N\). A compact way to define these new operators is to simply write:
\[u_{k} = a_{3(k-1)+1}\] \[d_{k} = a_{3(k-1)+2}\] \[c_{k} = a_{3(k-1)+3} \tag{10}\]
where \(k=1,2,\ldots,\frac{N}{3}\) runs over the unit cells. The diagonalization of the OPDM results in a basis of single-particle eigenstates called the natural orbitals \(\ket{\phi_{\alpha}}\), with \(\alpha=1,2,\ldots,N\) and their occupations (eigenvalues) denoted by \(n_{\alpha}\):
\[\rho_{\text{o}}\ket{\phi_{\alpha}}=n_{\alpha}\ket{\phi_{\alpha}}. \tag{11}\]
The trace of the OPDM is equal to the total number of particles in the system, \(\mathrm{tr}(\rho_{\text{o}})=\sum_{\alpha=1}^{N}n_{\alpha}=N_{p}\), and the natural orbitals are ordered with decreasing occupation: \(n_{1}\geq n_{2}\geq\ldots\geq n_{N}\).
These natural orbitals are localized in the MBL phase and delocalized in the ergodic phase. This behaviour of the natural orbitals is a many-body effect since, without interactions, the natural orbitals coincide with the single-particle energy eigenstates, which are all localized. In a non-interacting system, each many-body eigenstate \(\ket{\Psi}\) is a Slater determinant of \(N_{p}\) single-particle states, with the occupation spectrum \(n_{\alpha}=1\) for all \(\alpha\leq N_{p}\) and zero otherwise at any strength of disorder. In the MBL phase, all the natural orbitals corresponding to \(\alpha\leq N_{p}\) remain almost fully occupied (\(\langle n_{\alpha}\rangle\approx 1\)), while the others remain almost unoccupied (\(\langle n_{\alpha}\rangle\approx 0\)), resulting in a discontinuity \(\delta n=n_{N_{p}}-n_{N_{p}+1}\) that is close to unity. In the thermal phase, the occupations of all the orbitals approach the mean filling fraction \(\langle n_{\alpha}\rangle\approx\nu\). In the ergodic phase, the occupation spectrum is consistent with the eigenstate thermalization hypothesis, while in the MBL phase, the occupations preserve a discontinuity at an emergent Fermi edge.
From the occupation spectrum, the one-particle occupation entropy can be calculated as follows:
\[S_{o}=-\operatorname{tr}\rho_{o}\ln\rho_{o}=-\sum_{\alpha}n_{\alpha}\ln\left( n_{\alpha}\right). \tag{12}\]
The one-particle occupation entropy is large and proportional to the system size in the delocalized phase, corresponding to the volume law of thermal states. In contrast, in the localized phase, it is close to \(0\). In the ergodic phase \(\langle n_{\alpha}\rangle\approx\nu\), hence for a filling fraction \(\nu=1/3\) considered here, the maximal value of \(S_{o}\) will be \((N/3)\ln 3\). Thus in the thermal phase, it corresponds to the volume law displayed by many-body eigenstates, while it approaches \(0\) in the localized phase.
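A sketch of how \(\rho_{\text{o}}\) and \(S_{o}\) of Eqs. (9)-(12) can be evaluated for a given many-body eigenvector is shown below; it reuses the hypothetical `fock_basis` helper sketched above, and the Jordan-Wigner sign bookkeeping is our own convention.

```python
import numpy as np

def opdm(psi, basis, index, N):
    """One-particle density matrix rho_ij = <Psi| a_i^dag a_j |Psi>, Eq. (9)."""
    rho = np.zeros((N, N), dtype=complex)
    for n, c in zip(basis, psi):
        if c == 0:
            continue
        for j in range(N):
            if n[j] == 0:
                continue
            sgn_j = (-1) ** sum(n[:j])          # string of a_j
            m = list(n); m[j] = 0
            for i in range(N):
                if m[i] == 1:
                    continue
                sgn_i = (-1) ** sum(m[:i])      # string of a_i^dag
                m2 = list(m); m2[i] = 1
                rho[i, j] += np.conj(psi[index[tuple(m2)]]) * c * sgn_i * sgn_j
    return rho

def opdm_entropy(rho):
    """S_o = -sum_a n_a ln n_a over the natural-orbital occupations, Eq. (12)."""
    occ = np.linalg.eigvalsh(rho).real
    occ = occ[occ > 1e-12]
    return float(-np.sum(occ * np.log(occ)))
```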
In Figs. 6(a)-6(c), we have plotted the occupation spectrum \(\langle n_{\alpha}\rangle\) at different interaction strengths \(V=0.1,1\) and \(10\) and over a specific energy window \(\varepsilon=[0.54,0.57]\). We observe that the occupations are not close to \(1\) or \(0\), indicating the absence of MBL. Further, deep in the thermal phase, \(\langle n_{\alpha}\rangle\) are expected to become system size independent at the filling fraction (\(\nu=1/3\) here), while playing out on either side in a characteristic system-size-dependent manner; here, we only see a monotonic decrease with almost no system size dependence throughout. We conclude that the presence of interaction in the ABF diamond lattice results in a nonergodic phase - this is in agreement with the results of MIPR. We also study the OPDM entropy \(S_{o}\) (see Fig. 6(d)) for the states corresponding to the energies \(\varepsilon=[0.54,0.57]\). We observe that \(S_{o}\) does not reach its maximal value (dashed lines in Fig. 6(d)), nor does it decrease to \(0\) at any interaction strength \(V\), indicating nonergodic behaviour.
Figure 7: For the initial state given by Eq. 13, (a) the particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), where \(m\) is the site index at interaction strength \(V=1\), (b) the entanglement entropy \(S_{A}\) as a function of time \(t\) for a subsystem of size \(N_{A}=N/3\) and (c) the return probability \(R\) as a function of time \(t\) for interaction strengths \(V=0.001,0.01,1\) and \(3\). (d-f) Corresponding plots for the initial state given by Eq. 14. For all cases, \(N=18\) and \(\nu=1/6\).

In Section III from our discussion of the single-particle dynamics, we have seen how the number of unit cells occupied by the time-evolving state depends on the initial state. Here we study many-body non-equilibrium dynamics with the help of particle density, entanglement entropy and return probability. The study of entanglement entropy [51; 52] serves as a quantifier of localization in many-body systems. For the many-body state \(\ket{\psi}\), one can calculate the density matrix \(\rho=\ket{\psi}\bra{\psi}\). The system is then divided into two parts, one with \(N_{A}\) number of sites and the other with \(N_{B}=N-N_{A}\) sites. The reduced density matrix (RDM) is calculated by tracing over the subsystem \(B\) as \(\rho_{A}=\mathrm{Tr}_{B}(\rho)\), and the entanglement entropy is given by \(S_{A}=-\mathrm{Tr}(\rho_{A}\ln\rho_{A})\).
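As an illustration, \(S_{A}\) for a contiguous block of the first \(N_{A}\) sites can be evaluated as sketched below, assuming the number-conserving state is first embedded into the full \(2^{N}\) occupation basis; for such a contiguous block the Jordan-Wigner mapping does not change the bipartite entropy.

```python
import numpy as np

def embed_full(psi, basis, N):
    """Embed a fixed-particle-number state into the full 2^N occupation basis."""
    full = np.zeros(2 ** N, dtype=complex)
    for n, c in zip(basis, psi):
        idx = int("".join(map(str, n)), 2)   # site 1 = most significant bit
        full[idx] = c
    return full

def entanglement_entropy(psi_full, N, NA):
    """Von Neumann entropy S_A = -Tr(rho_A ln rho_A) of the leftmost NA sites."""
    M = psi_full.reshape(2 ** NA, 2 ** (N - NA))
    p = np.linalg.svd(M, compute_uv=False) ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))
```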
In order to understand the interplay of initial configuration and interaction \(V\), we consider two types of initial states for the system size \(N=18\) and a filling fraction \(\nu=1/6\). In the first case, we consider an initial state of the density wave type with particles on \(c-\)sites of alternate unit cells [31]:
\[\ket{\psi_{in}^{c}}=\prod_{i=1}^{N/6}\hat{c}_{2i-1}^{\dagger}\ket{0}. \tag{13}\]
In the second type of initial state, the \(d-\)sites of alternate unit cells are occupied:
\[\ket{\psi_{in}^{d}}=\prod_{i=1}^{N/6}\hat{d}_{2i-1}^{\dagger}\ket{0}. \tag{14}\]
Figs. 7(a)-7(c) show the dynamics starting from the initial state given by Eq. 13. When the particles are arranged such that \(|h-l|\geq 2\), where \(h,l\) are the unit-cell indices of any pair of particles, we observe distinct CLSs for each particle, as shown by the evolution of the particle density in Fig. 7(a). The particles show caging behaviour and remain unaffected by the interactions here. The same can be observed from the evolution of the entanglement entropy \(S_{A}\) for a subsystem of size \(N_{A}=N/3\) in Fig. 7(b). We observe that at all interaction strengths, there is zero entanglement between the two subsystems, indicating that the compact localized states are unaffected by the interaction strength \(V\). Also, we observe that the return probability shows perfect oscillations (see Fig. 7(c), where the time axis is shown on a linear scale to highlight the oscillations) in the long-time limit, independent of the strength of interaction.
We then consider the initial configuration given by Eq. 14, again corresponding to a \(1/6\) filling fraction. In this case, the CLS corresponding to a single particle spans \(3\) unit cells, as shown in Fig. 2(c). Consequently, from the evolution of the particle density in Fig. 7(d), we observe an overlap between the CLSs belonging to different particles. This suggests that the interaction among the initially caged particles comes into play. The same can also be observed from the evolution of \(S_{A}\) (Fig. 7(e)), where we plot the entanglement for a subsystem of size \(N_{A}=N/3\). After an initial transient up to \(t\approx 1\), independent of the interaction strength \(V\), the entanglement saturates to a significant value, indicating a nonergodic phase. From Fig. 7(f), we observe that the return probability displays continual oscillations about a nonzero mean value, although it does not reach \(1\). A closer look at this figure on a linear scale shows that the oscillations are interaction dependent, mainly controlled by the interaction-dependent gaps between the degenerate bands of the many-body energy spectrum. These energy gaps are constant throughout the dynamics, and hence the associated terms in the return probability do not vanish due to phase randomization, giving rise to energy-gap-dependent fluctuations in the return probability throughout the dynamics. This scenario is typical of a clean degenerate system perturbed with many-body interactions [31]. This behaviour is also consistent with the nonergodic phase argued from the previous quantities.
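To make the dynamics protocol explicit, the sketch below computes the return probability by full diagonalization; it assumes the standard definition \(R(t)=|\langle\psi(0)|\psi(t)\rangle|^{2}\) used in the earlier sections, that the many-body Hamiltonian is available as a dense Hermitian matrix in the number-constrained basis, and \(\hbar=1\); the function name is ours.

```python
import numpy as np

def return_probability(H, psi0, times):
    """R(t) = |<psi(0)| e^{-iHt} |psi(0)>|^2 via exact diagonalization (hbar = 1)."""
    evals, evecs = np.linalg.eigh(H)
    c = evecs.conj().T @ psi0                      # overlaps with the eigenbasis
    weights = np.abs(c) ** 2
    amp = np.array([np.sum(weights * np.exp(-1j * evals * t)) for t in times])
    return np.abs(amp) ** 2

# Example: times = np.logspace(-2, 9, 200) covers a time window comparable to the
# long-time limits discussed in the text.
```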
## V Interactions and Symmetric Disorder
In this section, we study the interplay of symmetric disorder and interactions. Specifically, we study the effects of the application of \(AA\) disorder in a symmetric manner where the disorder is introduced on the \(u\) and \(d\) sites:
\[\zeta_{k}^{u}=\zeta_{k}^{d}\qquad\text{and}\qquad\zeta_{k}^{c}=0, \tag{15}\]
in the presence of interactions. We first look at the eigenvalue and eigenvector properties of this Hamiltonian, and then see how these properties are reflected in a non-equilibrium dynamical setting.
### Statics
We start by investigating the level spacing distributions \(P(s)\). To do this, the energy levels are arranged in ascending order, and the consecutive spacings are obtained as \(s_{i}=E_{i+1}-E_{i}\). A large collection of such spacings is obtained with the aid of several disorder realizations. Next, the \(s_{i}\) are unfolded [53] by dividing the original level spacings by the mean level-spacing of the spectrum. We then study the distribution of these scaled spacings. It is well known [54] that when the states involved are localized, the probability distribution of the level spacings is Poissonian: \(P(s)=e^{-s}\). On the other hand, for delocalized states, the probabil
Figure 8: In the symmetric case, level spacing distribution \(P(s)\) with spacings \(s\) at interaction strength \(V=1\) for filling fraction \(\nu=1/3\) at disorder strength (a) \(\lambda=0.01\) and (b) \(\lambda=1\). The number of disorder realizations is \(50\) for \(N=15\) and \(200\) for \(N=9\) and \(N=12\).
ity distribution of the level spacings is Wigner-Dyson: \(P(s)=\frac{\pi}{2}se^{-\frac{\pi}{4}s^{2}}\) (GOE) [54].
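The unfolding-by-mean procedure described above can be summarized in a few lines; this is a simplified per-realization sketch, with the two reference distributions added for comparison, and the function name is ours.

```python
import numpy as np

def unfolded_spacings(energies):
    """Consecutive level spacings of one realization, rescaled by their mean."""
    s = np.diff(np.sort(np.asarray(energies)))
    return s / s.mean()

# Reference curves for the histogram of spacings collected over disorder realizations:
poisson = lambda s: np.exp(-s)                                          # localized states
wigner_goe = lambda s: (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)  # delocalized states
```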
In the disorder-free case, the single-particle ABF diamond lattice possesses massive degeneracy with only three energy levels. When interactions are turned on for the disorder-free model, we observe quasi-degeneracy as well as a large number of gaps in the spectrum. On the application of disorder, while degenerate bands are observed in the absence of interactions in the low disorder limit, many smaller bands are observed when the interactions are also turned on. In the case of high disorder, both in the presence and absence of interactions, the spectrum displays quasi-degeneracy and many smaller gaps, a behaviour typically observed in quasiperiodic systems [56]. This makes the level spacing distribution not a reliable tool for studying localization characteristics [55] in the low and high disorder regime. In the intermediate disorder regime, these effects are minimized due to the interplay of flat bands and disorder. Fig. 8 shows the probability distribution of the level spacing at interaction strength \(V=1\) for a fixed filling fraction \(\nu=1/3\) and different disorder strengths \(\lambda\). We observe that the spacing distribution is neither GOE nor does it show a perfect fit to the Poisson distribution both at \(\lambda=0.01\) (see Fig. 8(a)) and at \(\lambda=1\) (see Fig. 8(b)). The states are neither ergodic nor localized in the low and intermediate disorder regimes.
As discussed above, quasi-degeneracy and gaps in the low and high disorder limits yield inconclusive results when we study eigenvalue properties. We will now discuss MIPR at a fixed interaction strength \(V=1\) and various disorder strengths \(\lambda\), as shown in Fig. 9. For a fixed filling fraction \(\nu=1/3\), we extract the exponent \(\gamma\) by averaging the MIPR over the states belonging to the energy window \([\varepsilon-0.01,\varepsilon+0.01]\), where \(\varepsilon=0.1,0.2,\ldots,0.9\). In the low disorder regime \(\lambda=0.01\) (see Fig. 9(a)), \(\gamma\approx 0.6\) over the entire energy spectrum, indicating a nonergodic phase. We observe similar nonergodic behaviour in the intermediate regime \(\lambda=1\), as shown in Fig. 9(b).
Figure 10: The exponent \(\gamma\) extracted from the energy resolved MIPR in Fig. 9 with rescaled energy \(\varepsilon\) at interaction strength \(V=1\) and disorder strength \(\lambda=0.01,1\) and \(100\).
Figure 9: In the symmetric case, MIPR averaged over states in the energy window \([\varepsilon-0.01,\varepsilon+0.01]\) with \(1/D\), where \(\varepsilon=0.1,0.2,\ldots,0.9\) for a fixed interaction strength \(V=1\) and disorder strength (a) \(\lambda=0.01\), (b) \(\lambda=1\) and (c) \(\lambda=100\). Number of disorder realizations are \(400\), \(200\), and \(50\) for system sizes \(N=9,12\) and \(15\), respectively and the filling fraction is \(\nu=1/3\).
However, in the high disorder regime with \(\lambda=100\) (see Fig. 9(c)), the exponent has a significantly lower value \(\gamma\approx 0.1\) which is a signature of MBL-like behaviour. In Fig. 10, we have plotted the exponent \(\gamma\) at different disorder strengths and observe that it shows consistent behaviour over the entire spectrum.
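For completeness, a minimal sketch of how the exponent \(\gamma\) can be extracted is given below; it assumes the MIPR of the earlier sections is the inverse participation ratio \(\sum_{i}|C_{i}|^{4}\) averaged over eigenstates in the chosen energy window, and that it scales as \(D^{-\gamma}\) with the Hilbert-space dimension \(D\), as suggested by the \(1/D\) axis of Figs. 9 and 16; both the scaling form and the function names are our assumptions.

```python
import numpy as np

def mipr(eigvecs):
    """Mean inverse participation ratio of the selected eigenvectors (columns)."""
    return np.mean(np.sum(np.abs(eigvecs) ** 4, axis=0))

def extract_gamma(dims, miprs):
    """Fit MIPR ~ D**(-gamma) across Hilbert-space dimensions D (e.g. N = 9, 12, 15)."""
    gamma, _ = np.polyfit(np.log(dims), -np.log(miprs), 1)
    return gamma
```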
We also study the OPDM here, with the help of the occupation spectrum \(\langle n_{\alpha}\rangle\) at different disorder strengths \(\lambda=0.01,1\) and \(100\) (see Figs. 11(a)-11(c)) and over a specific energy window \(\varepsilon=[0.54,0.57]\). At low and intermediate disorder strengths, the occupation spectrum falls monotonically with practically no dependence on system size and with no signature of the thermal value \(\langle n_{\alpha}\rangle=\nu=1/3\); nor does it come close to \(0\) or \(1\), indicating nonergodic behaviour. However, in the high disorder regime, \(\lambda=100\), it reaches close to \(0\) and \(1\), indicating localized behaviour of the single-particle states and hence an MBL-like phase. The OPDM entropy \(S_{o}\) is also consistent with the above inferences. While it is quite far from its thermal value (represented by dashed lines in Fig. 11(d)) in the low and intermediate disorder regimes, it shows system size independence and goes close to \(0\) in the MBL-like phase.
### Nonequilibrium dynamics
Next, we study the many-body non-equilibrium dynamics with the help of the particle density, entanglement entropy and return probability. For the initial state given by Eq. 13, we observe from the evolution of the particle density that the CLSs corresponding to distinct particles remain isolated and unaffected by the interaction strength \(V\), as shown in Figs. 12(a)-12(c) for disorder strengths \(\lambda=0.01,1\) and \(100\), which results in caging. For the initial state given by Eq. 14, the amplitudes corresponding to different CLSs overlap, and the interaction comes into play. From the time evolution of the particle density (see Figs. 12(d)-12(f)), we observe that the compact localized nature is no longer sustained. While nonergodic behaviour is observed at low and intermediate disorder, at higher disorder strength (\(\lambda=100\)) the behaviour is comparatively less ergodic in the long-time limit.
We also study the entanglement entropy and return probability dynamics for both initial configurations, as shown in Fig. 13. In the case of the initial state corresponding to Eq. 13, for a subsystem of size \(N_{A}=N/3\), we observe that at all disorder strengths, \(S_{A}\approx 0\) (see Fig. 13(a)), which supports the observation of caging from the particle density. From the return probability dynamics (see Fig. 13(b)) at \(V=1\), we observe that while \(R(t)\) has a finite value in the low and intermediate disorder regimes, it approaches unity in the higher disorder regime. For the second initial state (Eq. 14), in the low and intermediate disorder regimes after the transient, we observe a sub
Figure 12: In the symmetric case, particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), where \(m\) is the site index, for the initial state given by Eq. 13 for interaction strength \(V=1\) and increasing disorder strengths (a) \(\lambda=0.01\), (b) \(\lambda=1\), (c) \(\lambda=100\). Corresponding plots show the evolution of the particle density (d)–(f) for the initial state given by Eq. 14. \(N=18\), \(\nu=1/6\) and \(100\) disorder realizations have been considered for all cases.
Figure 13: In the symmetric case, (a) entanglement entropy \(S_{A}\) for a subsystem of size \(N_{A}=N/3\) and (b) return probability \(R\) as a function of time \(t\) for the initial state given by Eq. 13. (c) Entanglement entropy \(S_{A}\) for a subsystem of size \(N_{A}=N/3\) and (d) return probability \(R\) as a function of time \(t\) for the initial state given by Eq. 14. The interaction strength is fixed as \(V=1\) with increasing disorder strengths \(\lambda\). For all plots, \(N=18,\nu=1/6\) and number of disorder realizations is \(100\).
diffusive regime, followed by \(S_{A}\) saturating at a value of the order of the thermal value, as shown in Fig. 13(c). In the high disorder regime, we observe a logarithmically slow growth which eventually saturates to a sub-thermal value. The same can be observed from the evolution of the return probability (see Fig. 13(d)). The dynamics of the return probability supports the results obtained from the particle density (see Figs. 12(d)-12(f)) and the entanglement entropy, with \(R(t)\) close to \(0\) in the low and intermediate disorder regimes and with a finite magnitude in the high disorder regime.
In Fig. 14, we study the dynamics when the initial state is of the density wave type with particles on \(c-\)sites of every unit cell, i.e. with filling fraction \(\nu=1/3\):
\[\ket{\psi_{in}}=\prod_{i=1}^{N/3}\hat{c}_{i}^{\dagger}\ket{0}. \tag{16}\]
The evolution of the particle density for the interaction strength \(V=1\) and disorder strengths \(\lambda=1\) and \(100\) is shown in Figs. 14(a)-14(b). While at low (not shown here) and intermediate disorder the particle density spreads uniformly over all the sites, at high disorder strength \(\lambda=100\) it remains significantly localized over the initially occupied sites, indicating MBL-like behaviour. We also study the dynamics of the entanglement entropy \(S_{A}\), shown in Fig. 14(c) at \(V=1\). After the initial transient, \(S_{A}\) shows a subdiffusive growth in the low and intermediate disorder regimes and saturates to a large value, indicating delocalization in the many-body system. The high disorder regime shows a logarithmic growth with time \(t\), saturating to a much lower magnitude compared to the thermal value, indicating MBL-like behaviour. The return probability dynamics is shown in Fig. 14(d). For low disorder, it saturates to a finite value, indicating nonergodic behaviour, while in the intermediate disorder regime the magnitude is much smaller (but nonzero), indicating behaviour closer to ergodic although the phase remains nonergodic. At higher disorder strengths, it is close to unity, which is a signature of MBL-like behaviour.
We further study the normalized participation ratio (NPR) [21], in the long time limit (\(t=10^{9}\)) to understand the many-body phases. We consider two types of initial states with filling fraction \(\nu=1/3\), one given by Eq. (16) and another density wave type state with particles on \(d-\)sites of every unit cell:
\[\ket{\psi_{in}}=\prod_{i=1}^{N/3}\hat{d}_{i}^{\dagger}\ket{0}. \tag{17}\]
For any time evolved many-body state, \(\ket{\Psi(t)}=\sum_{i=1}^{D}C_{i}(t)\ket{i}\), the NPR is given as:
\[\eta=\frac{1}{D\sum_{i}|C_{i}|^{4}}.\]
In the long time limit (\(t\rightarrow\infty\)), \(\eta\) is independent of system size \(N\) in the ergodic phase; in contrast, it decays exponentially with the system size in the localized phase [21]. In Fig. 14(e), we study the dependence
Figure 14: The particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), where \(m\) is the site index for the initial state given by Eq. 16 and for disorder strengths (a) \(\lambda=1\) and (b) \(\lambda=100\). (c) Entanglement entropy \(S_{A}\) for a subsystem of size \(N_{A}=N/3\) and (d) return probability \(R\) as a function of time \(t\). Here the interaction strength is fixed to \(V=1\), and the system size is \(N=15\), with filling fraction \(\nu=1/3\) and averaging has been done over \(50\) disorder realizations. (e) In the long time limit \(t=10^{9}\), the NPR \(\eta\) as a function of disorder strength \(\lambda\) for various system sizes \(N\), solid lines correspond to the initial state given by Eq. 16 while dashed lines correspond to the initial state given by Eq. 17. (f) The scaling exponent \(\kappa\) as a function of disorder strength \(\lambda\). Here \(V=1\) and \(\nu=1/3\), and the number of disorder realizations are at least \(50\) for all the system sizes.
Figure 15: In the antisymmetric case, energy-resolved gap-ratio as a function of the fractional eigenstate index \(\epsilon\) for disorder strength (a) \(\lambda=0.01\) and (b) \(\lambda=2\). The number of disorder realizations is \(500,400\), and \(50\) for system sizes \(N=9,12\) and \(15\), respectively.
of NPR \(\eta\) on the disorder strength \(\lambda\), fixed interaction strength \(V=1\) and increasing system sizes. Here the solid lines correspond to the initial state given by Eq. (16) while the dashed lines correspond to the initial state given by Eq. (17). We observe that in both cases, \(\eta\) is system size dependent at all strengths of disorder \(\lambda\), indicating the absence of the thermal phase. Further, the exponent \(\kappa\) can be extracted at various disorder strengths using the relation \(\eta\propto e^{-\kappa N}\). In Fig. 14(f), we plot \(\kappa\) with increasing disorder strength and observe that for both initial states, in the low and intermediate disorder regime \(0<\kappa<0.5\), indicating nonergodic behaviour. However, at higher disorder strength, it reaches near \(0.5\), which is a sign of many-body localization [21].
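A minimal sketch of the NPR diagnostic and the extraction of \(\kappa\) follows; it assumes the coefficients \(C_{i}(t)\) of the time-evolved state in the number-constrained basis are available (for instance from the exact-evolution routine sketched earlier), and the function names are ours.

```python
import numpy as np

def npr(coeffs):
    """Normalized participation ratio eta = 1 / (D * sum_i |C_i|^4)."""
    p = np.abs(coeffs) ** 2
    return 1.0 / (len(coeffs) * np.sum(p ** 2))

def extract_kappa(system_sizes, etas):
    """Fit eta ~ exp(-kappa * N) to the long-time NPR values for several N."""
    kappa, _ = np.polyfit(system_sizes, -np.log(etas), 1)
    return kappa
```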
For the second type of symmetric configuration, when the disorder is only considered on the \(c-\)site:
\[\zeta_{k}^{u}=\zeta_{k}^{d}=0\quad\text{and}\quad\zeta_{k}^{c}\neq 0 \tag{18}\]
we observe that the results are qualitatively similar to the one discussed above.
## VI Interactions and antisymmetric disorder
In this section, we study the interplay of antisymmetric disorder and interactions. Specifically, we consider the antisymmetric application of the \(AA\) disorder on the \(u\) and \(d\) sites:
\[\zeta_{k}^{u}=-\zeta_{k}^{d}\qquad\text{and}\qquad\zeta_{k}^{c}=0, \tag{19}\]
in the presence of interactions. We first study the eigenvalue and eigenvector properties, and then investigate them in a non-equilibrium dynamical setting as well.
### Statics
We begin by analyzing the eigenvalue properties with the aid of the level-spacing ratio \(r_{av}\)[54], defined as:
\[r_{av}=\bigg{\langle}\frac{1}{N-2}\sum_{i=1}^{N-2}\frac{\min[s_{i},s_{i+1}]}{ \max[s_{i},s_{i+1}]}\bigg{\rangle}. \tag{20}\]
Here the energies \(E_{i}\) are first arranged in ascending order and used to obtain the level spacings \(s_{i}=E_{i+1}-E_{i}\). The angular brackets in Eq. 20 denote the average over disorder realizations. In the delocalized and localized phases, \(r_{av}\) is expected to be approximately \(0.528\) and \(0.386\), respectively [54]. In Fig. 15, we study the energy-resolved level spacing ratio by dividing the many-body energy spectrum into several equal segments and calculating the local average of the level-spacing ratio for each segment. While in the low disorder regime (see Fig. 15(a)) \(r_{av}\approx 0.3\), indicating a mixed nonergodic phase, \(r_{av}\approx 0.52\) for \(\lambda=2\) (see Fig. 15(b)) suggests a thermal-like phase. As discussed in Section V.1, in the high disorder regime, the spectrum displays quasi-degeneracy and many gaps; we do not show the results here as they are inconclusive.
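The gap-ratio of Eq. 20 for a single disorder realization can be computed as in the sketch below; degenerate levels (zero spacings), which occur in the quasi-degenerate limits discussed above, would need to be handled separately, and the function name is ours.

```python
import numpy as np

def gap_ratio(energies):
    """Average level-spacing ratio r_av of Eq. 20 for one disorder realization."""
    s = np.diff(np.sort(np.asarray(energies)))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# r_av ~ 0.386 signals Poisson (localized) statistics and ~ 0.528 GOE (thermal)
# statistics; averaging over realizations and energy segments yields Fig. 15.
```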
We next study the eigenvector properties with the
Figure 17: The exponent \(\gamma\) extracted from the energy resolved MIPR in Fig. 16 with rescaled energy \(\varepsilon\) at interaction strength \(V=1\) and disorder strength \(\lambda=0.01,2\) and \(100\).
Figure 16: In the antisymmetric case, MIPR averaged over states in the energy window \([\varepsilon-0.01,\varepsilon+0.01]\) with \(1/D\), where \(\varepsilon=0.1,0.2,\ldots,0.9\) for a fixed interaction strength \(V=1\) and disorder strength (a) \(\lambda=0.01\), (b) \(\lambda=2\) and (c) \(\lambda=100\). Number of disorder realizations are \(400\), \(200\), and \(50\) for system sizes \(N=9,12\) and \(15\), respectively and the filling fraction is \(\nu=1/3\).
help of MIPR and the OPDM. We study MIPR at a fixed interaction strength \(V=1\) and various disorder strengths \(\lambda\) as shown in Fig. 16. For a fixed filling fraction \(\nu=1/3\), we extract the exponent \(\gamma\) by averaging the MIPR over the states belonging to the energy window \([\varepsilon-0.01,\varepsilon+0.01]\), where \(\varepsilon=0.1,0.2,\ldots,0.9\). In the low disorder regime, \(\lambda=0.01\) (see Fig. 16(a)), we observe a nonergodic mixed phase owing to the spread of the exponent \(\gamma\) over a wide range \(0.54<\gamma<0.80\). In the intermediate disorder case, \(\gamma\) has a significantly higher magnitude, which signifies thermal-like behaviour (see Fig. 16(b)). At high disorder strength \(\lambda=100\), the exponent \(\gamma\) has a small magnitude \(\approx 0.07\) over the entire spectrum indicating an MBL-like phase, as shown in Fig. 16(c). In Fig. 17, we have plotted the exponent \(\gamma\) at different disorder strengths and observe that in the low disorder regime, \(\gamma\approx 0.8\) corresponding to the energy window about \(\varepsilon=0.9\), which signifies the presence of thermal-like states that contribute to the mixed nonergodic behaviour. We observe that the intermediate and high disorder regimes show thermal-like and MBL-like behaviour, respectively, over the entire spectrum.
We next study the OPDM with the help of the occupation spectrum \(\langle n_{\alpha}\rangle\) at different disorder strengths \(\lambda=0.01,2\) and \(100\) (see Figs. 18(a)-18(c)) and over the energy window \(\varepsilon=[0.54,0.57]\). In the intermediate regime (Fig. 18(b)), we observe that with increasing system size, \(\langle n_{\alpha}\rangle\) spreads about the filling fraction \(\nu=1/3\) with a characteristic inverse system-size variation on either side of the critical value of \(\alpha/N\), indicating a thermal phase. In contrast, in the high disorder regime (Fig. 18(c)), \(\langle n_{\alpha}\rangle\) reaches close to \(0\) and \(1\), indicating localized behaviour of the single-particle states. Hence, the phase is MBL-like. In the low disorder regime (Fig. 18(a)), the occupation spectrum is neither spread about the thermal value \(\langle n_{\alpha}\rangle=\nu=1/3\) nor does it reach close to \(0\) and \(1\) (like in MBL), thus indicating mixed nonergodic behaviour. The OPDM entropy \(S_{o}\) (see Fig. 18(d)) signifies a nonergodic phase in the low disorder regime as it neither reaches the thermal value (dashed lines) nor the MBL value (\(0\)). In the intermediate disorder case, \(S_{o}\) reaches its thermal value denoted by dashed lines, especially for \(N=15\), while it approaches \(0\) in the high disorder case, indicating MBL-like behaviour.
### Nonequilibrium dynamics
We next study many-body non-equilibrium dynamics with several measures such as particle density, entanglement entropy and return probability. We first consider the initial state given by Eq. 13, a product state for a system size \(N=18\) and filling fraction \(\nu=1/6\), with particles occupying the \(c-\)sites of alternate unit cells. From the evolution of the particle density shown in Fig. 19(a)-19(c), we observe that for all disorder strengths, the CLS-like behaviour persists at early times (\(t<1\)). However,
Figure 19: In the antisymmetric case, particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), where \(m\) is the site index, for the initial state given by Eq. 13 for interaction strength \(V=1\) and increasing disorder strengths (a) \(\lambda=0.01\), (b) \(\lambda=2\), (c) \(\lambda=100\). Corresponding plots (d)–(f) show the evolution of the particle density for the initial state given by Eq. 14. \(N=18\), \(\nu=1/6\) and \(100\) disorder realizations have been considered for all cases.
Figure 18: Occupation spectrum \(\langle n_{\alpha}\rangle\) with scaled index \(\alpha/N\) at fixed interaction strengths \(V=1\) and disorder strengths (a) \(\lambda=0.01\), (b) \(\lambda=2\), and (c) \(\lambda=100\), for different system sizes \(N=9,12,15\) and fixed filling fraction \(\nu=1/3\). (d) The average OPDM entropy \(S_{o}\) with increasing strength of disorder \(\lambda\). Dashed lines denote the maximal value of \(S_{o}\). Averaging has been performed over the eigenstates in the energy window \(\varepsilon=[0.54,0.57]\) and using \(400,200\) and \(50\) disorder realizations for system sizes \(N=9,12\) and \(15\), respectively.
for time \(t>1\), while the particle density spreads uniformly over the entire lattice in the low and intermediate disorder regimes, it shows a comparatively less ergodic behaviour in the high disorder case. We also study the dynamics for the second initial state given by Eq. 14. For low and intermediate disorder strengths (see Fig. 19(d)-19(e)), we observe that the particle density is uniformly spread over all the sites indicating ergodic behaviour. In contrast, at higher disorder strength \(\lambda=100\) (see Fig. 19(f)), the particle density is significantly localized over the initially occupied sites.
The entanglement entropy and return probability dynamics for both initial configurations are shown in Fig. 20. In the case of the initial state corresponding to Eq. 13, \(S_{A}\) has near-zero magnitude at early times, as shown in Fig. 20(a). However, in the low and intermediate disorder regimes, \(S_{A}\) saturates to a large magnitude after the initial transient. At higher disorder strengths (\(\lambda=100\)), the behaviour is relatively more localized, with the entanglement entropy \(S_{A}\) (see Fig. 20(a)) saturating to a lower magnitude. From the return probability dynamics shown in Fig. 20(b), while for low and high disorder \(R(t)\) saturates to a finite value, indicating non-ergodic behaviour, in the intermediate disorder regime \(R(t)\approx 0\), which signifies a thermal-like phase. For the second initial state given by Eq. 14, in the low disorder regime after the initial transient, we observe oscillatory behaviour followed by a subdiffusive growth, after which \(S_{A}\) saturates near the thermal value (see Fig. 20(c)). The behaviour of \(S_{A}\) in the intermediate disorder regime is similar to the low disorder case, except that the oscillatory part is absent: after the transient, there is a sub-diffusive increase in \(S_{A}\), which saturates to a large value indicating delocalization in the many-body system. In contrast, in the high disorder region, it saturates to a low value, signifying MBL. We also study the evolution of the return probability, as shown in Fig. 20(d). In the long time limit, it saturates to \(0\) in the low and intermediate disorder regimes, indicating thermal-like behaviour, while it is close to \(1\) in the high disorder regime, which is a signature of many-body localization.
We next study the non-equilibrium dynamics of the system for the two initial states by considering filling fraction \(\nu=1/3\). The evolution of the particle density is shown in Fig. 21(a)- 21(b), at the interaction strength \(V=1\) and for disorder strengths \(\lambda=2\) and \(100\). We observe mixed nonergodic behaviour at low disorder strength (not shown here). In contrast, we see ergodic behaviour at intermediate disorder strengths, with the particle density spread uniformly over all the sites.
Figure 21: The particle density (whose value is represented by a colour according to the code shown) as a function of time \(t\), where \(m\) is the site index for the initial state given by Eq. 16 for disorder strengths (a) \(\lambda=2\) and (b) \(\lambda=100\). (c) Entanglement entropy \(S_{A}\) for a subsystem of size \(N_{A}=N/3\) and (d) return probability \(R\) as a function of time \(t\). Here the interaction strength is fixed to \(V=1\), and the system size is \(N=15\) with filling fraction \(\nu=1/3\), and averaging has been done over \(50\) disorder realizations. (e) In the long time limit \(t=10^{9}\), NPR \(\eta\) as a function of disorder strength \(\lambda\) for various system sizes \(N\); solid lines correspond to the initial state given by Eq. 16 while dashed lines correspond to the initial state given by Eq. 17. (f) The scaling exponent \(\kappa\) as a function of disorder strength \(\lambda\). Here \(V=1\) and \(\nu=1/3\), and the number of disorder realizations is at least \(50\) for all the system sizes.
Figure 20: In the antisymmetric case, (a) the entanglement entropy \(S_{A}\) for a subsystem of size \(N_{A}=N/3\) and (b) return probability \(R\) as a function of time \(t\) for the initial state given by Eq. 13. (c) Entanglement entropy \(S_{A}\) for a subsystem of size \(N_{A}=N/3\) and (d) return probability \(R\) as a function of time \(t\) for the initial state given by Eq. 14. The interaction strength is fixed as \(V=1\) for various disorder strengths \(\lambda\). For all plots, \(N=18,\nu=1/6\), and the number of disorder realizations is \(100\).
At high disorder strength \(\lambda=100\), the particle density is significantly localized over the initially occupied sites. We also study the dynamics of entanglement entropy \(S_{A}\) as shown in Fig. 21(c) at \(V=1\). After the initial transient, \(S_{A}\) shows a subdiffusive growth in both the low and intermediate disorder regimes; however, it saturates to a large value for \(\lambda=2\), indicating an ergodic phase, while for \(\lambda=0.01\), it saturates to a comparatively lower value indicating weak delocalization in the system. In the high disorder case, after the initial transient, \(S_{A}\) saturates to a sub-thermal value indicating localization. From the dynamics of the return probability (see Fig. 21(d)), we observe that it saturates to a finite value in the low and high disorder phases. In contrast, it saturates close to \(0\) in the intermediate phase indicating thermal behaviour.
We next study the NPR [21] in the long time limit for the initial states given by Eq. 16 (solid lines) and Eq. 17 (dashed lines) as shown in Fig. 21(e). In the case of the initial state given by Eq. 16, while \(\eta\) is system size dependent in the low and high disorder regimes, it shows system size independence in the intermediate disorder case. We then extract the exponent \(\kappa\) and plot it as a function of disorder strength \(\lambda\) in Fig. 21(f). In the low disorder case, \(\kappa\) lies between \(0\) and \(0.5\), indicating nonergodic behaviour while in the intermediate regime \(\kappa=0\), indicating a thermal phase. In contrast, in the high disorder regime, we observe \(\kappa\approx 0.5\), which is a signature of many-body localization. For the other initial state given by Eq. 17, in the low disorder regime, \(\eta\) tends to be system size-independent (see Fig. 21(e)), with \(\kappa\) close to \(0\) (Fig. 21(f)), which indicates that it is close to the ergodic phase. In the intermediate regime, the phase is thermal as \(\eta\) is system size independent and \(\kappa=0\). In the higher disorder regime, \(\eta\) is system size dependent with \(\kappa\) close to \(0.5\), indicating an MBL-like phase.
## VII Conclusion
In this work, we have systematically investigated the single-particle dynamics and the interplay of many-body interactions and quasiperiodic AA disorder in the one-dimensional ABF diamond lattice. We find that the compact localized states observed in the single-particle case for the clean system, and when the disorder is applied symmetrically [36], sustain quantum caging even in a non-equilibrium dynamical set-up. In contrast, in the presence of antisymmetric disorder, the wave function spreads over the entire lattice in the long-time limit. This can be attributed to the loss of compact localization and the presence of multifractal eigenstates in the static case [36].
In the presence of interactions and zero disorder in the system, nonergodic phases are observed at all interaction strengths. In general, non-equilibrium dynamics support the findings from the static case. However, the many-body system manifests quantum caging for specially engineered initial states. When our interacting many-body system is subjected to symmetric disorder, nonergodic phases are observed at low and intermediate disorder strengths. In contrast, an MBL-like phase is observed at higher disorder strengths. Studying non-equilibrium dynamics, we find nonergodic regimes in the case of low and intermediate disorder strengths, while localization characteristics dominate in the high disorder case. Quantum caging behaviour is supported for specific initial configurations, independent of the strength of interaction or disorder.
The antisymmetric application of disorder in the presence of interactions results in three distinct phases: a nonergodic mixed phase at low disorder strengths, a thermal phase at intermediate disorder strengths and an MBL-like phase at higher strengths of disorder. Even in the mixed nonergodic phase, some states show thermal-like behaviour. A study of the non-equilibrium dynamics shows that for different initial states, a mixed phase with a varying degree of non-ergodicity exists in the low disorder regime, while for intermediate disorder, the phase is always thermal. In the high disorder case, the phase shows varying degrees of non-ergodicity, inclined towards many-body localization.
We also remark on the case when the on-site disorder is drawn from a uniform, uncorrelated random distribution in the presence of interactions. Interestingly, for both the symmetric and antisymmetric cases, the resulting phases at low, intermediate and high disorder strengths are similar to those obtained with the quasiperiodic disorder. We conclude that the observed many-body phases are governed by the symmetry of the applied disorder, as reported previously in the single-particle case [36]. Also, in the antisymmetric case, the presence of mixed non-ergodic, thermal and MBL-like phases at low, intermediate and high disorder strengths, respectively, is qualitatively similar to the phases that emerge when uniform disorder is applied on all sites, as reported in [31].
Thus, our work shows that the interplay of quasiperiodic disorder, interactions and flat-band structure in the diamond lattice results in an exciting phase diagram. Exploring distinctive phases in other interacting and disordered flat band systems would be an interesting direction for further research. With the recent surge in the experimental study of engineered flat-band systems, such phases could be realized in optical lattices for cold atoms.
###### Acknowledgements.
We are grateful to Ajith Ramachandran for the careful reading of the manuscript and discussions. We are grateful to the High-Performance Computing (HPC) facility at IISER Bhopal, where large-scale computations of this project were run. A.A. is grateful to the Council of Scientific and Industrial Research (CSIR), India, for her PhD fellowship. N.R. acknowledges support from the Indian Institute of Science under the
IoE-IISc fellowship program. A.S. acknowledges financial support from SERB via the grant (File Number: CRG/2019/003447) and DST via the DST-INSPIRE Faculty Award [DST/INSPIRE/04/2014/002461].
## Appendix: Entanglement entropy
In this section, we analyze the effect of interactions on the ABF diamond lattice by calculating the half-chain entanglement entropy \(S_{A}\) of all the many-body eigenstates. We first discuss the case when interactions are turned on in the disorder-free model, and then consider cases where disorder is applied in a symmetric and antisymmetric manner in the presence of interactions.
Fig. 22 shows the half-chain entanglement entropy \(S_{A}\) of all the many-body eigenstates at various interaction strengths \(V\) for a system size \(N=18\) and filling fraction \(\nu=1/6\). For all interaction strengths \(V=0.1,1\) and \(10\), a significant fraction of the eigenstates shows a large entanglement entropy, but \(S_{A}\) does not vary smoothly with the fractional eigenstate index \(\epsilon\). This indicates the presence of a nonergodic phase, which agrees with the MIPR and OPDM results shown in Section IV.
Next, we study the application of symmetric disorder, for which we have reported the presence of compactly localized states in the single-particle case [36]. Interestingly, for non-interacting fermions, the half-chain entanglement entropy shows that the eigenstates can be broadly divided into two categories (see Fig. 23(a)): one with \(S_{A}\neq 0\) and the other with \(S_{A}=0\). We choose a state corresponding to \(S_{A}\neq 0\) with index \(EV=1\) and another corresponding to \(S_{A}=0\) with index \(EV=13\). In Fig. 23(c), we plot both eigenstates in the particle-number-constrained space with the index \(i\) running over \(1,2,\ldots,D\) and observe non-zero amplitude only on a finite number of basis states. For a given state, we calculate the particle density \(\left\langle a_{j}^{\dagger}a_{j}\right\rangle\) (see Eq. 9) on all sites \(j=1,2,\ldots,N\), as shown in Fig. 23(d). For the state corresponding to \(S_{A}=0\), the particle density is zero both at the subsystem boundary (\(N/2\) or \(N/2+1\)) and at the system boundary (\(1\) or \(N\), since periodic boundary conditions are used), thereby decoupling the two subsystems. However, the same is not true for the states where the entanglement is non-zero. This effect persists for all strengths of the disorder as well as upon the introduction of interactions (see Fig. 23(b)). We conclude that this is a many-body effect in a disordered system hosting compactly localized states. Also, a blind disorder average of the entanglement entropy would wash out this behaviour, as shown in Fig. 24.
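The site-resolved density used in this diagnostic can be obtained directly from the amplitudes in the number-constrained basis, as in the minimal sketch below; it assumes the basis states are enumerated as 0/1 occupation vectors in the same order used to build the Hamiltonian, and the names are ours.

```python
import numpy as np

def particle_density(coeffs, basis_occupations):
    """Site-resolved density <a_m^dag a_m> for a state |psi> = sum_i C_i |i>.

    coeffs            : (D,) amplitudes C_i in the number-constrained basis.
    basis_occupations : (D, N) array; row i holds the 0/1 site occupations of |i>.
    """
    return (np.abs(coeffs) ** 2) @ basis_occupations
```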
Fig. 25(a) shows the half-chain entanglement entropy for all the many-body eigenstates in the case of antisymmetric application of disorder and interaction strength
Figure 23: The half chain entanglement entropy \(S_{A}(\epsilon)\) for all the eigenstates as a function of the fractional eigenstate index \(\epsilon\) for disorder strength \(\lambda=0.01\) and interaction strengths (a) \(V=0\) and (b) \(V=1\). (c) The wavefunction probability of two states indexed as \(EV:1\) with \(S_{A}\neq 0\) and \(EV:13\) with \(S_{A}=0\). Here \(i=1,\ldots,D\), where \(D\) is the dimension of the particle number constraint Hilbert space. (d) The corresponding particle density \(\left\langle a_{m}^{\dagger}a_{m}\right\rangle\), where \(m\) is site index at \(V=0\). For all the cases, \(N=18\), filling fraction \(\nu=1/6\) and single disorder realization has been considered.
Figure 22: The half-chain entanglement entropy \(S_{A}(\epsilon)\) of all the eigenstates as a function of the fractional eigenstate index \(\epsilon\) for \(N=18\) with \(\nu=1/6\) for interaction strengths \(V=0.1,1\) and \(10\).
\(V=1\). In the low disorder case, \(S_{A}\) shows both rises and dips, indicating the presence of a nonergodic mixed phase. In the intermediate disorder regime, \(S_{A}\) has a smooth dependence on the eigenstates with a large magnitude, indicating thermal behaviour. In contrast, for the higher disorder \(\lambda=100\), we observe \(S_{A}\) with a negligible value, indicating MBL-like behaviour. These results agree with those discussed in Section VI from the study of MIPR and OPDM. In Fig. 25(b), we plot the entanglement entropy \(S_{A}\) for a fixed filling fraction \(\nu=1/3\) and subsystem size \(N_{A}=N/3\) with increasing system size \(N\) in the energy window \(\varepsilon=[0.54,0.57]\)[57]. We find that the intermediate disorder case (\(\lambda=2\)) follows a volume law scaling, while an area-law-like behaviour is seen in the high disorder regime (\(\lambda=100\)). In the low disorder regime, \(S_{A}\) initially increases and eventually saturates as a function of system size, thus indicating nonergodic behaviour. We also study a single disorder realization of the half-chain entanglement entropy with \(\lambda=0.01\) and \(V=1\) (see Fig. 25(c)). Unlike the symmetric disorder case, we observe that \(S_{A}\neq 0\) for every eigenstate. We also study the particle density (see Fig. 25(d)) for the infinite temperature state and observe that in the low disorder case (\(\lambda=0.01\)), it is unevenly spread over the lattice sites, indicating nonergodic behaviour. In contrast, at \(\lambda=2\), it spreads out uniformly, showing thermal behaviour, and at higher disorder strengths \(\lambda=100\), it is localized over a few sites.
|
2310.18438 | Exploring Shape Embedding for Cloth-Changing Person Re-Identification
via 2D-3D Correspondences | Cloth-Changing Person Re-Identification (CC-ReID) is a common and realistic
problem since fashion constantly changes over time and people's aesthetic
preferences are not set in stone. While most existing cloth-changing ReID
methods focus on learning cloth-agnostic identity representations from coarse
semantic cues (e.g. silhouettes and part segmentation maps), they neglect the
continuous shape distributions at the pixel level. In this paper, we propose
Continuous Surface Correspondence Learning (CSCL), a new shape embedding
paradigm for cloth-changing ReID. CSCL establishes continuous correspondences
between a 2D image plane and a canonical 3D body surface via pixel-to-vertex
classification, which naturally aligns a person image to the surface of a 3D
human model and simultaneously obtains pixel-wise surface embeddings. We
further extract fine-grained shape features from the learned surface embeddings
and then integrate them with global RGB features via a carefully designed
cross-modality fusion module. The shape embedding paradigm based on 2D-3D
correspondences remarkably enhances the model's global understanding of human
body shape. To promote the study of ReID under clothing change, we construct 3D
Dense Persons (DP3D), which is the first large-scale cloth-changing ReID
dataset that provides densely annotated 2D-3D correspondences and a precise 3D
mesh for each person image, while containing diverse cloth-changing cases over
all four seasons. Experiments on both cloth-changing and cloth-consistent ReID
benchmarks validate the effectiveness of our method. | Yubin Wang, Huimin Yu, Yuming Yan, Shuyi Song, Biyang Liu, Yichong Lu | 2023-10-27T19:26:30Z | http://arxiv.org/abs/2310.18438v1 | # Exploring Shape Embedding for Cloth-Changing Person Re-Identification via 2D-3D Correspondences
###### Abstract.
Cloth-Changing Person Re-Identification (CC-ReID) is a common and realistic problem since fashion constantly changes over time and people's aesthetic preferences are not set in stone. While most existing cloth-changing ReID methods focus on learning cloth-agnostic identity representations from coarse semantic cues (e.g. silhouettes and part segmentation maps), they neglect the continuous shape distributions at the pixel level. In this paper, we propose Continuous Surface Correspondence Learning (CSCL), a new shape embedding paradigm for cloth-changing ReID. CSCL establishes continuous correspondences between a 2D image plane and a canonical 3D body surface via pixel-to-vertex classification, which naturally aligns a person image to the surface of a 3D human model and simultaneously obtains pixel-wise surface embeddings. We further extract fine-grained shape features from the learned surface embeddings and then integrate them with global RGB features via a carefully designed cross-modality fusion module. The shape embedding paradigm based on 2D-3D correspondences remarkably enhances the model's global understanding of human body shape. To promote the study of ReID under clothing change, we construct 3D Dense Persons (DP3D), which is the first large-scale cloth-changing ReID dataset that provides densely annotated 2D-3D correspondences and a precise 3D mesh for each person image, while containing diverse cloth-changing cases over all four seasons. Experiments on both cloth-changing and cloth-consistent ReID benchmarks validate the effectiveness of our method. Our project page is located at [https://CSCL-CC.github.io](https://CSCL-CC.github.io).
Cloth-Changing Person Re-Identification; Shape Embedding; 2D-3D Correspondences; Large-Scale Dataset; Cross-Modality Fusion. +
Footnote †: Corresponding Author.
Some existing methods leverage dense pose estimation (Beng et al., 2017) to align the texture of body parts based on UV mapping. However, they do not further explore reliable shape representations for the ReID task. Additionally, these methods have a major defect in that they require partitioning the 3D model into charts, and the resulting discretized UV spaces prevent them from learning continuous correspondences over the entire body surface. As shown in Figure 1(c), the use of independent UV coordinate systems for each body part results in noticeable part seams in the estimated IUV maps. There are also some methods (Beng et al., 2017; Wang et al., 2018) directly estimating SMPL (Shi et al., 2019) shape parameters as 3D shape features. However, the SMPL shape parameter space is highly incompatible with the image feature space, making it challenging to effectively integrate features from these two modalities.
In this paper, we propose a Continuous Surface Correspondence Learning (CSCL) framework, which represents a new shape embedding paradigm for cloth-changing ReID. CSCL pixel-wisely maps a person image to a continuous embedding space of the SMPL mesh surface through vertex classification. Essentially, learning continuous 2D-3D correspondences aligns a person image to the entire surface of a 3D human model, and simultaneously obtains a pixel-level continuous distribution of body shape on the canonical 3D surface. Even for different persons wearing the same clothes, there can be significant differences in their body shape distributions. Therefore, we further extract fine-grained discriminative shape features from the established correspondences, and integrate them with global RGB features via an optimized cross-modality fusion module based on the transformer (Wang et al., 2019), which greatly compensates for the lost shape details in global RGB features. We incorporate a novel Latent Convolutional Projection (LCP) layer for feature projection. The LCP layer enhances the sharing and correlation among tokens via adding an additional latent embedding, which is the latent vector of an auto-encoder designed to reconstruct the token map. It is also noteworthy that the proposed framework generalizes well to the cloth-consistent cases, indicating the reliability of the learned shape features.
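To illustrate the pixel-to-vertex classification idea at the core of CSCL, a toy PyTorch sketch is given below; the module name, channel sizes, the use of 6890 SMPL vertices, and the dot-product scoring are all illustrative assumptions for exposition and do not reproduce the exact architecture or the cross-modality fusion module described above.

```python
import torch
import torch.nn as nn

class SurfaceCorrespondenceHead(nn.Module):
    """Toy sketch: embed each pixel and score it against per-vertex embeddings
    of a canonical SMPL surface (all sizes are illustrative)."""

    def __init__(self, in_ch=256, emb_dim=16, num_vertices=6890):
        super().__init__()
        self.pixel_proj = nn.Conv2d(in_ch, emb_dim, kernel_size=1)
        # One learnable embedding per canonical vertex spans the continuous surface space.
        self.vertex_emb = nn.Parameter(0.02 * torch.randn(num_vertices, emb_dim))

    def forward(self, feat):
        pix = self.pixel_proj(feat)                  # (B, E, H, W) pixel-wise surface embeddings
        b, e, h, w = pix.shape
        flat = pix.flatten(2).transpose(1, 2)        # (B, H*W, E)
        logits = flat @ self.vertex_emb.t()          # (B, H*W, V) pixel-to-vertex scores
        return pix, logits.view(b, h, w, -1)

# Training would apply a cross-entropy loss on the logits at annotated pixels (e.g. from
# DP3D), pulling matched pixels and vertices together in the shared embedding space.
```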
However, there is currently no publicly available cloth-changing ReID dataset with ground-truth dense 2D-3D correspondences. To facilitate the research, we construct a large-scale cloth-changing ReID dataset named 3D Dense Persons (DP3D), which contains 39,100 person images of 413 different persons captured by 15 cameras over all four seasons. We annotated dense 2D-3D correspondences for each person image via a carefully designed annotation system, ensuring 80 to 125 annotations for each image.
The main contributions of this work are summarized as follows:
* We propose a new shape embedding paradigm for cloth-changing ReID that establishes pixel-wise and continuous correspondences between a 2D image plane and a canonical 3D human body surface. To the best of our knowledge, this is also the first work to explore global shape representations for cloth-changing ReID via 2D-3D correspondences.
* We develop an optimized cross-modality fusion module to adaptively integrate shape features with global RGB features, where a novel Latent Convolutional Projection (LCP) layer is designed to perform feature projection.
* We construct 3D Dense Persons (DP3D), which is the first large-scale cloth-changing ReID dataset with densely annotated 2D-3D correspondences and a corresponding 3D mesh for each person image, while containing highly diverse cloth-changing cases in real-world scenarios.
* We demonstrate our proposed method is applicable to both cloth-changing and cloth-consistent situations, as shown by extensive results on four cloth-changing ReID datasets including DP3D and two general ReID datasets.
## 2. Related Works
In this section, we first review the literature on cloth-changing person re-identification and corresponding datasets, then introducing the research related to continuous surface embeddings in the context of 3D shape analysis.
### Cloth-Changing Person ReID
Existing cloth-changing ReID methods can be categorized into decoupling-based methods and auxiliary modality-based methods. Decoupling-based methods (Wang et al., 2019; Wang et al., 2019; Wang et al., 2020) aim to decouple cloth-agnostic features directly from RGB images without multi-modal auxiliary information. AFD-Net (Wang et al., 2019) disentangled identity and clothing features via generative adversarial learning. CAL (Wang et al., 2019) proposed to penalize the predictive power of the ReID model with respect to clothes via a clothes-based adversarial loss, while UCAD (Wang et al., 2019) enforced the identity and clothing features to be linearly independent in the feature space via an orthogonal loss.
Auxiliary modality-based methods (Beng et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020) are considered more robust since visual texture features can be filtered under the supervision of human semantics. FSAM (Wang et al., 2019) proposed to complement 2D shape representations obtained from human silhouettes for global features. MVSE (Wang et al., 2019) embedded multigranular visual semantic information into the model.
Figure 1. Comparison of different multi-modal auxiliary information for person re-identification. (a) Images of the same person in DP3D; (b) Coarse part segmentation, with only part labels estimated; (c) Discretized DensePose IUV estimation, with obvious seams between body parts; (d) Continuous 2D-3D correspondences between image pixels and the entire body surface, obtained through our CSCL framework.
Pixel Sampling (Wang et al., 2017) leveraged a human parsing model to recognize upper clothes and pants, and then randomly changed them by sampling pixels from other people, enforcing the model to automatically learn cloth-agnostic cues. DSA-ReID (Wang et al., 2018) and ASAG-Net (Wang et al., 2019) proposed to use dense human semantics to generate semantics-aligned images in the discretized DensePose UV space, while 3DSL (Chen et al., 2019) considered the low-dimensional SMPL shape parameters as 3D shape features and directly fused them with global features. None of these methods consider establishing pixel-wise and continuous 2D-3D correspondences between image pixels and the entire 3D body surface, which effectively bridges the gap between the 2D and 3D shape space.
### Cloth-Changing ReID Datasets
General person ReID datasets (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) assume that the appearance of the same individual is consistent, which is often not the case in real-world scenarios. Models trained on these datasets rely excessively on clothing appearance, making it difficult for them to generalize well to long-term cloth-changing scenarios. In recent years, a few datasets were collected specifically for the cloth-changing setting. Celebrities (Celebrities, 2018) was collected from the Internet and consists of street snapshots of celebrities. PRCC (Wang et al., 2018) provides indoor cloth-changing person images with their corresponding contour sketches. COCAS (Wang et al., 2018) is a large-scale dataset that provides a variety of clothes templates for cloth-changing person ReID. LTCC (Wang et al., 2018) assumes that different people wear different clothes and assigns a unique clothing label to each person image in the dataset. VC-Clothes (Wang et al., 2018) is a large realistic synthetic dataset rendered by the GTA5 game engine. CSCC (Wang et al., 2018) considers different degrees of cloth-changing. NKUP (Wang et al., 2018) contains both indoor and outdoor person images with complex illumination conditions, while NKUP+ (Wang et al., 2018) has more diverse scenarios, perspectives, and appearances.
### Continuous Surface Embeddings
Continuous Surface Embeddings (CSE) aim to learn, for each pixel of an RGB image, an embedding of the corresponding 3D vertex (Wang et al., 2018), which demonstrates strong human body representation capabilities. HumanGPS (Wang et al., 2018) employs contrastive learning to enhance CSE representations. BodyMap (Wang et al., 2018) introduced a coarse-to-fine learning scheme, establishing high-definition full-body continuous correspondences by refining coarse correspondences. SurfEmb (Chen et al., 2019) applied Continuous Surface Embeddings to the field of object pose estimation and learned correspondence distributions in a self-supervised fashion.
## 3. The 3D Dense Persons Dataset
Obtaining ground-truth 3D structure information for pedestrians is of substantial importance as it can address potential geometric ambiguities that may arise from relying solely on RGB modality.
In this section, we introduce the 3D Dense Persons (DP3D), a large-scale cloth-changing ReID dataset that provides densely annotated 2D-3D correspondences and a corresponding 3D mesh for each person image, filling the gap in the field.
### Data Collection
The raw videos we collected have high resolutions and cover a time span of one year. We selected a total of 15 cameras, with 5 of them having a resolution of 4K, 2 having a resolution of 2K, and the remainder being set to a resolution of 1080P. The use of high-resolution cameras ensures that the recorded pedestrians are as clear as possible, which is advantageous for the ReID task under clothing change. The shooting scenes encompass various outdoor locations, such as street scenes, park landscapes, construction sites, and parking lots. All pedestrians were captured by at least 2 cameras, with the majority being captured by 3 or more. We adopted the Mask R-CNN (Chen et al., 2019) framework to detect the bounding box of each person after frame extraction.
### Annotation System
Due to the dramatic variations in people's clothing styles over the course of a year, we first identified the volunteers and conducted a manual inspection to avoid misidentification, while assigning a camera ID label, a person ID label, and a clothing ID label to each person image. Then, as shown in Figure 2, we annotated dense correspondences via a carefully designed pipeline. In the first stage, we ran the universal model of Graphonomy (Chen et al., 2019) with 20 part labels to segment the images, and then uniformly sampled 40 pixels across the entire human body region. We also utilized k-means clustering to obtain 5 to 10 centroid pixels for each part based on its size.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline Datasets & Scene & IDs & Image & Cam & Time & 3D View & Dense Corr. \\ \hline Celebrities (Celebrities, 2018) & - & 590 & 10,842 & - & - & ✗ & ✗ \\ LTCC (Wang et al., 2018) & In & 152 & 17,138 & 12 & 2 Months & ✗ & ✗ \\ PRCC (Wang et al., 2018) & In & 221 & 33,698 & 3 & - & ✗ & ✗ \\ COCAS (Wang et al., 2018) & In & 5,266 & 62,382 & 30 & - & ✗ & ✗ \\ VC-Clothes (Wang et al., 2018) & - & 512 & 19,080 & - & - & ✗ & ✗ \\ CSCC (Wang et al., 2018) & Out & 267 & 36,700 & 13 & 12 Months & ✗ & ✗ \\ NKUP (Wang et al., 2018) & InOut & 107 & 9,738 & 15 & 4 Months & ✗ & ✗ \\ NKUP+ (Wang et al., 2018) & InOut & 361 & 40,217 & 29 & 1 Month & ✗ & ✗ \\ DP3D (Ours) & Out & 413 & 39,100 & 15 & 12 Months & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparison of DP3D and existing cloth-changing ReID datasets (’In’: Indoor; ’Out’: Outdoor).
Figure 2. Examples of annotating person images in the DP3D dataset. (a) Cross-appearance images of the same person; (b) Generating pixels to be labeled (corresponding pixels are visualized with purple dots); (c) Annotating ground-truth corresponding 3D mesh vertices; (d) Fitting the SMPL model to the person images under the guidance of dense correspondences; (e) The projected 2D full-body images used for annotation.
Compared to DensePose (Beng et al., 2017), our sampling method avoids seams between body parts and ensures a sufficient number of sampling points for smaller parts. However, since people may wear loose clothes, we manually filtered out those sampling pixels that did not fall within the human body regions underneath the clothes. For each pair of images belonging to the same person, we additionally selected 10 corresponding pixels for consistency learning, which correspond to the same 10 mesh vertices. In the second stage, as shown in Figure 2 (e), we projected the SMPL mean template mesh from 6 predefined viewpoints to generate full-body images. When annotating a specific pixel, it was only necessary to choose the most suitable projected image, and its 2D coordinates were used to localize the corresponding 3D vertex. In cases where certain pixels were challenging to determine from the projected images, we directly annotated the correspondences on the 3D mesh surface through rotation. It is worth noting that we did not annotate in a part-by-part manner, but rather adopted a global approach using full-body projected images for annotation, which ensured accurate annotations at the junctions of body parts. In the last stage, to obtain accurate SMPL parameters, we employed a modified SMPLify-X (Zhu et al., 2018) to fit the SMPL model to the person images under the guidance of densely annotated correspondences.
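To make the pixel-sampling step concrete, the following is a minimal sketch of how the uniform samples and per-part k-means centroids described above could be generated; the function name, the cluster-count heuristic, and the use of scikit-learn are illustrative assumptions rather than the exact annotation tooling.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_annotation_pixels(part_mask, num_uniform=40, k_range=(5, 10)):
    """Sample pixels to annotate: uniform samples over the whole body region plus
    5-10 k-means centroid pixels per part, depending on the part size."""
    body = np.argwhere(part_mask > 0)                        # foreground pixels (y, x)
    idx = np.random.choice(len(body), size=min(num_uniform, len(body)), replace=False)
    samples = [tuple(p) for p in body[idx]]

    for part_id in np.unique(part_mask[part_mask > 0]):
        pts = np.argwhere(part_mask == part_id).astype(float)
        k = int(np.clip(len(pts) // 400, k_range[0], k_range[1]))   # size-based heuristic
        k = min(k, len(pts))
        centers = KMeans(n_clusters=k, n_init=10).fit(pts).cluster_centers_
        for c in centers:                                    # snap centroids to real pixels
            nearest = pts[np.argmin(np.linalg.norm(pts - c, axis=1))]
            samples.append(tuple(nearest.astype(int)))
    return samples
```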
### Statistics and Comparison
The proposed DP3D dataset is characterized by its diverse scenes, multiple perspectives, large number of individuals, and long time span. It comprises 39,100 person images belonging to 413 different persons, which were captured over the course of a year (during four distinct seasons). Depending on its resolution, each person image has approximately 80 to 125 annotated correspondences, where 10 correspondences have mesh vertices shared among all images of the same person. We divided the images into a training set and a testing set, with each set containing approximately equal numbers of identities. For same-appearance images of a specific person, we randomly select one image per viewpoint to construct the query set, while the remaining images in the testing set form the gallery set. We present in Table 1 a comparison between DP3D and existing cloth-changing ReID datasets.
## 4. Methodology
In this section, we first provide an overview of our proposed framework in Section 4.1. Next, in Sections 4.2 and 4.3, we elaborate on the learning scheme of continuous 2D-3D correspondences and the design principles of the cross-modality fusion module, respectively. Subsequently, we provide a comprehensive description of the training losses in Section 4.4.
### Overview
As shown in Figure 3 (a), person images are input separately into the ResNet-50 (He et al., 2016) backbone and CNN embedding layers to extract global RGB features and continuous surface embeddings. For each foreground pixel, CSCL maps it to a continuous embedding space of the SMPL mesh surface under the supervision of geodesic distances.
Figure 3. The architecture of the CSCL framework. (a) Our framework learns pixel-wise and continuous 2D-3D correspondences, which enables the extraction of fine-grained shape features. Cloth-agnostic shape knowledge is then complemented for global RGB features via cross-modality fusion; (b) Consistency learning between cross-view corresponding pixels.
Subsequently, a shape extraction network with a ResNet-50 architecture is further employed to extract fine-grained shape features from the learned surface embeddings, while simultaneously mapping them to the same size as global RGB features. Following that, we adaptively integrate shape features with global RGB features via an improved cross-modality fusion module, where a novel Latent Convolutional Projection (LCP) layer is designed to perform feature projection. Cross-attention mechanism is then applied to aggregate features from the two distinct modalities, which are then added to the original features. After the fusion, we conduct Global Average Pooling (GAP), followed by two separate fully-connected classifiers, to obtain the final global RGB features and shape features. We also introduce a learnable class token for each of the two modalities, which exhibits strong cross-modality compatibility and also contributes to the ID loss. In the inference stage, the two class tokens are concatenated with global RGB features and shape features to construct the final identity feature.
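A schematic, PyTorch-style sketch of this forward pass is given below; it only mirrors the data flow described in this subsection, and the module interfaces, channel width, and names are illustrative assumptions rather than the exact implementation.

```python
import torch.nn as nn

class CSCLForward(nn.Module):
    """Schematic data flow: RGB backbone + surface-embedding branch, shape
    extraction, cross-modality fusion, GAP, and two identity classifiers."""
    def __init__(self, backbone, embed_layers, shape_net, fusion, num_ids, c=2048):
        super().__init__()
        self.backbone = backbone          # ResNet-50 -> global RGB feature map
        self.embed_layers = embed_layers  # CNN -> continuous surface embeddings E
        self.shape_net = shape_net        # ResNet-50 on E -> fine-grained shape features
        self.fusion = fusion              # cross-modality fusion with LCP layers
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.cls_rgb, self.cls_shape = nn.Linear(c, num_ids), nn.Linear(c, num_ids)

    def forward(self, img):
        f_rgb = self.backbone(img)                       # (B, c, h, w)
        emb = self.embed_layers(img)                     # (B, D, H, W)
        f_shape = self.shape_net(emb)                    # (B, c, h, w)
        f_rgb, f_shape, tok_rgb, tok_shape = self.fusion(f_rgb, f_shape)
        f_rgb = self.gap(f_rgb).flatten(1)               # final global RGB feature
        f_shape = self.gap(f_shape).flatten(1)           # final shape feature
        logits = (self.cls_rgb(f_rgb), self.cls_shape(f_shape))
        return logits, f_rgb, f_shape, tok_rgb, tok_shape
```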
### Establishing Continuous Correspondences
Considering the huge domain gap between 2D person images and the 3D space perceived by human eyes, we believe that establishing continuous correspondences between image pixels and the entire 3D human body is of substantial importance; it bridges the gap between the 2D and 3D shape space and therefore benefits the understanding of global body shape.
Given a person image \(I\in\mathbb{R}^{H\times W\times 3}\) of height \(H\) and width \(W\), we first extract the segmentation mask \(M\) of the foreground person. Then, the CNN embedding layers map the person image into continuous surface embeddings \(E\in\mathbb{R}^{H\times W\times D}\), while preserving the spatial resolution of the image. For pixels within the foreground mask \(M\), we employ geodesic distances on the 3D surface to supervise the learning of surface embeddings. More concretely, we scale the cross-entropy loss of pixel-to-vertex classification on the mesh surface using geodesic distances. This constraint is reasonable as it quantifies the deviation of vertex prediction on the 3D surface. Furthermore, as illustrated in Figure 3 (b), we also conduct consistency learning for corresponding pixels in images that belong to the same person. Suppose we have two distinct images of the same person, denoted as \(I_{1}\), \(I_{2}\), where foreground pixels \(p_{1}\) and \(p_{2}\) belong to image \(I_{1}\), and pixel \(q\) belongs to image \(I_{2}\). Both \(p_{1}\) and \(q\) correspond to the same vertex \(v_{1}\) on the mesh surface, while \(p_{2}\) corresponds to vertex \(v_{2}\). We first compute the cosine distance in the embedding space to measure the similarity between \(p_{1}\) and \(q\):
\[d(p_{1},q)=1-cos(E_{1}(p_{1}),E_{2}(q)) \tag{1}\]
where \(E_{1}\) and \(E_{2}\) denote surface embeddings of images \(I_{1}\) and \(I_{2}\). By minimizing the cosine distance \(d(p_{1},q)\), the embedding vectors of two corresponding pixels are brought closer. However, during training, only considering the consistency of corresponding pixels may lead to all embeddings mapping to similar values. Therefore, for different pixels \(p_{1}\) and \(p_{2}\) in the same person image, we keep their relative affinity by enforcing embedding distances to follow geodesic distances, i.e. minimizing \(|d(p_{1},p_{2})-s(g(v_{1},v_{2}))|\), where \(g(\cdot,\cdot)\) calculates the geodesic distance between two mesh vertices and \(s(\cdot)\) scales it to match the range of the cosine distance \(d(\cdot,\cdot)\).
Establishing 2D-3D correspondences allows for learning the continuous shape distributions on the 3D surface at the pixel level, i.e. \(Pr(o|I,p,p\in M)\), where I denotes the person image, and M denotes the foreground mask. To further extract fine-grained shape features, we feed the learned embeddings into the shape extraction network with a ResNet-50 architecture, while mapping them to the same size as global RGB features. Note that the extracted shape features are insensitive to clothing appearance as texture features are already filtered out in the correspondence learning process.
### Cross-Modality Feature Fusion
To adaptively integrate the shape features extracted from the established continuous correspondences with global RGB features, a cross-modality fusion module is designed. As discussed in CVT (Wang et al., 2017), convolutional layers are renowned for their remarkable ability to capture intricate local spatial token structures, which allows the removal of positional embeddings from the transformer (Wang et al., 2018) framework. However, the utilization of fixed-size convolutional kernels hampers the effectiveness of capturing global positional correlations between non-adjacent tokens. To mitigate this issue, we propose a novel Latent Convolutional Projection (LCP) layer. It adds the same latent embedding to each token in the token map, which is the latent vector of a pretrained auto-encoder designed to reconstruct the token map. During the training of CSCL, only the encoder of the auto-encoder is preserved and fixed to ensure the universal nature of the latent embedding, whereas the decoder is disregarded. This design not only greatly enhances the correlation and sharing among different tokens, but also enables better adaptation to images with diverse backgrounds. The projection of an LCP layer can be formulated as follows:
\[Q/K/V=Flatten(Conv2d(Reshape2D(F)+l)) \tag{2}\]
where \(Q/K/V\) represents the projected queries, keys, and values, \(F\) is the input token map, \(l\) represents the latent embedding, and \(Reshape2D\) denotes the operation to reshape the feature map \(F\) to a 2D token map. After separately passing global RGB features \(F^{g}\in\mathbb{R}^{h\times w\times c}\) and shape features \(F^{s}\in\mathbb{R}^{h\times w\times c}\) through two distinct LCP layers, the cross-attention mechanism is applied to adaptively integrate features from different modalities. We first take global RGB features as queries and shape features as keys/values, reshape the fused feature to match the size of \(F^{g}\), and finally add it to \(F^{g}\):
\[F^{g}=F^{g}+Reshape3D(MHA(Q_{g},K_{s},V_{s})) \tag{3}\]
where \(Reshape3D\) denotes the operation of reshaping a 2D token map to match the size of \(F^{g}\), and MHA represents the multi-head attention. We also take shape features as queries and global RGB features as keys/values for identity modeling of shape features.
\[F^{s}=F^{s}+Reshape3D(MHA(Q_{s},K_{g},V_{g})) \tag{4}\]
In other words, we enable bidirectional access between global RGB features and shape features, which allows the model not only to complement fine-grained cloth-agnostic shape knowledge for global RGB features \(F^{g}\), but also to integrate essential identity-related characteristics into shape features \(F^{s}\) to assist identity modeling. Additionally, we introduce learnable class tokens for each of the two modalities, which are also utilized to compute the ID loss.
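The following PyTorch sketch illustrates Eqs. (2)-(4): two LCP layers that add a fixed latent embedding to every token before a convolutional projection, followed by bidirectional cross-attention with residual connections. The depth-wise convolution, the head count, and the way the frozen latent vector is supplied are illustrative assumptions.

```python
import torch.nn as nn

class LCPLayer(nn.Module):
    """Latent Convolutional Projection (Eq. 2): add the same latent embedding to
    every token of the 2D token map, convolve, and flatten back to a token sequence."""
    def __init__(self, dim, latent):            # `latent`: frozen encoder output, shape (dim,)
        super().__init__()
        self.register_buffer("latent", latent)   # kept fixed during training
        self.conv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)

    def forward(self, feat):                     # feat: (B, dim, h, w)
        x = feat + self.latent.view(1, -1, 1, 1)
        return self.conv(x).flatten(2).transpose(1, 2)    # (B, h*w, dim) tokens

class CrossModalFusion(nn.Module):
    """Bidirectional cross-attention between RGB and shape features (Eqs. 3-4)."""
    def __init__(self, dim, latent, heads=8):
        super().__init__()
        self.lcp_g, self.lcp_s = LCPLayer(dim, latent), LCPLayer(dim, latent)
        self.mha_g = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mha_s = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, f_g, f_s):                 # both (B, dim, h, w)
        b, c, h, w = f_g.shape
        t_g, t_s = self.lcp_g(f_g), self.lcp_s(f_s)
        a_g, _ = self.mha_g(t_g, t_s, t_s)       # RGB queries, shape keys/values
        a_s, _ = self.mha_s(t_s, t_g, t_g)       # shape queries, RGB keys/values
        f_g = f_g + a_g.transpose(1, 2).reshape(b, c, h, w)
        f_s = f_s + a_s.transpose(1, 2).reshape(b, c, h, w)
        return f_g, f_s
```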
### Loss Function
**CSE Losses.** As discussed in Section 4.2, to mask out the background pixels, the foreground silhouette for each person image is retrieved, and a binary cross-entropy loss \(\mathcal{L}_{sil}\) is employed to penalize unsatisfactory silhouette predictions. Furthermore, we employ geodesic distances on the mesh surface to scale the per-pixel vertex classification loss, which penalizes the misclassified pixels based on the degree of deviation on the surface. The geodesic loss can be formulated as follows:
\[\mathcal{L}_{geo}=-\frac{1}{N}\sum_{p\in I}g(v_{p},\hat{v_{p}})\cdot log(p( \hat{v_{p}})) \tag{5}\]
where \(N\) indicates the number of pixels with ground-truth annotations in image \(I\), \(v_{p}\) and \(\hat{v_{p}}\) represent the ground-truth and predicted mesh vertices corresponding to pixel p, and \(g(\cdot,\cdot)\) calculates geodesic distances between two mesh vertices. For consistency learning of continuous surface embeddings, we design the following consistency loss \(\mathcal{L}_{cst}\):
\[\mathcal{L}_{cst} =\frac{1}{N_{1}}\sum_{p\in I_{1},q\in I_{2}}log(1+exp(d(p,q)))\] \[+\frac{1}{N_{2}}\sum_{p_{1},p_{2}\in I_{1}}log(1+exp(|d(p_{1},p_{2})-s(g(v_{1},v_{2}))|)) \tag{6}\]
where \(N_{1}\) and \(N_{2}\) indicate the number of annotated pairs, \(p\) and \(q\) are corresponding pixels in cross-view images, \(p_{1}\) and \(p_{2}\) stand for different pixels in the same image, \(d(\cdot,\cdot)\) and \(g(\cdot,\cdot)\) respectively denote the cosine distance in the embedding space and the geodesic distance on the surface, and \(s(\cdot)\) represents the scale function. The first term of \(\mathcal{L}_{cst}\) ensures consistency between embeddings of cross-view corresponding pixels, while the second term enforces embedding distances to follow geodesic distances for different pixels in the same image, thus pushing apart their embeddings, and avoiding the degradation cases that may occur during training.
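A compact PyTorch transcription of Eqs. (5) and (6) is sketched below, assuming per-pixel vertex logits for the annotated pixels and a precomputed geodesic distance matrix on the mesh; tensor shapes and the scaling function are illustrative.

```python
import torch
import torch.nn.functional as F

def geodesic_loss(logits, gt_vertex, geo_dist):
    """Eq. (5): vertex-classification term scaled by the geodesic deviation.
    logits: (N, V) vertex scores for N annotated pixels; gt_vertex: (N,);
    geo_dist: (V, V) geodesic distance matrix on the SMPL mesh."""
    log_p = F.log_softmax(logits, dim=1)
    pred = logits.argmax(dim=1)                       # predicted vertex per pixel
    g = geo_dist[gt_vertex, pred]                     # deviation on the 3D surface
    return -(g * log_p.gather(1, pred[:, None]).squeeze(1)).mean()

def consistency_loss(e_p, e_q, e_p1, e_p2, geo_12, scale):
    """Eq. (6): cross-view consistency plus the relative-affinity term.
    e_p, e_q  : (N1, D) embeddings of corresponding pixels in two images;
    e_p1, e_p2: (N2, D) embeddings of pixel pairs within one image;
    geo_12    : (N2,) geodesic distances between their mesh vertices;
    scale     : maps geodesic distances to the range of cosine distances."""
    d_pq = 1 - F.cosine_similarity(e_p, e_q, dim=1)   # Eq. (1)
    term1 = torch.log1p(torch.exp(d_pq)).mean()
    d_12 = 1 - F.cosine_similarity(e_p1, e_p2, dim=1)
    term2 = torch.log1p(torch.exp((d_12 - scale(geo_12)).abs())).mean()
    return term1 + term2
```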
**ReID Losses.** The ReID losses employed in our framework consist of a cross-entropy loss (ID loss) for classification and a triplet loss (Krizhevsky et al., 2015) for similarity learning in the feature space. The final global RGB feature \(f^{g}\), shape feature \(f^{s}\), and two class tokens all contribute to the ID loss:
\[\mathcal{L}_{id}=\mathcal{L}_{id}^{g}+\mathcal{L}_{id}^{s}+\mathcal{L}_{id}^{ cls} \tag{7}\]
where \(\mathcal{L}_{id}^{cls}\) represents the summation of ID losses of the two class tokens. We introduce separate triplet losses for global RGB features and shape features to enhance their discriminative capability, which are combined to obtain the final triplet loss:
\[\mathcal{L}_{tri}=\mathcal{L}_{tri}^{g}+\mathcal{L}_{tri}^{s} \tag{8}\]
**Final Loss.** The overall objective function of our proposed Continuous Surface Correspondence Learning (CSCL) framework comprises the aforementioned CSE losses and ReID losses, and can be formulated as follows:
\[\mathcal{L}=\mathcal{L}_{sil}+\lambda_{1}(\mathcal{L}_{geo}+\alpha\mathcal{L }_{cst})+\lambda_{2}\mathcal{L}_{id}+\lambda_{3}\mathcal{L}_{tri} \tag{9}\]
where \(\lambda_{1}\), \(\alpha\), \(\lambda_{2}\) and \(\lambda_{3}\) are weights for balancing each term.
## 5. Experiments
### Datasets and Protocols
We conduct experiments on four existing cloth-changing ReID datasets (i.e. LTCC (Yang et al., 2017), PRCC (Yang et al., 2018), VC-Clothes (Yang et al., 2018) and DP3D). Furthermore, three different settings are involved in our experiment: (1) **Standard Setting**: the test set includes both same-appearance and cross-appearance samples; (2) **Cloth-Changing Setting**: the test set only includes cross-appearance samples; (3) **Same-Clothes Setting**: the test set only includes same-appearance samples. For LTCC and DP3D, we provide experimental results in the standard setting and cloth-changing setting, while for PRCC and VC-Clothes, results in the same-clothes setting and cloth-changing setting are reported. We additionally validate our method on two general ReID datasets (i.e. Market-1501 (Zhu et al., 2017) and DukeMTMC (Yang et al., 2018)), following their evaluation metrics. For evaluation, we adopt the mean average precision (mAP) and rank-1 accuracy to evaluate the effectiveness of ReID methods. We also utilize Geodesic Point Similarity (GPS) (Garnik et al., 2017) scores to measure the quality of the established correspondences:
\[GPS_{I}=\frac{1}{N}\sum_{p\in I}exp\frac{-g(v_{p},\hat{v_{p}})^{2}}{2\sigma^{ 2}} \tag{10}\]
where I indicates a person image, N is the number of ground-truth correspondences, \(v_{p}\) and \(\hat{v_{p}}\) denote the ground-truth vertex and the estimated vertex, \(g(\cdot,\cdot)\) represents geodesic distances, and \(\sigma\) is a normalizing factor set to 0.255. When GPS scores exceed a certain threshold, the correspondences are considered as correct. Therefore, following the metric of BodyMap (Krizhevsky et al., 2015), we report Average Precision (AP) and Average Recall (AR) based on GPS scores.
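For reference, the per-image GPS score of Eq. (10) and a threshold-based precision summary can be computed as in the sketch below; the set of thresholds is an illustrative assumption (the exact thresholds follow the BodyMap protocol and are not reproduced here).

```python
import numpy as np

def gps_score(geo_dev, sigma=0.255):
    """Eq. (10): Geodesic Point Similarity of one image, given the geodesic
    deviations g(v_p, v_hat_p) of its N annotated pixels."""
    geo_dev = np.asarray(geo_dev, dtype=float)
    return np.exp(-geo_dev**2 / (2.0 * sigma**2)).mean()

def precision_at_thresholds(gps_per_image, thresholds):
    """A correspondence set is counted correct when its GPS exceeds a threshold;
    averaging over a range of thresholds gives an AP-style summary."""
    gps = np.asarray(gps_per_image, dtype=float)
    return float(np.mean([(gps >= t).mean() for t in thresholds]))
```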
### Implementation Details
For datasets without ground-truth dense correspondences, we fit the SMPL body model to the person images under the guidance of OpenPose (Beng et al., 2017) keypoint detections and foreground silhouettes. For each SMPL mesh vertex, there is a reprojected point on the 2D image plane, and the pixel closest to this point is utilized to establish the correspondence. If different vertices correspond to the same pixel, only the vertex closest to the camera is recorded. Based on the image resolution, we uniformly sampled 80 to 125 pseudo correspondences within the entire body region. All input images are resized to 256\(\times\)128. A skip-connecting UNet (Zhu et al., 2017) architecture pretrained on the DensePose-COCO dataset (Garnik et al., 2017) is employed as the embedding layers, while two distinct ResNet-50 backbones pretrained on ImageNet (Deng et al., 2017) with the last downsampling layer discarded are employed to extract global RGB features and shape features, respectively. In the training stage, the Adam optimizer (Kingma et al., 2014) was utilized for optimization. We first trained the embedding layers for 50 epochs with a learning rate of \(5\times 10^{-5}\), and then fixed them to train the rest of the network for 100 epochs with a linear warm-up phase, where the learning rate was increased from \(1\times 10^{-5}\) to \(1\times 10^{-4}\) in the first 5 epochs. Finally, we trained the network in an end-to-end manner for 40 epochs with a fixed learning rate of \(1\times 10^{-5}\). The embedding dimension \(D\) is set to 64. The values of \(\lambda_{1}\), \(\alpha\), \(\lambda_{2}\), and \(\lambda_{3}\) in Eq. 9 are set to 0.3, 5.0, 1.0, and 0.8, respectively, and the margin parameter for the triplet loss is set to 0.3.
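The three-phase schedule described above can be set up roughly as follows; this is only a sketch of the stated hyper-parameters, and the scheduler used for the linear warm-up is an assumption.

```python
import torch

def build_training_phases(embed_params, rest_params, all_params):
    """Phase 1: embedding layers only; Phase 2: rest of the network with warm-up;
    Phase 3: end-to-end fine-tuning (epoch counts in the text: 50/100/40)."""
    opt1 = torch.optim.Adam(embed_params, lr=5e-5)                 # phase 1
    opt2 = torch.optim.Adam(rest_params, lr=1e-4)                  # phase 2
    warmup = torch.optim.lr_scheduler.LinearLR(                    # 1e-5 -> 1e-4 over 5 epochs
        opt2, start_factor=0.1, end_factor=1.0, total_iters=5)
    opt3 = torch.optim.Adam(all_params, lr=1e-5)                   # phase 3
    return opt1, (opt2, warmup), opt3
```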
### Comparison with State-of-the-arts
As shown in Table 2, we compare our proposed CSCL with seven SOTA cloth-changing methods (i.e. SE+CESD (Shi et al., 2017), FSAM (Shi et al., 2018), 3DSL (Chen et al., 2018), UCAD (Wang et al., 2018), MVSE (Chen et al., 2018), M2NET (Wang et al., 2018) and CAL (Chen et al., 2018)) on LTCC, PRCC, VC-Clothes, and DP3D. To assess the feasibility of CSCL in cases without clothing change, we also choose four SOTA short-term methods (i.e. PCB (Shi et al., 2017), HACNN (Wang et al., 2018), MGN (Wang et al., 2018), and TransReID (Shi et al., 2017)) as competitors. The comparative results on Market-1501 and DukeMTMC are presented in Table 3.
Based on the results in Table 2 and Table 3, we have the following key observations: (1) In the cloth-changing setting, CSCL exceeds the other competitors on PRCC, VC-Clothes, and DP3D by a large margin, achieving a rank-1 improvement of 4.9%/3.5%/9.6% and a mAP improvement of 6.8%/3.5%/10.9%. This is attributed to the powerful shape representation capability of the continuous correspondences. However, there is still a limitation to CSCL. Due to the poor quality of person images, the generated pseudo correspondences on LTCC are not reliable enough. Despite this limitation, CSCL still achieves results comparable to the SOTA method MVSE on LTCC, indicating a certain tolerance for vertex position errors. (2) CSCL generalizes well to the general ReID datasets where appearance features dominate, achieving comparable performance with the SOTA short-term methods. This is because the distribution of global RGB features is well preserved in the fusion stage.
### Ablation Studies

As shown in Table 4, introducing the continuous 2D-3D correspondences and the shape branch brings a rank-1/mAP improvement of 11.5%/10.5% on DP3D. This demonstrates that establishing pixel-wise and continuous correspondences complements rich and essential identity-related shape features for global RGB features. However, there is no significant improvement when directly downsampling the learned correspondences without a shape extraction network, and we will further analyze this issue in Section 5.5. Moreover, the cross-modality fusion module also brings significant improvement, which indicates that features of the two modalities become more compatible via cross-modality fusion. Furthermore, by comparing different feature projection methods for generating Q/K/V, we observe that LCP shows a certain degree of improvement over linear projection and convolutional projection. This is attributed to the inclusion of latent embeddings, which greatly facilitates the sharing among tokens.
Additionally, we evaluate the quality of established correspondences on different ReID datasets in Table 5. By combining the results from Table 2 and Table 5, we can clearly observe a robust positive correlation between the quality of correspondences and the magnitude of performance improvement.
**Influence of consistency loss.** As shown in Table 5, the removal of consistency loss \(\mathcal{L}_{cst}\) from the correspondence learning process leads to a 5% decrease in vertex classification accuracy on DP3D, which indicates that performing consistency learning is beneficial for establishing reliable correspondences. From Table 2, we also observe that removing \(\mathcal{L}_{cst}\) results in a decline in the overall performance of ReID, verifying the importance of consistency learning for CSE.
**Impact of using different features for inference.** During inference, we select the model corresponding to Model 6 in Table 4 to verify the effectiveness of different features. As shown in Table 6, while relying solely on shape features is not reliable enough, the shape features can enhance the performance of other features. Concatenating global RGB features, shape features, and two class tokens results in the best performance at inference time.
### Further Analysis
**Visualization of Continuous Surface Embeddings.** We employ PCA to reduce the dimension of continuous surface embeddings from \(H\times W\times D\) to \(H\times W\times 3\), where \(H\) and \(W\) denote the height and width of person images, and \(D\) represents the embedding dimension. Visualization results on DP3D are presented in Figure 4. Since the color differences reflect the feature distances in the embedding space, we can clearly observe that the established 2D-3D correspondences between image pixels and the entire body surface are relatively smooth. Different from discretized UV mappings such as DensePose, the smooth and continuous 2D-3D correspondences can provide richer and more reliable global knowledge of human shape for cloth-changing ReID.
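The PCA-based visualization can be reproduced along the following lines; handling the background via the foreground mask and the min-max normalization are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA

def embeddings_to_rgb(emb, mask):
    """Project (H, W, D) surface embeddings to 3 channels with PCA for display;
    background pixels (mask == 0) are left black."""
    h, w, d = emb.shape
    flat, fg = emb.reshape(-1, d), mask.reshape(-1) > 0
    comps = PCA(n_components=3).fit_transform(flat[fg])
    comps = (comps - comps.min(0)) / (comps.max(0) - comps.min(0) + 1e-8)
    rgb = np.zeros((h * w, 3))
    rgb[fg] = comps
    return rgb.reshape(h, w, 3)
```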
**Identity modeling for shape features.** Multi-modal auxiliary information itself is not sufficiently discriminative for the ReID task, making it necessary to conduct identity modeling. However, some existing CC-ReID methods, such as 3DSL, directly regulate multi-modal auxiliary features via ReID losses, which disrupts the distribution of the shape space. As shown in Table 4, directly using downsampling operations without a proper shape extraction network (Model 3 \(\rightarrow\) Model 2) leads to significant performance degradation. We believe that multi-modal auxiliary features should first be mapped to an intermediary feature space before identity modeling to alleviate the incompatibility between feature spaces of different tasks, which is beneficial for the fusion of shape and global RGB features.
**Future works.** Current 3D shape-based ReID methods suffer from a huge domain gap between the RGB image space and the 3D shape space. Our work essentially aims at bridging the gap between these two spaces. Therefore, future works can consider transforming the surface embeddings into different forms of 3D shape features and assess their potential benefits for CC-ReID.
## 6. Conclusion
We have proposed a new shape embedding paradigm that establishes pixel-wise and continuous surface correspondences to mine fine-grained shape features for cloth-changing ReID. Moreover, an optimized cross-modality fusion module is designed to adaptively integrate shape features with global RGB features. To facilitate the research, we have constructed 3D Dense Persons (DP3D), which is the first cloth-changing ReID dataset with densely annotated 2D-3D correspondences and corresponding 3D meshes. Experiments on both cloth-changing and cloth-consistent ReID benchmarks demonstrate the robustness and superiority of our method.
###### Acknowledgements.
This work was supported in part by the Research Project of ZJU-League Research & Development Center, Zhejiang Lab under Grant 2019KD0AB01.
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Features} & \multicolumn{2}{c|}{PRCC} & \multicolumn{2}{c}{DP3D} \\ \cline{2-5} & Rank-1 & mAP & Rank-1 & mAP \\ \hline RGB & 55.9 & 57.7 & 36.8 & 26.3 \\ CLS & 61.1 & 61.8 & 37.9 & 27.4 \\ Shape & 42.5 & 45.9 & 23.9 & 16.8 \\ RGB + Shape & 61.4 & 62.6 & 37.4 & 27.2 \\ CLS + Shape & 63.5 & 64.2 & 38.7 & 28.4 \\ RGB + CLS + Shape & **64.2** & **64.5** & **39.2** & **28.7** \\ \hline \hline \end{tabular}
\end{table}
Table 6. Ablation studies of deploying different features for inference. CLS denotes the two learnable class tokens.
Figure 4. PCA visualization results of the learned continuous surface embeddings. The person images in each row are cross-appearance images of the same person in DP3D. We reduce the channel dimension of the learned continuous surface embeddings from 64 to 3 for visualization. |
2308.03520 | Water-Wave Vortices and Skyrmions | Topological wave structures -- phase vortices, skyrmions, merons, etc. -- are
attracting enormous attention in a variety of quantum and classical wave
fields. Surprisingly, these structures have never been properly explored in the
most obvious example of classical waves: water-surface (gravity-capillary)
waves. Here we fill this gap and describe: (i) water-wave vortices of different
orders carrying quantized angular momentum with orbital and spin contributions,
(ii) skyrmion lattices formed by the instantaneous displacements of the
water-surface particles in wave interference, (iii) meron (half-skyrmion)
lattices formed by the spin density vectors, as well as (iv) spatiotemporal
water-wave vortices and skyrmions. We show that all these topological entities
can be readily generated in linear water-wave interference experiments. Our
findings can find applications in microfluidics and show that water waves can
be employed as an attainable playground for emulating universal topological
wave phenomena. | Daria A. Smirnova, Franco Nori, Konstantin Y. Bliokh | 2023-08-07T12:14:19Z | http://arxiv.org/abs/2308.03520v2 | # Water-Wave Vortices and Skyrmions
###### Abstract
Topological wave structures - phase vortices, skyrmions, merons, etc. - are attracting enormous attention in a variety of quantum and classical wave fields. Surprisingly, these structures have never been properly explored in the most obvious example of classical waves: water-surface (e.g., gravity) waves. Here we fill this gap and describe: (i) water-wave vortices of different orders carrying quantized angular momentum with orbital and spin contributions, (ii) skyrmion lattices formed by the instantaneous displacements of the water-surface particles in wave interference, (iii) meron (half-skyrmion) lattices formed by the spin density vectors, as well as (iv) spatiotemporal water-wave vortices and skyrmions. We show that all these topological entities can be readily generated in linear water-wave interference experiments. Our findings can find applications in microfluidics and show that water waves can be employed as an attainable playground for emulating universal topological wave phenomena.
_Introduction.--_ Wave vortices are universal physical entities with nontrivial topological and dynamical properties: quantized phase increments around point phase singularities and quantum-like angular momentum. Examples of wave vortices and phase singularities have been known since the 19th century; they have been observed and explored in tidal [1], quantum-fluid [2; 3], optical [4; 5; 6], sound [7; 8; 9], elastic [10], surface-plasmon [11], exciton-polariton [12], quantum electron [13], neutron [14], and atom [15] waves.
Strikingly, wave vortices have not been properly studied in the most obvious example of classical waves: water-surface (e.g., gravity) waves. Only a recent series of experiments [16; 17; 18; 19] described the generation of a square lattice of alternating wave vortices in the interference of orthogonal standing water waves.
However, the theoretical description of these experiments lacks an identification of the generated lattice with _wave vortices_. It was indicated that the hydrodynamical vorticity appears due to nonlinearity [16; 17], and that these vortices are closely related to the Stokes drift and angular momentum [18; 19], but no relation to the quantized topological and dynamical properties of wave vortices has been mentioned. Being focused on classical hydrodynamical aspects, these works missed the fact that wave vortices are _very different_ from the usual hydrodynamical vortices. Furthermore, only the simplest first-order vortices were produced, while, e.g., vortices of higher orders up to \(10^{3}\) have been generated for quantum electrons [20; 21].
In this work, we describe wave vortices of arbitrary order in gravity water waves. We show that these vortices have distinct topological and angular momentum properties already in the linear regime. Circularly symmetric water-wave vortices are eigenmodes of the _total angular momentum_ operator, including the spin and orbital parts. In the linear regime, gravity waves have _zero vorticity_ and cannot be associated with classical hydrodynamical vortices. Nonetheless, the quadratic _Stokes drift_, which is inevitably present in any water waves, produces slow orbital motion of water particles and nonzero nonlinear vorticity. Importantly, water particles experience two kinds of circular motions with different spatial and temporal scales: (i) local linear-amplitude-scale circular motion with the wave frequency in the linear regime and (ii) slow wavelength-scale circular motion due to the nonlinear Stokes drift. These two motions are responsible for the spin and orbital contributions to the quantized total angular momentum.
Moreover, we show that water waves have inherent _vector_ wave properties. The local Eulerian displacement of water particles has the same generic features as the 3D polarization of structured optical or acoustic wavefields [22; 23]. Therefore, following great recent progress in the generation of topological vector entities - _skyrmions_ [24] - in classical electromagnetic [25; 26; 27; 28; 29; 30; 31], sound [32; 33], and elastic [34] waves, here we describe _water-wave skyrmions_. We show that the interference of three plane gravity waves can generate a hexagonal lattice of: (i) wave vortices; (ii) skyrmions of the instantaneous water-particle displacements, and (iii) _merons_ (half-skyrmions) of the local spin density. This field configuration is just one step from the recent experiments with square lattices in two interfering standing waves [16; 17; 18; 19], and is quite feasible for experimental implementation.
Finally, following the enormous current interest in _spacetime_ structured waves [35; 36], in particular _spatiotemporal vortices_ [37; 38; 39; 40; 41], we show that, by detuning the frequency of one of the interfering waves, one can readily produce moving lattices of spatiotemporal water-wave vortices and _spatiotemporal skyrmions_.
Our results describe new structures in linear water waves, with remarkable topological and dynamical properties, and show that water waves offer a perfect classical
platform for emulating universal quantum and topological wave phenomena. Furthermore, the water-wave incarnations of these phenomena can find applications in micro- and acousto-fluidics [42; 43].
_Water-wave vortices.--_ We first consider monochromatic gravity waves on a deep-water surface. The 3D Eulerian displacement of the water particles on the \(z=0\) surface is \(\mathbf{\mathcal{R}}(\mathbf{r}_{2},t)=\mathrm{Re}[\mathbf{R}(\mathbf{r}_{2})e^{- i\omega t}]=(\mathcal{X},\mathcal{Y},\mathcal{Z})\), where \(\mathbf{R}=(X,Y,Z)\) is the complex displacement wavefield, \(\mathbf{r}_{2}=(x,y)\) and \(\omega\) is the frequency. Separating the vertical and in-plane components of 3-vectors as \(\mathbf{a}\equiv(a_{x},a_{y},a_{z})=(\mathbf{a}_{2},a_{z})\), the wave equations of motion can be written as [19]
\[\omega^{2}\mathbf{R}_{2}=g\mathbf{\nabla}_{2}Z\,,\quad\omega^{2}Z=-g\mathbf{\nabla}_{ 2}\cdot\mathbf{R}_{2}\,. \tag{1}\]
Here \(g\) is the gravitational acceleration, and Eqs. (1) yield the dispersion relation \(\omega^{2}=kg\) (\(k\) is the wave number).
The vortex solutions of Eqs. (1) are obtained as a superposition of plane waves with wavevectors uniformly distributed along the \(k_{x}^{2}+k_{y}^{2}=k^{2}\) circle with the azimuthal phase increment \(2\pi\ell\), \(\ell\in\mathbb{Z}\), Fig. 1(b). Constructing the complex vertical displacement in this way, we obtain:
\[Z=\frac{A}{2\pi}\int_{0}^{2\pi}e^{i\mathbf{k}\cdot\mathbf{r}+i\ell\phi}d\phi= AJ_{\ell}(kr)e^{i\ell\varphi}\,. \tag{2}\]
Here \(A\) is the wave amplitude, \(J_{\ell}\) is the Bessel function of the first kind, \(\phi\) is the azimuthal angle in the \((k_{x},k_{y})\) plane, whereas \((r,\varphi)\) are the polar coordinates in the \((x,y)\)-plane.
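Equation (2) can be checked numerically by discretizing the plane-wave superposition; the short sketch below (using SciPy's Bessel functions) does this at an arbitrary surface point. The integral reproduces the Bessel vortex profile up to a constant phase factor \(i^{\ell}\), which can be absorbed into the complex amplitude; the sketch includes this factor explicitly.

```python
import numpy as np
from scipy.special import jv

k, ell, A = 2 * np.pi, 2, 1.0                 # wave number, vortex order, amplitude
x, y = 0.37, -0.81                            # arbitrary point on the surface
r, varphi = np.hypot(x, y), np.arctan2(y, x)

phi = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)   # wavevector azimuths
plane_waves = np.exp(1j * (k * (x * np.cos(phi) + y * np.sin(phi)) + ell * phi))
Z_superposition = A * plane_waves.mean()      # (A / 2*pi) * integral over phi

Z_bessel = A * (1j**ell) * jv(ell, k * r) * np.exp(1j * ell * varphi)
print(np.allclose(Z_superposition, Z_bessel))  # True
```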
Equation (2) describes 2D scalar cylindrical Bessel waves, Fig. 1(c). However, water waves have a vectorial nature, and the other two components of the wavefield can be found from the first Eq. (1). It is convenient to write these in the basis of 'circular polarizations' [44; 45]:
\[R^{\pm}\equiv\frac{X\mp iY}{\sqrt{2}}=\pm\frac{A}{\sqrt{2}}J_{\ell\mp 1}(kr)\,e^{ i(\ell\mp 1)\varphi}. \tag{3}\]
In this basis, the \(z\)-component of the spin-1 operator, universal for classical vector waves, reads \(\hat{S}_{z}=\mathrm{diag}(1,-1,0)\), while the \(z\)-component of the orbital angular momentum (OAM) operator is \(\hat{L}_{z}=-i\partial_{\varphi}\)[5]. Introducing the 'wavefunction' \(|\psi\rangle=(R^{+},R^{-},Z)\), one can see that the water-wave vortices (2) and (3), are _not_ the OAM eigenmodes because of the \(e^{i(\ell\mp 1)\varphi}\) factors in \(R^{\pm}\). However these are eigenmodes of the _total_ angular momentum with the quantized eigenvalue \(\ell\):
\[\hat{J}_{z}|\psi\rangle=(\hat{L}_{z}+\hat{S}_{z})|\psi\rangle=\ell\,|\psi \rangle\,. \tag{4}\]
Such behavior is a common feature of all cylindrical vector waves: optical [44; 46; 47], quantum [48], acoustic [49], and elastic [45]. It can be interpreted as a signature of the inherent spin-orbit coupling.
Figure 1(a) shows instantaneous water surfaces \(\mathcal{Z}(\mathbf{r}_{2},0)\) and water-particle trajectories \(\mathbf{\mathcal{R}}(\mathbf{r}_{2},t)\) for vortices with different \(\ell\). The water-particle trajectories are 3D ellipses, entirely similar to the electric-field polarization in optical fields [23]. The normal to the ellipse and its ellipticity determine the local cycle-averaged angular momentum, i.e., _spin density_ in water waves [19; 50]: \(\mathbf{S}=(\rho\omega/2)\mathrm{Im}(\mathbf{R}^{*}\times\mathbf{R})\), where \(\rho\) is the water mass density. One can see that water-wave vortices are characterized by inhomogeneous polarization textures. In the vortex center \(r=0\), the polarization is purely vertical, \(|\psi\rangle\propto(0,0,1)\), for \(\ell=0\); it is purely circular, \(|\psi\rangle\propto(1,0,0)\) and \(|\psi\rangle\propto(0,1,0)\), for \(\ell=\pm 1\); and the vector wavefield vanishes, \(|\psi\rangle\propto(0,0,0)\), for \(|\ell|>1\).
Importantly, the water-wave vortices are _not_ the usual hydrodynamical vortices, which are formed by steady water motion with a nonzero circulation of the velocity \(\mathbf{\mathcal{V}}=\partial_{t}\mathbf{\mathcal{R}}\) and vorticity \(\mathbf{\nabla}\times\mathbf{\mathcal{V}}\neq\mathbf{0}\)[51]. In contrast, a superposition of linear monochromatic gravity waves form a field with zero vorticity: \(\mathbf{\nabla}\times\mathbf{V}=\mathbf{0}\), where \(\mathbf{V}=-i\omega\mathbf{R}\) is the complex velocity field. This follows from Eqs. (1) and the incompressibility equation \(\mathbf{\nabla}\cdot\mathbf{V}=0\). Wave vortices are _topological_ entities with _quantized phase singularities_ in the center. The 'topological charge' can be defined in two equivalent ways [52; 53]:
\[\frac{1}{2\pi}\oint\mathbf{\nabla}_{2}\mathrm{Arg}(Z)\cdot d\mathbf{r}_{2}=\frac{1 }{4\pi}\oint\mathbf{\nabla}_{2}\mathrm{Arg}(\mathbf{R}\cdot\mathbf{R})\cdot d \mathbf{r}_{2}=\ell\,, \tag{5}\]
where the contour integral is taken along a circuit enclosing the vortex center. These relations show that the center of the first-order \(|\ell|=1\) vortex can be considered as the first-order phase singularity in the scalar field \(Z(x,y)\) or the second-order polarization singularity (C-point of purely circular polarization) in the vector field \(\mathbf{R}(x,y)\) [52; 53; 54; 23]. Any perturbation breaking the cylindrical symmetry splits the second-order C-point into a pair of first-order C-points, with topologically-robust Möbius-strip orientations of the polarization ellipses around these points [55; 56; 23; 33; 53].
Remarkably, nonzero vorticity and circulation do appear in water-wave vortices, but only in the _nonlinear_ second-order correction to the linear wave solutions. Namely, water particles experience a slow _Stokes drift_, originating from the difference between the Lagrangian and Eulerian velocities, with the velocity [57; 19; 58]
\[\mathbf{U}=\frac{\omega}{2}\mathrm{Im}[\mathbf{R}^{*}\cdot(\mathbf{\nabla}_{2}) \mathbf{R}]\,. \tag{6}\]
Multiplied by the mass density, it corresponds to the cycle-averaged canonical wave _momentum density_ (sometimes called 'pseudomomentum') [19; 59; 60; 61; 62]: \(\mathbf{P}=\rho\mathbf{U}\).
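Given the complex displacement field on a grid, the Stokes-drift velocity of Eq. (6) (and hence the momentum density \(\mathbf{P}=\rho\mathbf{U}\)) can be evaluated with finite differences, as in the following sketch; the array layout and grid spacings are illustrative.

```python
import numpy as np

def stokes_drift(R, dx, dy, omega):
    """Eq. (6): U_j = (omega/2) Im[ sum_i R_i* d_j R_i ],  j in {x, y}.
    R: complex array of shape (3, Ny, Nx) holding (X, Y, Z) on an (x, y) grid."""
    dR_dy, dR_dx = np.gradient(R, dy, dx, axis=(1, 2))
    Ux = 0.5 * omega * np.imag(np.sum(np.conj(R) * dR_dx, axis=0))
    Uy = 0.5 * omega * np.imag(np.sum(np.conj(R) * dR_dy, axis=0))
    return Ux, Uy
```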
Figure 1(c) shows the azimuthal Stokes-drift flow in water-wave vortices. It determines the \(z\)-directed OAM density \(\mathbf{L}=\mathbf{r}_{2}\times\mathbf{P}\), \(L_{z}=(\rho\omega/2)\mathrm{Im}(\mathbf{R}^{*}\cdot\partial_{\varphi}\mathbf{R})\). The orbital Stokes drift is mostly localized near the first radial maximum of the Bessel function \(J_{\ell}(kr)\). Note that the local circular motion of water particles due to the 'circular polarization' (spin) and the global circulation due to the Stokes-drift (OAM) have very different scales, both in space and time. The typical radius of the spin motion is
the small linear-wave amplitude \(A\), while the typical radius of the orbital drift is the wavelength \(k^{-1}\gg A\). The angular frequency of the spin motion is \(\omega\), while for the orbital drift it can be estimated as \(U/r\sim\omega k^{2}A^{2}\ll\omega\). The spin and OAM densities in the vortices (2) and (3) satisfy the relation following from Eq. (4) [45; 49]:
\[J_{z}=L_{z}+S_{z}=\frac{\rho\omega}{2}\ell|\mathbf{R}|^{2}=2\frac{\ell}{\omega} T\,, \tag{7}\]
where \(T=\rho|\mathbf{V}|^{2}/4\) is the cycle-averaged kinetic energy density.
Thus, water-wave vortices are naturally described by a quantum-like formalism and possess nontrivial topological properties. Recent experiments [16; 17; 18; 19] generated square lattices of alternating first-order vortices with \(\ell=\pm 1\) by interfering orthogonal standing waves with the \(\pi/2\) phase difference. The orbital Stokes drift and circular polarization (spin) in the vortex centers were clearly observed in Refs. [18; 19], but quantized topological properties of these vortices have not been described. Higher-order water-wave vortices with \(|\ell|>1\) have never been observed, to the best of our knowledge. Such vortices can provide areas of unperturbed water surface surrounded by intense circular waves and orbital Stokes flows.
_Water-wave skyrmions and merons.--_ The 3D vector nature of water waves allows the generation of generic vector topological textures, such as skyrmions or merons [24; 25; 26; 27; 29; 30; 31; 32; 33; 34]. Such textures can be produced by interfering several plane waves with the same frequency and wavevectors \(\mathbf{k}_{j}=k(\cos\phi_{j},\sin\phi_{j},0)\), \(j=1,...,N\). The displacement field can be written as
\[\mathbf{R}=\sum_{j=1}^{N}\mathbf{R}_{0j}e^{i\mathbf{k}_{j}\cdot\mathbf{r}+i\Phi_{j}},\ \mathbf{R}_{0j}=A_{j}(i\cos\phi_{j},i\sin\phi_{j},1), \tag{8}\]
where \(A_{j}\) and \(\Phi_{j}\) are the real-valued amplitudes and phases of the interfering waves.
Consider, for example, the interference of \(N=3\) waves, equally distributed with \(\phi_{j}=2\pi(j-1)/N\), equal amplitudes \(A_{j}=A\), and vortex phases \(\Phi_{j}=\phi_{j}\), Fig. 2(c). These waves form a hexagonal periodic lattice with the displacement field \(\mathbf{R}\):
\[\begin{pmatrix}X\\ Y\\ Z\end{pmatrix}\propto A\begin{pmatrix}ie^{ikx}+ie^{-i\frac{kx}{2}}\sin\left( \frac{\sqrt{3}ky}{2}+\frac{\pi}{6}\right)\\ -\sqrt{3}e^{-i\frac{kx}{2}}\cos\left(\frac{\sqrt{3}ky}{2}+\frac{\pi}{6}\right) \\ e^{ikx}-2e^{-i\frac{kx}{2}}\sin\left(\frac{\sqrt{3}ky}{2}+\frac{\pi}{6}\right) \end{pmatrix}. \tag{9}\]
This field exhibits a number of nontrivial topological features. First, it contains a lattice of wave vortices with alternating topological charges \(\ell=\pm 1\), shown in Fig. 2(d). Such vortex lattices are well known in optics [63].
Figure 1: (a) Instantaneous water surfaces \(\mathcal{Z}(x,y,0)\) and Eulerian water-surface particle trajectories \(\mathbf{\mathcal{R}}(x,y,t)\) for circular water-wave vortices with different topological charges \(\ell\), Eqs. (2) and (3). The spin density \(\mathbf{S}\) is directed normally to the elliptical particle trajectories and quantifies the angular momentum of this elliptical motion. (b) The plane-wave spectrum of a circular water-wave vortex with color-coded phases for \(\ell=1\). (c) The complex vertical-displacement field \(Z(x,y)\) for vortices from panel (a), with the phases and amplitudes coded by the colors and brightness, respectively. The white arrows indicate the second-order Stokes drift \(\mathbf{U}\), Eq. (6), characterizing the wave momentum density.
Second, Fig. 2(a) shows the instantaneous water surface \(\mathcal{Z}(\mathbf{r}_{2},0)\) and the surface-particle displacements \(\mathbf{\mathcal{R}}(\mathbf{r}_{2},0)\) for the field (9). The displacements in a hexagonal unit cell of the lattice contain all possible directions and can be mapped onto a unit sphere. This is a signature of a skyrmion, which can be characterized by the topological number
\[Q=\frac{1}{4\pi}\iint_{\text{u.c.}}\mathbf{\mathcal{\bar{R}}}\cdot[\partial_{x}\mathbf{ \mathcal{\bar{R}}}\times\partial_{y}\mathbf{\mathcal{\bar{R}}}]\,dx\,dy, \tag{10}\]
where \(\mathbf{\mathcal{\bar{R}}}=\mathbf{\mathcal{R}}/|\mathbf{\mathcal{R}}|\). In the case under consideration, the skyrmion number is \(Q=1\) at \(t=0\), but it can change its sign over time, because the displacement evolves and becomes opposite after half a period, \(t=\pi/\omega\)[33]. Figure 2(b) displays another representation of the skyrmion lattice, where colors and black vectors indicate the \(z\) and \((x,y)\) components of the displacement-direction field \(\mathbf{\mathcal{\bar{R}}}\). Moving from the center of the cell towards its boundary, the vector \(\mathbf{\mathcal{\bar{R}}}\) undergoes a rotation, where its \(z\)-component changes sign, resulting in a nontrivial winding captured by the nonzero skyrmion charge \(Q\). Similar skyrmion lattices have been observed in electromagnetic [25], sound [32; 33], and elastic [34] vector wavefields.
Third, instead of the instantaneous vector field \(\mathbf{\mathcal{R}}\), one can trace the 3D direction of the spin density vector \(\mathbf{S}\) (normal to the local polarization ellipse). Figure 2(e) shows the distribution of the unit spin vector \(\mathbf{\bar{S}}=\mathbf{S}/|\mathbf{S}|\) in the field (9). One can see that the unit hexagonal cell is split into triangular zones with \(\bar{S}_{z}>0\) and \(\bar{S}_{z}<0\) separated by lines with \(\bar{S}_{z}=0\) and having singular points \(\mathbf{S}=\mathbf{0}\) in the vertices. The centers of these triangles with \(\bar{S}_{z}=\pm 1\) (i.e., circular in-plane polarizations) correspond to the centers of wave vortices with \(\ell=\pm 1\), Fig. 2(d) [19]. Calculating the topological charges (10) for the spin field \(\mathbf{\bar{S}}\) in the triangular zones, we obtain \(Q_{S}=\mp 1/2\) for the zones with \(\bar{S}_{z}\lessgtr 0\). Such topologically nontrivial textures are called _merons_ or half-skyrmions, because the spin directions in each zone covers the upper or lower semisphere. Similar spin merons have been observed in electromagnetic waves [27; 30; 64].
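Both the skyrmion number \(Q\) of the displacement field and the meron charges \(Q_{S}\) of the spin field can be evaluated numerically from Eq. (10). A minimal sketch, assuming the unit-vector field is sampled on a regular grid covering exactly the region of integration, is:

```python
import numpy as np

def skyrmion_number(n, dx, dy):
    """Eq. (10): Q = (1/4pi) * integral of n . (dn/dx x dn/dy) over the region.
    n: unit-vector field of shape (3, Ny, Nx) on a regular (x, y) grid."""
    dn_dy, dn_dx = np.gradient(n, dy, dx, axis=(1, 2))
    density = np.einsum('iyx,iyx->yx', n, np.cross(dn_dx, dn_dy, axis=0))
    return density.sum() * dx * dy / (4 * np.pi)
```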
Here we showed only one simple example of the water-wave interference field, which already exhibits three types of topological structures: wave vortices, field skyrmions, and spin merons. These topological entities are rather universal and appear in many other interference fields. In particular, the square lattice from the interference of two standing waves, observed in [16; 17; 18; 19], contains a lattice of vortices and spin merons (cf. [64; 30]), while the zero-order \(\ell=0\) Bessel mode, Eqs. (2) and (3) and Fig. 1, contains a field skyrmion (cf. [25; 31]).
Figure 2: Hexagonal lattice produced by the interference of three waves with equal frequencies, amplitudes, and color-coded phases shown in (c). (a) Instantaneous water surface \(\mathcal{Z}(x,y,0)\) and water-surface particle displacements \(\mathbf{\mathcal{R}}(x,y,0)\) for the field (9). The displacement directions in the unit hexagonal cell are mapped onto the unit sphere, providing a skyrmion with the topological charge \(Q=1\), Eq. (10). (b) The unit displacement-direction field \(\mathbf{\mathcal{\bar{R}}}(x,y,0)\) represented by colors (vertical component \(\mathcal{\bar{Z}}\)) and black arrows (in-plane components \(\mathbf{\mathcal{\bar{R}}}_{2}\)). (d) The complex vertical-displacement field \(Z(x,y)\) and the Stokes drift \(\mathbf{U}\) indicating the lattice of alternating vortices with \(\ell=\pm 1\). (e) The unit spin-density field \(\mathbf{\bar{S}}(x,y)\) represented similarly to (b). The hexagonal unit cell is split into triangular zones of spin merons (half-skyrmions) with topological charges \(Q_{S}=\pm 1/2\) and centers with \(\bar{S}_{z}=\pm 1\) corresponding to the \(\ell=\pm 1\) vortices in (d).
_Spatiotemporal vortices and skyrmions.--_ Finally, we demonstrate another class of topological entities which can be readily generated in water waves: _spatiotemporal_ vortices [37; 38; 39; 40; 41] and skyrmions. The generation of an isolated spatiotemporal vortex pulse, similar to the one produced in optics and acoustics, can be challenging and here we describe a spatiotemporal vortex lattice. For this, it is sufficient to slightly detune the frequency of one of the three interfering plane waves in Fig. 2: \(\omega_{1}\to\omega+\delta\omega\), \(k_{1}\to k+\delta k=(\omega+\delta\omega)^{2}/g\), Fig. 3(b). This transforms the wave field (8) as \(\Phi_{1}\to\Phi_{1}-i\delta\omega t\), so that the spatial lattice in Fig. 2 becomes moving along the \(x\)-axis, and the field becomes a function of space and time: \(\mathbf{R}(\mathbf{r}_{2},t)\).
The real displacement field is \(\mathbf{\mathcal{R}}(\mathbf{r}_{2},t)=\mathrm{Re}[\mathbf{R}(\mathbf{r}_{2},t)e^ {-i\omega t}]\), but we will analyze the real field \(\mathbf{\mathcal{R}}^{\prime}(\mathbf{r}_{2},t)=\mathrm{Re}[\mathbf{R}(\mathbf{r} _{2},t)]\) subtracting the common fast oscillations \(e^{-i\omega t}\). Plotting the complex field \(Z\) and real field \(\mathbf{\mathcal{R}}^{\prime}\) in the spacetime domain \((t,y)\) at the fixed coordinate \(x=0\), we find that they exhibit a hexagonal (up to scaling) lattice of vortices and skyrmions, Fig. 3. These spatiotemporal vortices and skyrmions have opposite topological charges \(\ell\) and \(Q\) compared to their spatial counterparts in Fig. 2.
_Conclusions.--_ We have analyzed the fundamental topologically nontrivial objects in linear water-surface (gravity) waves, namely: wave vortices, surface-particle displacement skyrmions, spin-density merons, as well as spatiotemporal vortices and skyrmions. All these objects are universal across different types of waves and only require standard wave-interference ingredients: relative phases/amplitudes, polarizations, and spectral detuning. These parameters control the geometry and topology of the wave interference field.
Surprisingly, the wave vortices, skyrmions, and other topological objects, which have been intensively studied and have found applications in numerous quantum and classical waves, have not been properly explored in the most usual classical water-surface waves. At the same time, recent experiments [16; 17; 18; 19] show that the generation and detection of such topological structures in water-wave interference is quite feasible.
Notably, the vector features of water waves (displacement fields) are directly observable, while in other fields these are usually measured via various indirect methods. Therefore, we argue that water waves offer a highly attractive platform for emulating topologically nontrivial field structures and wave phenomena in a unified fashion. Furthermore, nontrivial dynamical properties of topological water-wave objects -- circulating Stokes-drift currents, fast circular motions (spin) in the centers of the first-order vortices, vanishing fields in the centers of higher-order vortices, etc. -- can be attractive for fluid-mechanical applications, such as manipulations of particles in microfluidics [42; 43].
This work is supported in part by the Japan Society for the Promotion of Science (JSPS), Nippon Telegraph and Telephone Corporation (NTT) Research, the Asian Office of Aerospace Research and Development (AOARD) [Grant No. FA2386-20-1-4069], and the Foundational Questions Institute Fund (FQXi) [Grant No. FQXi-IAF19-06].
|
2303.11256 | The spherical Whittaker Inversion Theorem and the quantum non-periodic
Toda Lattice | In this paper the spherical case of the Whittaker Inversion Theorem is given
a relatively self-contained proof. This special case can be used as a help in
deciphering the handling of the continuous spectrum in the proof of the full
theorem. It also leads directly to the solution of the quantum non-periodic
Toda Lattice. This is also explained in detail in this paper. | Nolan R. Wallach | 2023-03-20T16:41:47Z | http://arxiv.org/abs/2303.11256v3 | # The spherical Whittaker Inversion Theorem and the quantum non-periodic Toda Lattice
###### Abstract
In this paper the spherical case of the Whittaker Inversion Theorem is given a relatively self-contained proof. This special case can be used as a help in deciphering the handling of the continuous spectrum in the proof of the full theorem. It also leads directly to the solution of the quantum non-periodic Toda Lattice. This is also explained in detail in this paper.
## 1 Introduction
The main purpose of this paper is to give a complete, relatively self-contained, proof of the spherical Whittaker Inversion Theorem for real reductive groups. This result is an important special case of the general theorem but it is unencumbered by the complications caused by discrete spectrum. Reading it can be used as a help in understanding the arguments used in [W3] to handle the continuous spectrum. By relatively self-contained I mean that it will be based on two main results: The first is the Harish-Chandra Plancherel Formula for \(L^{2}(G/K)\) (actually the inversion formula) as developed in the work of Helgason [He1],[He2]. The second is the holomorphic continuation of the Jacquet Integral for minimal parabolic subgroups and its brilliant implication due to Raphael Beuzart-Plessis [B]. The full theorem was announced in the 1980's (see [RRGII]) but a correct proof has only recently appeared in [W3]. The applications of these results to the theory of automorphic forms inundate the literature. We will include in the last section of this paper an application to the quantum non-periodic Toda lattice (see [GW] for
more background on the subject). If you are a physicist or a mathematician who is not an expert in Representation Theory, I would recommend that you read Section 7 of this paper first.
## 2 Some notation
Let \(G\) be a real reductive group. In this paper there is no loss of generality in taking \(G\) to be the identity component of a subgroup of \(GL(n,\mathbb{R})\) that is the locus of zeros of a set of polynomials on \(M_{n}(\mathbb{R})\) and such that \(G\) is invariant under transpose (\(g\mapsto g^{T}\)). We set \(K=G\cap O(n)\), a maximal compact subgroup of \(G\). We choose an Iwasawa decomposition of \(G\) given by \(G=NAK\) with \(N\) a maximal unipotent subgroup of \(G\) (i.e. the elements of \(N\) are of the form \(I+X\) with \(X\) nilpotent), \(A\) a subgroup maximal among the subgroups of \(G\) contained in the set of symmetric positive definite matrices, and \(aNa^{-1}\subset N,a\in A\). Let \(\mathfrak{a}=Lie(A),\mathfrak{n}=Lie(N)\) and \(\mathfrak{g}=Lie(G)\). The operators \(ad(h)_{|\mathfrak{n}}\), \(h\in\mathfrak{a}\), can be simultaneously diagonalized, yielding the roots \(\Phi^{+}\) of \(\mathfrak{a}\) on \(\mathfrak{n}\). The exponential map \(\exp:\mathfrak{a}\to A\) is a Lie group isomorphism (\(\mathfrak{a}\) is a group under addition); set \(\log:A\rightarrow\mathfrak{a}\) equal to the inverse map. For \(\lambda\in\mathfrak{a}_{\mathbb{C}}^{*}\) and \(a\in A\) we define \(a^{\lambda}=e^{\lambda(\log(a))}.\) Define
\[\rho(h)=\frac{1}{2}\mathrm{tr}(ad(h)_{|\mathfrak{n}}).\]
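For orientation we record the simplest example (added here and not used later): for \(G=SL(2,\mathbb{R})\) one may take \(K=SO(2)\), \(A=\{\mathrm{diag}(e^{t},e^{-t})\}\) and \(N\) the upper triangular unipotent matrices, so that \(\mathfrak{n}=\mathbb{R}E_{12}\); for \(h=\mathrm{diag}(t,-t)\) the unique positive root is \(\alpha(h)=2t\) and

\[\rho(h)=\frac{1}{2}\mathrm{tr}(ad(h)_{|\mathfrak{n}})=t,\]

so \(\rho=\frac{\alpha}{2}\).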
If \(x,y\in\mathfrak{g}\) then set \(\langle x,y\rangle=\mathrm{tr}(xy^{T})\) and \(\|x\|=\langle x,x\rangle^{\frac{1}{2}}\). Let \(m=\dim\mathfrak{n}\). On \(\wedge^{m}\mathfrak{g}\) we put the inner product induced by \(\langle...,...\rangle\). Let \(Int(g)\) be the transformation of \(\,\mathfrak{g}\) given by \(Int(g)x=gxg^{-1}\). Define for \(g\in G\), \(|g|\) to be the operator norm of \(\wedge^{m}Int(g)\). Set \(A_{G}\) equal to the subgroup of elements of the center of \(G\) that are positive definite. Then \(KA_{G}[G,G]=G\). Define
\[\|kag\|=e^{\|\log a\|}\left|g\right|^{\frac{1}{2}},k\in K,a\in A_{G},g\in[G,G].\]
If \(g\in G\) then \(g\) can be written in the form \(g=k_{1}ak_{2}\) with \(a\in A\) such that \(a^{\alpha}\geq 1\) for \(\alpha\in\Phi^{+}\) and \(k_{1},k_{2}\in K\). One can check that if \(g\) is of this form and if \(g\in[G,G]\) then
\[|g|=a^{2\rho}.\]
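Continuing the \(SL(2,\mathbb{R})\) illustration (added for orientation): for \(a=\mathrm{diag}(e^{t},e^{-t})\) with \(t\geq 0\), \(Int(a)\) has eigenvalues \(e^{2t},1,e^{-2t}\) on \(\mathfrak{g}\), so the operator norm of \(\wedge^{1}Int(a)\) is \(e^{2t}\) and indeed \(|a|=e^{2t}=a^{2\rho}\).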
It is easily seen that
\[\|x\|\geq 1,\|xy\|\leq\|x\|\left|y\right|\]
and that the sets
\[\|x\|\leq r\]
are compact for \(r<\infty\). It is a bit harder to prove that there exists \(d\) such that
\[\int_{G}\left|x\right|^{-1}(1+\log\|x\|)^{-d}dx<\infty.\]
Let \(M=\{k\in K|ka=ak,a\in A\}\).
We now recall the Harish-Chandra Schwartz space. If \(f\in C^{\infty}(G)\), \(x,y\in U(\mathfrak{g})\), \(d\in\mathbb{R}\) then set
\[p_{x,y,d}(f)=\sup\{\left|R_{x}L_{y}f(g)\right|\left|g\right|^{\frac{1}{2}}(1+ \log\|g\|)^{d}|g\in G\}.\]
Here if \(X\in\mathfrak{g}\) then \(R_{X}f(g)=\frac{d}{dt}f(g\exp tX)_{|t=0}\) and \(L_{Y}f(g)=\frac{d}{dt}f(\exp(-tY)g)_{|t=0}\); since \([L_{X},L_{Y}]=L_{[X,Y]}\) and \([R_{X},R_{Y}]=R_{[X,Y]}\), the universal mapping property of the universal enveloping algebra allows us to define \(R_{x}\) and \(L_{y}\) for all \(x,y\in U(\mathfrak{g})\).
Define \(\mathcal{C}(G)=\{f\in C^{\infty}(G)|p_{x,y,d}(f)<\infty,x,y\in U(\mathfrak{g}), d\in\mathbb{R}\}\) endowed with the topology defined by the semi-norms \(p_{x,y,d}\).
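As a simple illustration of these definitions (added for orientation): if \(G=GL(1,\mathbb{R})^{0}=\mathbb{R}_{>0}\) then \(\mathfrak{n}=0\), \(|x|=1\) and \(\|x\|=e^{|\log x|}\), so writing \(F(t)=f(e^{t})\) the semi-norms become \(\sup_{t}|F^{(k)}(t)|(1+|t|)^{d}\), and \(\mathcal{C}(G)\) is the classical Schwartz space \(\mathcal{S}(\mathbb{R})\) in the variable \(t=\log x\).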
## 3 The Plancherel Theorem for \(\mathcal{C}(G/K)\)
Note that the map \(\theta:G\to G\) given by \(\theta(g)=(g^{T})^{-1}\) is an automorphism of \(G\). Set \(\bar{N}=\theta(N)\). We have the corresponding Iwasawa decomposition \(G=\bar{N}AK\). Define for \(g=\bar{n}ak\), \(\bar{n}\in\bar{N},a\in A,k\in K,\)\(a(g)=a,k(g)=k\). Since the Iwasawa decomposition is unique, \(a:G\to A\) and \(k:G\to K\) are \(C^{\infty}\). If \(f\in C^{\infty}(M\backslash K)\) define for \(\nu\in\mathfrak{a}_{\mathbb{C}}^{*}\)
\[f_{\nu}(\bar{n}ak)=a^{\nu-\rho}f(k).\]
This defines \(f_{\nu}\) as a \(C^{\infty}\) function on \(G\). Define an action of \(G\) on \(C^{\infty}(M\backslash K)\) by
\[\left(\pi_{\nu}(g)f\right)(k)=f_{\nu}(kg).\]
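(As an added orienting example: for \(G=SL(2,\mathbb{R})\) one has \(M=\{\pm I\}\), so \(C^{\infty}(M\backslash K)\) is the space of even smooth functions on the circle \(K=SO(2)\), and \((\pi_{\nu},C^{\infty}(M\backslash K))\) is the usual spherical principal series.)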
If we put the \(C^{\infty}\) topology on \(C^{\infty}(M\backslash K)\) then \(\left(\pi_{\nu},C^{\infty}(M\backslash K)\right)\) is a smooth Frechet representation of \(G\). We also put the \(L^{2}\)-inner product on \(C^{\infty}(M\backslash K)\)
\[\langle u,w\rangle=\int_{K}u(k)\overline{w(k)}dk\]
and find that
\[\left\langle\pi_{\nu}(g)u,w\right\rangle=\left\langle w,\pi_{-\bar{\nu}}(g^{-1})u\right\rangle\]
where \(\bar{\nu}(h)=\overline{\nu(h)}\). In particular, if \(\nu\in\mathfrak{a}^{*}\) then \((\pi_{i\nu},L^{2}(M\backslash K))\) is a unitary representation. Let \(f\in C_{c}^{\infty}(G/K)\) then set
\[\left(\pi_{\nu}(f)u\right)(k)=\int_{G}u_{\nu}(kg)f(g)dg.\]
If \(\lambda\in\mathfrak{a}^{*}\) define \(H_{\lambda}\in\mathfrak{a}\) by \(\left\langle H_{\lambda},h\right\rangle=\lambda(h)\) for \(h\in\mathfrak{a}\). Define \((\lambda,\mu)=\left\langle H_{\lambda},H_{\mu}\right\rangle\) and extend \((...,...)\) to \(\mathfrak{a}^{*}_{\mathbb{C}}\) bilinearly. Then set
\[c(\nu)=\int_{N}a(n)^{\nu-\rho}dn\]
where we leave, for the moment, the bi-invariant measure on \(N\) unnormalized. This integral converges absolutely for \(\nu\in\mathfrak{a}^{*}_{\mathbb{C}}\) such that
\[\mathrm{Re}(\nu,\alpha)<0\]
for all \(\alpha\in\Phi^{+}\) and uniformly in compacta in this set. Thus \(c(\nu)\) defines a holomorphic function on this subset. It has a meromorphic continuation to all of \(\mathfrak{a}^{*}_{\mathbb{C}}\); indeed, Gindikin and Karpelevich derived an explicit formula (c.f. [He2] or [W2] 8.10.18): if we set for \(\alpha\in\Phi^{+}_{0}=\{\beta\in\Phi^{+}|\frac{\beta}{2}\notin\Phi^{+}\},\)
\[c_{\alpha}(\nu)=\left\{\begin{array}{c}B(\frac{\dim\mathfrak{n}_{\alpha}}{2 },\frac{(\nu,\alpha)}{(\alpha,\alpha)})\text{ if }2\alpha\notin\Phi^{+}\\ B(\frac{\dim\mathfrak{n}_{\alpha}}{2},\frac{(\nu,\alpha)}{(\alpha,\alpha)})B( \frac{\dim\mathfrak{n}_{2\alpha}}{2},\frac{(\nu,\alpha)}{2(\alpha,\alpha)}+ \frac{\dim\mathfrak{n}_{\alpha}+\dim\mathfrak{n}_{2\alpha}}{2})\text{ if }2\alpha\in\Phi^{+} \end{array}\right..\]
Then the measure on \(N\) can be normalized so that
\[c(\nu)=\prod_{\alpha\in\Phi^{+}_{0}}c_{\alpha}(\nu).\]
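For example (an illustration added here), for \(G=SL(2,\mathbb{R})\) there is a single root \(\alpha\) with \(\dim\mathfrak{n}_{\alpha}=1\) and \(2\alpha\notin\Phi^{+}\), so with this normalization

\[c(\nu)=B\Big(\frac{1}{2},\frac{(\nu,\alpha)}{(\alpha,\alpha)}\Big)=\sqrt{\pi}\,\frac{\Gamma\big(\frac{(\nu,\alpha)}{(\alpha,\alpha)}\big)}{\Gamma\big(\frac{(\nu,\alpha)}{(\alpha,\alpha)}+\frac{1}{2}\big)}.\]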
Set for \(\nu\in\mathfrak{a}^{*}\)
\[\mu(\nu)=\frac{1}{c(i\nu)c(-i\nu)}=\frac{1}{\left|c(i\nu)\right|^{2}}\]
one can show, using the formula for \(c(\nu)\) and basic properties of the \(\Gamma\)-function, that \(\mu(\nu)\leq C(1+\left\|\nu\right\|^{r})\) for some \(r\).
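In the \(SL(2,\mathbb{R})\) example above one can make this bound explicit (a computation added only as an illustration): writing \(\lambda=\frac{(\nu,\alpha)}{(\alpha,\alpha)}\) for \(\nu\in\mathfrak{a}^{*}\), the identities \(\Gamma(\tfrac{1}{2}+i\lambda)\Gamma(\tfrac{1}{2}-i\lambda)=\pi/\cosh(\pi\lambda)\) and \(\Gamma(i\lambda)\Gamma(-i\lambda)=\pi/(\lambda\sinh(\pi\lambda))\) give

\[\mu(\nu)=\frac{\lambda}{\pi}\tanh(\pi\lambda),\]

which grows only linearly in \(\|\nu\|\).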
We are now ready to develop the Plancherel theorem for \(G/K.\) We first recall a special case of the Harish-Chandra Plancherel Theorem:
**Theorem 1**: _The measure on \(G\) and \(\mathfrak{a}^{*}\) can be normalized so that if \(\phi\in\mathcal{C}(K\backslash G/K)\) (that is \(\phi(k_{1}gk_{2})=\phi(g),k_{1},k_{2}\in K\)) then_
\[\phi(g)=\int_{\mathfrak{a}^{*}}\left\langle\pi_{i\nu}(L_{g^{-1}}\phi)1,1 \right\rangle\mu(\nu)d\nu.\]
There is a relatively elementary proof of precisely this result due to Anker [A]. Harish-Chandra proved much more. We recall an argument in Helgason [He2] section III.1, in the proof of the following implication.
**Theorem 2**: _Let \(f\in\mathcal{C}(G/K)\) then with the same normalizations as in the previous result we have_
\[f(g)=\int_{\mathfrak{a}^{*}}\left\langle\pi_{i\nu}(L_{g^{-1}}f)1,1\right\rangle \mu(\nu)d\nu.\]
**Note:** The point here is that \(f\) is not necessarily left \(K\)-finite (since under that condition the result would be a special case of Harish-Chandra's Theorem).
**Proof.** Define for \(g,x\in G\)
\[\phi(g,x)=\int_{K}f(gkx)dk.\]
Note that the Casimir operator, \(C\), corresponding to the choice of \(B\) on \(\mathfrak{g}\) yields the Laplacian of the Riemannian structure on \(G\) given by the inner product on \(\mathfrak{g}/Lie(K)\) induced by \(B\). This and Sobolev theory imply that if \(g\) is fixed then \(x\mapsto\phi(g,x)\) is in \(\mathcal{C}(K\backslash G/K)\). Thus keeping \(g\) fixed we have
\[\phi(g,I)=\int_{\mathfrak{a}^{*}}\left\langle\pi_{i\nu}(\phi(g,\cdot))1,1 \right\rangle\mu(\nu)d\nu.\]
Now

\[\left\langle\pi_{i\nu}(\phi(g,\cdot))1,1\right\rangle=\int_{G}\int_{K}f(gx)\left\langle\pi_{i\nu}(k^{-1}x)1,1\right\rangle dkdx=\int_{G}f(gx)\left\langle\pi_{i\nu}(x)1,1\right\rangle dx=\left\langle\pi_{i\nu}(L_{g^{-1}}f)1,1\right\rangle.\]
Noting that \(\phi(g,I)=f(g)\) completes the proof.
## 4 The holomorphic continuation of Jacquet integrals and a Theorem of Beuzart-Plessis
Retain the notation of the previous sections. If \(\chi:N\to S^{1}\) is a unitary (one dimensional) character of \(N\) we consider for \(u\in C^{\infty}(M\backslash K)\)
\[J_{\chi,\nu}(u)=\int_{N}\chi(n)^{-1}u_{\nu}(n)dn.\]
Note that if \(\chi=1\) and \(u=1\) then the integral is the one that defines the Harish-Chandra c-function. This implies that the integral defining \(J_{\chi,\nu}\) converges absolutely if \(\mbox{Re}(\nu,\alpha)<0\) for all \(\alpha\in\Phi^{+}\). Let \(\Delta\) be the set of elements of \(\Phi^{+}\) that appear in \(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}].\) Thus \(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}]=\oplus_{\alpha\in\Delta}\left( \mathfrak{n}/[\mathfrak{n},\mathfrak{n}]\right)_{\alpha}.\) We say that \(\chi\) is generic if its differential is non-zero on each of the spaces \(\left(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}]\right)_{\alpha}\) with \(\alpha\in\Delta\). The Jacquet integrals are the \(J_{\chi,\nu}\) with \(\chi\) generic. The holomorphic continuation of Jacquet integrals in this generality (non-K-finite \(u\)) was first proved in [W1] (c.f. [RRGII] Theorem 15.6.7).
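For orientation (an added remark, not used below): for \(G=SL(2,\mathbb{R})\) and \(\chi\) generic, the function \(y\mapsto J_{\chi,\nu}(\pi_{\nu}(\mathrm{diag}(\sqrt{y},1/\sqrt{y}))1)\) is, up to elementary factors, a classical modified Bessel (Macdonald) function \(K_{\mu}(cy)\), where \(c>0\) is determined by \(\chi\) and the order \(\mu\) is a linear function of \(\nu\); thus the Jacquet integrals generalize classical Whittaker-Bessel special functions.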
**Theorem 3**: _If \(\chi\) is generic and if \(u\in C^{\infty}(M\backslash K)\) then \(J_{\chi,\nu}(u)\) has a holomorphic continuation to \(\mathfrak{a}_{\mathbb{C}}^{*}\), furthermore the map \(\nu\mapsto J_{\chi,\nu}\) is a weakly holomorphic map of \(\mathfrak{a}_{\mathbb{C}}^{*}\) to \(C^{\infty}(M\backslash K)^{\prime}\) (the continuous dual space)._
If \(\chi\) is not generic then one can use this result combined with the meromorphic continuation of conical vectors to prove a meromorphic continuation result.
The Theorem of Beuzart-Plessis [B], which is based on the holomorphy of \(\nu\mapsto J_{\chi,\nu}(1)\), is
**Theorem 4**: _If \(\chi\) is a generic character of \(N\) then there exists \(\varepsilon>0\) such that_
\[\int_{\ker(\chi)}a(n)^{-(1-\varepsilon)\rho}dn<\infty.\]
We will also need the following results (see [W3] Theorem 43 for the full details of a proof of Proposition 6) whose proof is complicated and uses parts of the proof of the holomorphic continuation of the Jacquet integrals ([W1]). We will just give an idea of why they are true.
**Lemma 5**: _Assume that \(\chi\) is generic. There exists a continuous semi-norm, \(q\), on \(C^{\infty}(M\backslash K)\) such that_
\[|J_{\chi,\nu}(u)|\leq q(u)c(\mathop{\rm Re}\nolimits\nu)\]
_if \(u\in C^{\infty}(M\backslash K)\) and \(\mathop{\rm Re}\nolimits(\nu,\alpha)<0\) for \(\alpha\in\Phi^{+}\)._
**Proof.** We have if \(\nu\) satisfies the condition then
\[J_{\chi,\nu}(u)=\int_{N}\chi(n)^{-1}u_{\nu}(n)dn=\int_{N}\chi(n)^{-1}a(n)^{\nu -\rho}u(k(n))dn\]
so defining \(q(u)=\sup_{k\in K}|u(k)|\) we have
\[|J_{\chi,\nu}(u)|\leq q(u)\int_{N}a(n)^{\mathop{\rm Re}\nolimits\nu-\rho}dn=q(u)c(\mathop{\rm Re}\nolimits\nu).\]
**Proposition 6**: _Assume that \(\chi\) is generic and let \(0<r<\infty.\) There exists a continuous semi-norm, \(q_{r}\), on \(C^{\infty}(M\backslash K)\) and \(m_{r}\) such that_
\[|J_{\chi,\nu}(u)|\leq q_{r}(u)(1+\|\mathop{\rm Im}\nolimits\nu\|)^{m_{r}}\]
_for \(\nu\in{\mathfrak{a}}_{\mathbb{C}}^{*}\) such that \(0>\mathop{\rm Re}\nolimits(\nu,\alpha)>-r(\rho,\alpha),\alpha\in\Phi^{+}\)._
**Proof.** This is proved using an argument involving tensoring with finite dimensional representations and details from the shift argument used in the proof of the holomorphic continuation of \(J_{\chi,\nu}(u)\) in Section 15.5 of [RRGII].
**Proposition 7**: _Assume that \(\chi\) is generic. If \(0<r<\infty\) is fixed and if \(f\in{\cal C}(G/K)\) then for each \(m\) there exists \(C_{m,r}\) such that_
\[|J_{\chi,i\nu-z\rho}(\pi_{i\nu}(f)1)|\leq C_{m,r}(1+\|\nu\|)^{-m}\]
_for \(\nu\in{\mathfrak{a}}^{*}\) and \(0\geq\mathop{\rm Re}\nolimits z>-r\)_
**Proof.** We note that if \(p\) is a continuous semi-norm on \(C^{\infty}(M\backslash K)\) then there exist \(k_{p}\) and a constant \(L_{p}\) such that

\[p(u)\leq L_{p}\left\|(1+C_{K})^{k_{p}}u\right\|.\]
Thus Proposition 6 implies that
\[|J_{\chi,i\nu-z\rho}(\pi_{i\nu}(f)1)|\leq q_{r}(\pi_{i\nu}(f)1)(1+\|\nu\|)^{m_{r}}\]
\[\leq L_{q_{r}}\left\|\pi_{i\nu}(L_{(1+C_{K})^{k_{q_{r}}}}f)1\right\|(1+\|\nu\|)^{m_{r}}.\]

Now applying Lemma 15 in Appendix 1 to \(L_{(1+C_{K})^{k_{q_{r}}}}f\) with \(l=m+m_{r}\) completes the proof.
By analogy with the Harish-Chandra Schwartz space we have the Whittaker Schwartz space: if \(g\in G\) and \(g=nak\), \(n\in N,a\in A,k\in K\), then set \(a_{o}(g)=a\). That is, \(a_{o}(g)=a(\theta(g))^{-1}\).
\[C^{\infty}(N\backslash G;\chi)=\{f\in C^{\infty}(G)|f(ng)=\chi(n)f(g),n\in N,g\}.\]
If \(f\in C^{\infty}(N\backslash G;\chi),x\in U(\mathfrak{g})\) then set
\[q_{x,d}(f)=\sup_{g\in G}a_{o}(g)^{-\rho}(1+\|\log a_{o}(g)\|)^{d}|R_{x}f(g)|.\]
Then \({\cal C}(N\backslash G;\chi)\) is the space of \(f\in C^{\infty}(N\backslash G;\chi)\) such that \(q_{x,d}(f)<\infty\) for all \(x,d\).
## 5 The key formula
In this section \(f\in{\cal C}(G/K)\). Then we have seen that
\[f(g)=\int_{\mathfrak{a}^{*}}\left\langle\pi_{i\nu}(L_{g^{-1}}f)1,1\right\rangle \mu(\nu)d\nu=\int_{\mathfrak{a}^{*}}\left\langle\pi_{i\nu}(f)1,\pi_{i\nu}(g)1 \right\rangle\mu(\nu)d\nu.\]
Thus
\[\overline{f(g)}=\int_{\mathfrak{a}^{*}}\left\langle\pi_{i\nu}(g)1,\pi_{i\nu} (f)1\right\rangle\mu(\nu)d\nu.\]
**Theorem 8**: \(\int_{N}\chi(n)\overline{f(ng)}dn=\int_{\mathfrak{a}^{*}}J_{\chi^{-1},i\nu}( \pi_{i\nu}(g)1)\overline{J_{\chi^{-1},i\nu}(\pi_{i\nu}(f)1)}\mu(\nu)d\nu.\)__
**Proof.** In this proof we will use the notation \(u(\nu)=\pi_{i\nu}(f)1\in C^{\infty}(M\backslash K)\). Using the non-compact model for the unitary principal series (c.f. [W2] 8.4.7) one has
\[\left\langle\pi_{i\nu}(g)1,\pi_{i\nu}(f)1\right\rangle=\int_{N}a(ng)^{i\nu-\rho}a(n)^{-i\nu-\rho}\overline{u(\nu)(k(n))}dn.\]
Thus
\[\int_{N}\chi(n)\overline{f(ng)}dn=\int_{N}\chi(n_{1})\int_{\mathfrak{a}^{*}}\int_ {N}a(nn_{1}g)^{i\nu-\rho}a(n)^{-i\nu-\rho}\overline{u(\nu)_{i\nu}(k(n))}dn\mu( \nu)d\nu dn_{1}.\]
We first deform the parameter and consider
\[\int_{N}\chi(n_{1})\int_{\mathfrak{a}^{*}}\int_{N}a(nn_{1}g)^{i\nu-(1+z)\rho}a( n)^{-i\nu-(1+z)\rho}\overline{u(\nu)_{i\nu}(k(n))}dn\mu(\nu)d\nu dn_{1}\]
for \(\operatorname{Re}z<0\). If we put absolute values on all of the terms we have
\[\int_{N}\int_{\mathfrak{a}^{*}}\int_{N}a(nn_{1}g)^{-(1+\operatorname{Re}z) \rho}a(n)^{-(1+\operatorname{Re}z)\rho}\left|\overline{u(\nu)_{i\nu}(k(n))} \right|dn\mu(\nu)d\nu dn_{1}\]
and using Lemma 15 in Appendix 1 (and the notation therein) we have \(\left|\overline{u(\nu)_{i\nu}(k(n))}\right|\leq C_{1,l}(1+\|\nu\|)^{-l}\) with \(C_{1,l}<\infty\). Thus the integrand is dominated by
\[\int_{\mathfrak{a}^{*}}\int_{N}\int_{N}a(nn_{1}g)^{-(1+\operatorname{Re}z) \rho}a(n)^{-(1+\operatorname{Re}z)\rho}dn(1+\|\nu\|)^{-m}d\nu dn_{1}\]
since \(\mu(\nu)\leq B(1+\|\nu\|)^{r}\) for some \(r\) and we take \(m\) to be greater than \(\dim A\); this converges for \(\operatorname{Re}z<0\). Noting that \(a(ng)^{-\rho}\leq C_{\omega}a(n)^{-\rho}\) if \(g\in\omega\), a compact set, we see that the integral of the absolute values is dominated by a multiple of
\[C_{\omega}^{(1+\operatorname{Re}z)\rho}\int_{N}\int_{N}a(n_{1})^{-(1+ \operatorname{Re}z)\rho}a(n)^{-(1+\operatorname{Re}z)\rho}dndn_{1}<\infty.\]
We can therefore do the deformed integral in any order. We choose
\[\int_{N}\chi(n_{1})\chi(n)^{-1}\int_{\mathfrak{a}^{*}}\int_{N}a(n_{1}g)^{i\nu -(1+z)\rho}a(n)^{-i\nu-(1+z)\rho}\overline{u(\nu)(k(n))}dn\mu(\nu)d\nu dn_{1}=\]
\[\int_{\mathfrak{a}^{*}}J_{\chi^{-1},i\nu-z\rho}(\pi_{i\nu-z\rho}(g)1) \overline{J_{\chi^{-1},i\nu-\bar{z}\rho}(\pi_{i\nu}(f)1)}\mu(\nu)d\nu.\]
We are left with taking the limit under the integral sign \(z\to 0\). This will be done indirectly.
Let \(x_{o}\) be perpendicular to \(\ker d\chi\) relative to \(\langle...,...\rangle\) and assume that \(d\chi(x_{o})=-i1.\) We define
\[\tau_{z}(t)=\int_{\ker\chi}\int_{\mathfrak{a}^{*}}\int_{N}a(n\exp tx_{o}n_{1}g )^{i\nu-(1+z)\rho}a(n)^{-i\nu-(1+z)\rho}\overline{u(\nu)(k(n))}dn\mu(\nu)d\nu dn _{1}\]
for \(\operatorname{Re}z\geq 0\). Then Proposition 17 in the Appendix 1 implies that if \(\omega\) is a compact subset of \(G\) then
\[|\tau_{z}(t)|\leq C_{\omega}^{(1+\operatorname{Re}z)}B(1+|t|)^{d}\]
with \(C_{\omega},d\) and \(B\) finite. Fix \(g\in G\). The estimate implies that we can define a family of tempered distributions on \(\mathbb{R}\) by
\[T_{z}(\phi)=\int_{-\infty}^{\infty}\tau_{z}(t)\phi(t)dt\]
for \(\operatorname{Re}z\geq 0\). Dominated convergence implies that the map \(z\mapsto T_{z}\) is a weakly continuous map of \(\{z|\operatorname{Re}z\geq 0\}\) to \(\mathcal{S}^{\prime}(\mathbb{R})\) (the space of tempered distributions). Let \(\mathcal{F}\) be the usual Fourier transform
\[\mathcal{F}(\phi)(s)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-ist}\phi( t)dt\]
which is a continuous linear isomorphism of \(\mathcal{S}(\mathbb{R}).\) If \(T\) is a tempered distribution we define (as usual) \(\mathcal{F}(T)=T\circ\mathcal{F}\). If \(T\) is given by integration against an element of \(L^{1}(\mathbb{R})\), i.e. \(T(\phi)=\int_{-\infty}^{\infty}\tau(t)\phi(t)dt\) with \(\tau\in L^{1}(\mathbb{R})\), then

\[\mathcal{F}(T)(\phi)=\int_{-\infty}^{\infty}\mathcal{F}(\tau)(t)\phi(t)dt.\]
If \(\operatorname{Re}z>0\) then as we have seen above
\[n_{1}\mapsto\int_{\mathfrak{a}^{*}}\int_{N}a(nn_{1}g)^{i\nu-(1+z)\rho}a(n)^{- i\nu-(1+z)\rho}\overline{u(\nu)(k(n))}dn\mu(\nu)d\nu\]
defines an element of \(L^{1}(N_{1}).\) So Fubini's theorem implies that \(\tau_{z}\in L^{1}(\mathbb{R})\) if \(\operatorname{Re}z>0\). If \(\operatorname{Re}z=0\) then
\[\tau_{0}(t)=\int_{\ker\chi}\int_{\mathfrak{a}^{*}}\left\langle\pi_{i\nu}( \exp tx_{o}ng)1,\pi_{i\nu}(f)1\right\rangle\mu(\nu)d\nu dn=\int_{\ker\chi} \overline{f(\exp tx_{o}ng)}dn.\]
Since \(f\), in particular, is in \(\mathcal{C}(G/K)\) the function \(n\mapsto\overline{f(ng)}\) on \(N\) is in \(L^{1}(N)\). Thus if \(z=0\) then \(\tau_{z}\in L^{1}(\mathbb{R})\). For \(s,t\in\mathbb{R}\) and \(n\in\ker\chi\) set \(\chi_{s}(\exp tx_{o}n)=e^{its}\). Then \(\chi_{1}=\chi\) and \(\chi_{s}\) is generic for \(s\neq 0\). Note that if \(\operatorname{Re}z>0\) and \(s\neq 0\) then
\[\mathcal{F}(\tau_{z})(s)=\int_{\mathfrak{a}^{*}}J_{\chi_{s}^{-1},i\nu-z\rho}( \pi_{i\nu-z\rho}(g)1)\overline{J_{\chi_{s}^{-1},i\nu-\bar{z}\rho}(\pi_{i\nu}( f)1)}\mu(\nu)d\nu.\]
Also if \(z=0\) and \(s\neq 0\) then
\[\mathcal{F}(\tau_{0})(s)=\overline{\int_{N}\chi_{s}(n)^{-1}f(ng)dn}.\]
Define for \(s\neq 0\)
\[\sigma_{z}(s)=\int_{\mathfrak{a}^{*}}J_{\chi_{s}^{-1},i\nu-z\rho}(\pi_{i\nu-z \rho}(g)1)\overline{J_{\chi_{s}^{-1},i\nu-\bar{z}\rho}(\pi_{i\nu}(f)1)}\mu(\nu )d\nu\]
then Proposition 7 implies that \(\sigma_{z}\) is continuous in \(z\) for \(s\neq 0\) and \(\operatorname{Re}z\geq 0\). Furthermore, if \(\operatorname{Re}z>0\) and \(s\neq 0\), then \(\sigma_{z}(s)=\mathcal{F}(\tau_{z})(s)\). We therefore have if \(\phi\) has support in \(\mathbb{R}-\{0\}\),
\[\lim_{\begin{array}{c}\operatorname{Re}z>0\\ z\to 0\end{array}}\mathcal{F}(\tau_{z})(\phi)=\mathcal{F}(\tau_{0})(\phi)=\int_{-\infty}^{\infty}\int_{N}\chi_{s}(n)f(ng)dn\,\phi(s)ds.\]
Also
\[\lim_{\begin{array}{c}\operatorname{Re}z>0\\ z\to 0\end{array}}\ \int_{-\infty}^{\infty}\int_{\mathfrak{a}^{*}}J_{\chi_{s}^{-1},i \nu-z\rho}(\pi_{i\nu-z\rho}(g)1)\overline{J_{\chi_{s}^{-1},i\nu-\bar{z}\rho}( \pi_{i\nu}(f)1)}\mu(\nu)d\nu\phi(s)ds=\int_{-\infty}^{\infty}\sigma_{0}(s) \phi(s)ds.\]
This implies the theorem.
## 6 The spherical Whittaker Inversion Theorem.
If \(\chi\) is a generic character of \(N\), set \(J_{\chi,\nu}\) equal to the corresponding Jacquet integral. We will need the following
**Lemma 9**: _Assume \(\chi\) is generic. Let \(\psi\in\mathcal{C}(N_{o}\backslash G;\chi)\) and let \(\varphi\in C_{c}^{\infty}(N)\) be such that_
\[\int_{N_{o}}\chi(n)^{-1}\varphi(n)dn=1.\]
_Set \(f(nak)=\varphi(n)\psi(ak)\) for \(n\in N_{o},a\in A_{o},k\in K\). Then \(f\in{\cal C}(G)\) and if \(u\in C^{\infty}(M\backslash K)\) then_
\[J_{\chi^{-1},i\nu}(\pi_{i\nu}(f)u)=\int_{N\backslash G}J_{\chi^{-1},i\nu}(\pi_{ i\nu}(g)u)\psi(g)dg\]
**Proof.** Appendix 2, Corollary 20 proves that \(f\in{\cal C}(G)\). We calculate
\[\int_{N\backslash G}J_{\chi^{-1},i\nu}(\pi_{i\nu}(g)u)\psi(g)dg=\int_{A\times K}a^{-2\rho}J_{\chi^{-1},i\nu}(\pi_{i\nu}(ak)u)\psi(ak)dadk\]
\[=\int_{N}\chi(n)^{-1}\int_{A\times K}a^{-2\rho}J_{\chi^{-1},i\nu}(\pi_{i\nu}( ak)u)\varphi(n)\psi(ak)dadkdn\]
\[=\int_{N\times A\times K}a^{-2\rho}J_{\chi^{-1},i\nu}(\pi_{i\nu}(nak)u)\varphi(n)\psi(ak)dadkdn=\int_{G}J_{\chi^{-1},i\nu}(\pi_{i\nu}(g)u)f(g)dg.\]
For each \(\nu\in{\mathfrak{a}}^{*}\), \(J_{\chi^{-1},i\nu}\) is a continuous functional on \(C^{\infty}(M\backslash K)\). Also since \(J_{\chi,i\nu}\) is tame in the sense of Theorem 15.2.5 in [RRGII], for each \(\nu\) in \({\mathfrak{a}}^{*}\) there exist \(C\) and \(d\) such that if \(u\in C^{\infty}(M\backslash K)\) then
\[|J_{\chi^{-1},i\nu}(\pi_{i\nu}(g)u)|\leq C\,|g|^{-\frac{1}{2}}\,(1+\log\|g\|)^{d}.\]
This implies that
\[\int_{G}J_{\chi^{-1},i\nu}(\pi_{i\nu}(g)u)f(g)dg=J_{\chi^{-1},i\nu}(\int_{G}f (g)\pi_{i\nu}(g)dg)=J_{\chi^{-1},i\nu}(\pi_{i\nu}(f)u).\]
**Theorem 10**: _Let \(\psi\in{\cal C}(N\backslash G/K;\chi)\). Set_
\[W_{\chi}(\nu,\psi)=\int_{N\backslash G}\overline{J_{i\nu}(\pi_{i\nu}(g)1)} \psi(g)dg\]
_then_
\[\psi(g)=\int_{\mathfrak{a}^{*}}W_{\chi}(\nu,\psi)J_{i\nu}(\pi_{i\nu}(g)1)\mu(\nu)d\nu.\]
**Proof.** Let \(f\) be as in the preceding lemma for \(\psi\). We observe that
\[\overline{J_{i\nu}(\pi_{i\nu}(g)1)}=J_{\chi^{-1},-i\nu}(\pi_{-i\nu}(g)1).\]
Thus the preceding lemma implies that
\[W_{\chi}(\nu,\psi)=J_{\chi^{-1},-i\nu}(\pi_{-i\nu}(f)1).\]
The spherical Plancherel Theorem implies that
\[f(g)=\int_{{\mathfrak{a}}^{*}}\left\langle\pi_{i\nu}(L_{g^{-1}}f)1,1\right\rangle\mu(\nu)d\nu=\int_{{\mathfrak{a}}^{*}}\left\langle\pi_{i\nu}(f)1,\pi_{i\nu}(g)1\right\rangle\mu(\nu)d\nu\]
so
\[\bar{f}(g)=\overline{f(g)}=\int_{{\mathfrak{a}}^{*}}\left\langle\pi_{i\nu}(g)1,\pi_{i\nu}(f)1\right\rangle\mu(\nu)d\nu.\]
Theorem 8 implies that
\[\bar{f}_{\chi^{-1}}(g)=\int_{{\mathfrak{a}}^{*}}J_{\chi^{-1},i\nu}(\pi_{i\nu}(g)1)\overline{J_{\chi^{-1},i\nu}(\pi_{i\nu}(f)1)}\mu(\nu)d\nu.\]
Thus
\[\psi(g)=f_{\chi}(g)=\int_{{\mathfrak{a}}^{*}}\overline{J_{\chi^{-1},i\nu}(\pi_{i\nu}(g)1)}J_{\chi^{-1},i\nu}(\pi_{i\nu}(f)1)\mu(\nu)d\nu.\]
The first part of this proof implies that
\[J_{\chi^{-1},i\nu}(\pi_{i\nu}(f)1)=W_{\chi}(-\nu,\psi).\]
Also,
\[\overline{J_{\chi^{-1},i\nu}(\pi_{i\nu}(g)1)}=J_{-i\nu}(\pi_{-i\nu}(g)1).\]
Hence
\[\psi(g)=\int_{{\mathfrak{a}}^{*}}W_{\chi}(-\nu,\psi)J_{-i\nu}(\pi_{-i\nu}(g)1)\mu(\nu)d\nu.\]
This proves the theorem since \(\mu(\nu)=\mu(-\nu)\).
**Corollary 11**: _With the notation as in the previous theorem, if \(f,h\in{\cal C}(N\backslash G/K;\chi)\) then_
\[\int_{N\backslash G}f(g)\overline{h(g)}dg=\int_{{\mathfrak{a}}^{*}}W_{\chi}(\nu,f)\overline{W_{\chi}(\nu,h)}\mu(\nu)d\nu.\]
**Proof.** We calculate. The previous theorem implies that
\[\int_{N\backslash G}f(g)\overline{h(g)}dg=\int_{N\backslash G}\int_{{\mathfrak{a}}^{*}}W_{\chi}(\nu,f)J_{\chi,i\nu}(\pi_{i\nu}(g)1)\mu(\nu)d\nu\,\overline{h(g)}dg\]

\[=\int_{{\mathfrak{a}}^{*}}W_{\chi}(\nu,f)\overline{\int_{N\backslash G}h(g)\overline{J_{\chi,i\nu}(\pi_{i\nu}(g)1)}dg}\,\mu(\nu)d\nu=\int_{{\mathfrak{a}}^{*}}W_{\chi}(\nu,f)\overline{W_{\chi}(\nu,h)}\mu(\nu)d\nu.\]
## 7 The non-periodic Toda Lattice
The original non-periodic Toda Lattice is the Hamiltonian system with Hamiltonian
\[H(p,q)=\frac{1}{2}\sum_{i=1}^{n}p_{i}^{2}+\sum_{i=1}^{n-1}c_{i}^{2}e^{2(q_{i}-q_{ i+1})}\]
\(c_{i}\in\mathbb{R}-\{0\}\). Using the quantization rules (here Planck's constant is normalized) \(p_{j}\to i\frac{\partial}{\partial q_{j}}\) and \(f(q)\to m_{f(q)}\), with \(m_{f(q)}\) the operator on \(C^{\infty}(\mathbb{R}^{n})\) given by multiplication by \(f(q)\), the quantum Hamiltonian is
\[\mathcal{H=}-\frac{1}{2}\Delta_{q}+\sum_{i=1}^{n-1}c_{i}^{2}e^{2(q_{i}-q_{i+1})}\]
where
\[\Delta_{q}=\sum_{j=1}^{n}\frac{\partial^{2}}{\partial q_{j}^{2}}.\]
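As an illustration (the case \(n=2\), added for orientation): with \(Q=q_{1}+q_{2}\), \(q=q_{1}-q_{2}\) and conjugate momenta \(P=\frac{1}{2}(p_{1}+p_{2})\), \(p=\frac{1}{2}(p_{1}-p_{2})\), the classical Hamiltonian becomes \(H=P^{2}+p^{2}+c_{1}^{2}e^{2q}\), and correspondingly

\[\mathcal{H}=-\frac{\partial^{2}}{\partial Q^{2}}-\frac{\partial^{2}}{\partial q^{2}}+c_{1}^{2}e^{2q},\]

a free particle in the center of mass \(Q\) together with a one-dimensional Liouville-type operator in the relative coordinate \(q\).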
Consider the algebra, \(\mathcal{A}\), of linear differential operators on \(\mathbb{R}^{n}\) with coefficients in the algebra generated by \(e^{(q_{i}-q_{i+1})},i=1,...,n-1\). We take as a domain for this algebra the space, \(\mathcal{T}\), of \(f\in C^{\infty}(\mathbb{R}^{n})\) such that
\[t_{m,d,x}(f)=\sup_{q\in\mathbb{R}^{n}}e^{\sum_{i=1}^{n-1}m_{i}(q_{i}-q_{i+1})}(1+\|q\|)^{d}|xf(q)|<\infty\]
with \(m=(m_{1},...,m_{n-1})\), \(m_{i},d\in\mathbb{Z}_{\geq 0}\), and \(x\) a constant coefficient differential operator on \(\mathbb{R}^{n}\), endowed with the topology induced by these semi-norms. This space is invariant under \(\mathcal{A}\) and \(\mathcal{A}\) acts continuously on it. In [GW] Toda 1 Section 2 it was shown that the centralizer of \(\mathcal{H}\) in \(\mathcal{A}\) is an algebra generated over \(\mathbb{C}\) by \(n\) elements \(D_{1}=\sum_{j=1}^{n}\frac{\partial}{\partial q_{j}},D_{2}=\mathcal{H},...,D_{n}\) whose symbols \(\sigma(D_{1}),...,\sigma(D_{n})\) are algebraically independent generators of the \(S_{n}\)-invariant constant coefficient differential operators. A solution to the quantum Toda lattice is thus a family \(K_{\nu}(q)\), \(\nu\in(\mathbb{R}^{n})^{*}\), such that
\[\int_{\mathbb{R}^{n}}|K_{\nu}(q)f(q)|dq<\infty,f\in\mathcal{T}\]
and
\[D_{j}K_{\nu}=\sigma_{j}(\nu)K_{\nu},j=1,...,n.\]
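For instance (a check added for orientation), for \(n=2\) we have \(D_{1}=\frac{\partial}{\partial q_{1}}+\frac{\partial}{\partial q_{2}}\) and \(D_{2}=\mathcal{H}\), and \(D_{1}\) commutes with \(\mathcal{H}\) simply because the potential depends only on \(q_{1}-q_{2}\):

\[[D_{1},\mathcal{H}]=c_{1}^{2}\Big(\frac{\partial}{\partial q_{1}}+\frac{\partial}{\partial q_{2}}\Big)\big(e^{2(q_{1}-q_{2})}\big)=c_{1}^{2}(2-2)e^{2(q_{1}-q_{2})}=0.\]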
One has the following inversion formula: There exists a non-negative function \(\gamma(\nu)\) on \(\left(\mathbb{R}^{n}\right)^{*}\) such that \(|\gamma(\nu)|\leq C(1+\|\nu\|)^{r}\) for \(\nu\in\left(\mathbb{R}^{n}\right)^{*}\) and such that if \(f\in\mathcal{T}\) and if
\[\mathcal{K}(f)(\nu)=\int_{\mathbb{R}^{n}}f(q)\overline{K_{\nu}(q)}dq\]
then
\[f(q)=\int_{\left(\mathbb{R}^{n}\right)^{*}}\mathcal{K}(f)(\nu)K_{\nu}(q)\gamma( \nu)d\nu.\]
and if \(f_{1},f_{2}\in\mathcal{T}\) then
\[\int_{\mathbb{R}^{n}}f_{1}(x)\overline{f_{2}(x)}dx=\int_{\left(\mathbb{R}^{n}\right)^{*}}\mathcal{K}(f_{1})(\nu)\overline{\mathcal{K}(f_{2})(\nu)}\gamma(\nu)d\nu\]
We now return to the situation of the preceding sections. Let \(G\) and the notation be as in Section 2 so we assume that, in particular, \(G\subset GL(n,\mathbb{R})\) for some \(n\). In particular, \(\mathfrak{g=}Lie(G)\). Let \(C\) be the Casimir operator corresponding to the invariant form
\[B(X,Y)=\text{tr}XY\]
for \(X,Y\) in \(\mathfrak{g}\). That is, if \(X_{1},...,X_{m}\) is a basis of \(\mathfrak{g}\) and \(Y_{1},...,Y_{m}\) are defined by \(B(X_{i},Y_{j})=\delta_{ij}\) then
\[C=\sum X_{i}Y_{i}\in U(\mathfrak{g}).\]
Let \(\theta,N,A,K,\mathfrak{n},\mathfrak{a},\mathfrak{k},\Phi^{+},\Delta\) be as before. Then \(\mathfrak{n=}\sum_{\alpha\in\Phi^{+}}\mathfrak{n}_{\alpha}\) and let \(X_{\alpha,j}\),\(j=1,...,m_{\alpha}\) be an orthonormal basis of \(\mathfrak{n}_{\alpha}\) relative to the inner product
\[\langle X,Y\rangle=-B(X,\theta Y).\]
We define the generalized quantum non-periodic Toda Lattice associated with \(G\) to be the operator on \(C^{\infty}(A)\) given by
\[L_{c}=-\frac{\sum h_{i}^{2}}{2}+\sum_{\alpha\in\Delta}c_{\alpha}^{2}a^{2\alpha}\]
with \(c_{\alpha}\in\mathbb{R}-\{0\}\). For \(G=GL(n,\mathbb{R})\) take \(A\) to be the group of diagonal \(n\times n\) matrices with positive entries and \(N\) the group
of upper triangular \(n\times n\) matrices with ones on the main diagonal. Then identifying \(\mathfrak{a}\) with \(\mathbb{R}^{n}\) via the map
\[(x_{1},...,x_{n})\mapsto\left[\begin{array}{cccc}x_{1}&0&\cdots&0\\ 0&x_{2}&\cdots&0\\ 0&0&\ddots&0\\ 0&0&\cdots&x_{n}\end{array}\right]\]
we have \(\Delta=\{\alpha_{1},...,\alpha_{n-1}\}\) with \(\alpha_{i}(x)=x_{i}-x_{i+1}\). Thus
\[L_{c}=-\frac{1}{2}\sum_{i=1}^{n}\frac{\partial^{2}}{\partial x_{j}^{2}}+\sum_{ i=1}^{n-1}c_{\alpha_{i}}^{2}e^{2(x_{i}-x_{i+1})}\]
Set \(K_{\nu}(x)=e^{-\rho(x)}J_{i\nu}(\pi_{i\nu}(\exp x)1)\) for \(x\in\mathfrak{a}.\) Let \((...,...)\) also denote the complex bilinear extension of the dual form, \((...,...)\) of \(B_{|\mathfrak{a}}\) to \(\mathfrak{a}_{\mathbb{C}}^{*}.\)
**Proposition 12**: _Let \(\chi\) be a generic character of \(N\). Set for \(\alpha\in\Delta\)_
\[c_{\alpha}^{2}=-\sum_{j=1}^{m_{\alpha}}\left(d\chi(X_{\alpha,j})\right)^{2}>0\]
_If \(\nu\in\mathfrak{a}^{*}\) then_
\[L_{c}a^{-\rho}K_{i\nu}(a)=\frac{\|\nu\|^{2}}{2}a^{-\rho}K_{i\nu}(a).\]
**Proof.** We have for \(\nu\in\mathfrak{a}_{\mathbb{C}}^{*}\)
\[CJ_{\nu}(\pi_{\nu}(g)1)=J_{\nu}(\pi_{\nu}(g)d\pi_{\nu}(C)1)=((\nu,\nu)-(\rho, \rho))J_{\nu}(\pi_{\nu}(g)1).\]
The proposition now follows directly from the calculations in Appendix 3.
We note that if \(\mathcal{W}(\mathfrak{a})\) is as in Appendix 2 and if \(\phi\in\mathcal{W}(\mathfrak{a})\) then the function \(a^{-\rho}\phi\) is in the space \(\mathcal{T}(\mathfrak{a})\) defined as follows: Define for \(u\in C^{\infty}(\mathfrak{a})\) the semi-norm
\[t_{d,m,x}(u)=\sup_{h\in\mathfrak{a}}e^{\sum_{\alpha\in\Delta}m_{\alpha}\alpha(h)}(1+\|h\|)^{d}\left|xu(h)\right|\]
with \(m=\{m_{\alpha}|\alpha\in\Delta\},m_{\alpha},d\in\mathbb{Z}_{\geq 0}\) and \(x\) is a constant coefficient differential operator on \(\mathfrak{a}\). \(\mathcal{T}(\mathfrak{a})\) is the space of all \(u\) in \(C^{\infty}(\mathfrak{a})\) such that all of the
\(t_{d,m,x}(u)<\infty\) endowed with the topology induced by these semi-norms. Then the map
\[\mu\mapsto(h\mapsto e^{-\rho(h)}\mu(\exp h))\]
defines a topological isomorphism of \({\cal W}({\mathfrak{a}})\) onto \({\cal T}({\mathfrak{a}})\) with inverse \(u\mapsto(a\mapsto e^{\rho}u(\log a))\). The main results of the previous section can be stated in the following form.
**Theorem 13**: _If \(u\in{\cal T}({\mathfrak{a}}),\nu\in{\mathfrak{a}}^{*}\) set_
\[{\cal K}(u)(\nu)=\int_{{\mathfrak{a}}}u(h)\overline{K_{\nu}(h)}dh\]
_then_
\[u(h)=\int_{{\mathfrak{a}}^{*}}K_{\nu}(h){\cal K}(u)(\nu)\mu(\nu)d\nu.\]
_Furthermore, if \(u,w\in{\cal T}({\mathfrak{a}})\) then_
\[\int_{{\mathfrak{a}}}u(h)\overline{w(h)}dh=\int_{{\mathfrak{a}}^{*}}{\cal K} (u)(\nu)\overline{{\cal K}(w)(\nu)}\mu(\nu)d\nu.\]
**Proof.** We note that if \(u\in{\cal T}({\mathfrak{a}})\) then \(u(h)=e^{-\rho(h)}\phi(\exp h)\) with \(\phi\in{\cal W}={\cal C}(N\backslash G/K;\chi)_{|A}.\) Thus if \(\psi(nak)=\chi(n)\phi(a)\) then
\[{\cal K}(u)(\nu)=\int_{A}a^{-\rho}\phi(a)a^{-\rho}\overline{J_{i\nu}(\pi_{i \nu}(a)1)}da=\int_{A}a^{-2\rho}\phi(a)\overline{J_{i\nu}(\pi_{i\nu}(a)1)}da\]
\[=\int_{N\backslash G}\psi(g)\overline{J_{i\nu}(\pi_{i\nu}(g)1)}dg={\cal W}_{ \chi}(\nu,\psi).\]
Theorem 10 says that
\[\psi(g)=\int_{{\mathfrak{a}}^{*}}{\cal W}_{\chi}(\nu,\psi)J_{i\nu}(\pi_{i\nu}( g)1)\mu(\nu)d\nu.\]
So
\[u(h)=e^{-\rho(h)}\psi(\exp(h))=e^{-\rho(h)}\int_{{\mathfrak{a}}^{*}}{\cal K} (u)(\nu)J_{i\nu}(\pi_{i\nu}(\exp h)1)\mu(\nu)d\nu=\int_{{\mathfrak{a}}^{*}}{ \cal K}(u)(\nu)K_{\nu}(h)\mu(\nu)d\nu.\]
The above formulas lead to the second assertion of the theorem.
The rest of this section involves recalling several results from [GW]. Since \(\mathfrak{g}=\mathfrak{n}\oplus\mathfrak{a}\oplus\mathfrak{k}\) the Poincaré-Birkhoff-Witt Theorem implies that
\[U(\mathfrak{g})=U(\mathfrak{n}\oplus\mathfrak{a})\oplus U(\mathfrak{g}) \mathfrak{k}.\]
Let \(p\) denote the projection of \(U(\mathfrak{g})\) onto \(U(\mathfrak{n}\oplus\mathfrak{a})\) corresponding to this direct sum decomposition. If \(x\in U(\mathfrak{g})^{\mathfrak{k}}\) (the centralizer of \(\mathfrak{k}\) in \(U(\mathfrak{g})\)) and if \(y\in U(\mathfrak{g})\) then \(y=p(y)+\sum u_{i}Y_{i}\) and \(x=p(x)+\sum w_{i}Y_{i}\),with \(Y_{i}\in\mathfrak{k},u_{i},w_{i}\in U(\mathfrak{g})\). Thus
\[yx=p(y)x+\sum u_{i}Y_{i}x=p(y)x+\sum u_{i}xY_{i}\]
\[=p(y)p(x)+p(y)\sum w_{i}Y_{i}+\sum u_{i}xY_{i}.\]
Thus \(p(yx)=p(y)p(x)\). Consider the two sided ideal in \(U(\mathfrak{n}\oplus\mathfrak{a})\), \(\mathcal{I}=U(\mathfrak{n}\oplus\mathfrak{a})[\mathfrak{n},\mathfrak{n}]\). Then
\[U(\mathfrak{n}\oplus\mathfrak{a})/\mathcal{I}\cong U(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}]\oplus\mathfrak{a}).\]
Let \(\mu:U(\mathfrak{n}\oplus\mathfrak{a})\to U(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}]\oplus\mathfrak{a})\) be the corresponding surjection and \(q:U(\mathfrak{g})^{\mathfrak{k}}\to U(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}]\oplus\mathfrak{a})\) be the corresponding homomorphism, that is \(q=\mu\circ p\). If \(h\in\mathfrak{a}\) then define \(\partial_{h}f(x)=\frac{d}{dt}f(x+th)_{|t=0}\). If \(\lambda\in\mathfrak{a}^{*}\) define \(m_{\lambda}f=e^{\lambda}f\). Then \([\partial_{h},m_{\lambda}]=\lambda(h)m_{\lambda}\). Let \(\mathcal{A}\) be the algebra of operators on \(C^{\infty}(\mathfrak{a})\) generated by \(\partial_{h}\) and \(m_{\alpha}\) for \(h\in\mathfrak{a}\) and \(\alpha\in\Delta\). We define \(\tau(h)=\partial_{h}\) and \(\tau(x)=d\chi(x)m_{\alpha}\) if \(x\in(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}])_{\alpha}\). Then \(\tau\) extends to a homomorphism of \(U(\mathfrak{n}/[\mathfrak{n},\mathfrak{n}]\oplus\mathfrak{a})\) onto \(\mathcal{A}\). In [GW] we proved
**Theorem 14**: _The centralizer of \(\tau\circ q(C)\) in \(\mathcal{A}\) is \(\tau\circ q(U(\mathfrak{g})^{\mathfrak{k}})\)._
This result implies that the centralizer of \(L_{c}\) in \(\mathcal{A}\) is an algebra generated by \(\dim A\) elements with algebraically independent symbols. The results of [GW] imply that if \(D\in\mathcal{A}\) and \([D,L_{c}]=0\) then \(DK_{\nu}=\sigma(D,i\nu)K_{\nu}\). Thus in particular all of the assertions for the quantum non-periodic Toda lattice have been proved.
## 8 Appendix 1: Some inequalities
The purpose of this appendix is to prove some estimates that will be used in the body of the paper. The notation is as in Section 2.
**Lemma 15**: _Let \(f\in{\cal C}(G/K)\) then for each \(l\geq 0\) and \(D\) a constant coefficient differential operator on \({\mathfrak{a}}^{*}\) there exists \(B_{D,l}\) such that_
\[\left|D(\pi_{i\nu}(f)1)(k)\right|\leq B_{D,l}(1+\|\nu\|)^{-l},\nu\in{\mathfrak{a}}^{*},k\in K.\]
**Proof.** By definition
\[\left(\pi_{i\nu}(f)1\right)(k)=\int_{G}f(g)a(kg)^{i\nu-\rho}dg=\int_{G}f(k^{-1 }g)a(g)^{i\nu-\rho}dg.\]
Up to normalization of measures one has the standard integration formula (c.f. [W2] 7.7.4)
\[\int_{G}\varphi(g)dg=\int_{\bar{N}\times A\times K}a^{2\rho}\varphi(\bar{n}ak) d\bar{n}dadk.\]
Thus
\[\left(\pi_{i\nu}(f)1\right)(k)=\int_{\bar{N}\times A}a^{2\rho}f(k^{-1}\bar{n} a)a^{i\nu-\rho}d\bar{n}da=\int_{\bar{N}\times A}a^{\rho}f(k^{-1}\bar{n}a)a^{i\nu}d \bar{n}da.\]
The map \(\varphi\mapsto(h\mapsto e^{\rho(h)}\int_{\bar{N}}\varphi(\bar{n}\exp h)d\bar{n}=\varphi^{\bar{P}})\) is a continuous map of \({\cal C}(G/K)\) to \({\cal S}({\mathfrak{a}})\) (c.f. Theorem 7.2.1 [RRGI]). The map \(k\mapsto L_{k}f\) is a continuous map of \(K\) to \({\cal C}(G/K)\) and the Fourier transform, \({\cal F}\), is a continuous map from \({\cal S}({\mathfrak{a}})\) to \({\cal S}({\mathfrak{a}}^{*})\). So if \(q\) is a continuous semi-norm on \({\cal S}({\mathfrak{a}}^{*})\) then \(q({\cal F}((R_{k}f)^{\bar{P}}))\leq B_{q}.\) Thus if \(q_{D,l}(u)=\sup_{\nu\in{\mathfrak{a}}^{*}}(1+\|\nu\|)^{l}\left|Du(\nu)\right|\) for \(l\geq 0\) and \(D\) a constant coefficient differential operator on \({\mathfrak{a}}^{*}\) then since \(K\) is compact
\[\max_{k\in K}q_{D,l}({\cal F}((R_{k}f)^{\bar{P}}))\leq B_{D,l}.\]
This implies that
\[\left|D{\cal F}((R_{k}f)^{\bar{P}})(\nu)\right|\leq B_{D,l}(1+\|\nu\|)^{-l}.\]
Unraveling the above we have
\[\left(\pi_{i\nu}(f)1\right)(k)={\cal F}((R_{k}f)^{\bar{P}})(-\nu)\]
so the lemma is proved.
**Lemma 16**: _If \(x,g\in G\) then \(a(xg)^{-\rho}\leq|g|^{\frac{1}{2}}\,a(x)^{-\rho}.\)_
**Proof.** Let (as in section 2) \(E=\wedge^{m}\mathfrak{g}\) (\(m=\dim\mathfrak{n}\)) and let \(u_{o}\in\wedge^{m}\bar{\mathfrak{n}}\) be a unit vector. If \(g\in G\) with \(g=\bar{n}a(g)k\) with \(\bar{n}\in\overline{N}\) and \(k\in K\) then \(\|\wedge^{m}g^{-1}u_{o}\|=\|\wedge^{m}k^{-1}\wedge^{m}a(g)^{-1}\wedge^{m}\bar{ n}^{-1}u_{o}\|=a(g)^{2\rho}\). This implies that if \(x,g\in G\) then
\[a(x)^{2\rho}=\left\|\wedge^{m}g\wedge^{m}g^{-1}\wedge^{m}x^{-1}u_{o}\right\|\leq|g|\,a(xg)^{2\rho}\]
hence
\[a(xg)^{-\rho}\leq|g|^{\frac{1}{2}}\,a(x)^{-\rho}.\]
Let \(x_{o}\in\mathfrak{n}\) be orthogonal to \([\mathfrak{n},\mathfrak{n}]\) and such that \(d\chi(x_{o})=i\).
**Proposition 17**: _Let \(f\in\mathcal{C}(G/K)\) and if \(\nu\in\mathfrak{a}^{*}\) then set \(u(\nu)=\pi_{i\nu}(f)1\in C^{\infty}(M\backslash K).\) If \(\omega\) is a compact subset of \(G\) then there exist constants \(M,D,C_{\omega},d<\infty\) such that if \(g\in\omega\) then_
\[\left|\int_{\ker\,\chi}\int_{\mathfrak{a}^{*}}\int_{N}1_{i\nu-(1+z)\rho}(n_{1}\exp(tx_{o})ng)\overline{u(\nu)_{i\nu-(1+\bar{z})\rho}(n_{1})}dn_{1}\mu(\nu)d\nu dn\right|\leq C_{\omega}^{1+\operatorname{Re}z}DM(1+|t|)^{d}.\]
**Proof.** We put the absolute values inside the integration. We are estimating
\[I=\int_{\ker\,\chi}\int_{\mathfrak{a}^{*}}\int_{N}a(n_{1}n(\exp tx_{o})g)^{-( 1+\operatorname{Re}z)\rho}a(n_{1})^{-(1+\operatorname{Re}z)\rho}\left|u(\nu) (k(n_{1}))\right|dn_{1}\mu(\nu)d\nu dn.\]
We note that
\[\mu(\nu)\leq B(1+\|\nu\|)^{r}\]
for some \(r\) and all \(\nu\in\mathfrak{a}^{*}\). In Lemma 15 we showed that there exists a constant \(L\) such that
\[|u(\nu)|\leq L_{m}(1+\|\nu\|)^{-r-m}\]
with \(m\) arbitrary; we take \(m>\dim\mathfrak{a}\). Thus
\[I\leq M\int_{\ker\,\chi}\int_{N}a(n_{1}n(\exp tx_{o})g)^{-(1+\operatorname{Re}z)\rho}a(n_{1})^{-(1+\operatorname{Re}z)\rho}dn_{1}dn\]
with
\[M=BL_{m}\int_{\mathfrak{a}^{*}}(1+\|\nu\|)^{-m}d\nu<\infty.\]
Also, Lemma 16 implies that
\[a(n_{1}n(\exp tx_{o})g)^{-(1+\operatorname{Re}z)\rho}\leq|g|^{\frac{1+ \operatorname{Re}z}{2}}\,a(n_{1}n(\exp tx_{o}))^{-(1+\operatorname{Re}z)\rho}\leq\]
\[|g|^{\frac{1+\operatorname{Re}z}{2}}\,a(n_{1}n\exp tx_{o})^{-\rho}\leq\left|\exp tx_{o}\right|^{\frac{1}{2}}\,|g|^{\frac{1+\operatorname{Re}z}{2}}\,a(n_{1}n)^{-\rho}.\]
Since \(\wedge^{m}Ad(\exp tx_{o})\) is a polynomial in \(t\) there exists a constant \(Q<\infty\) such that \(\left|\exp tx_{o}\right|^{\frac{1}{2}}\leq Q(1+|t|)^{d}\). Setting \(C_{\omega}=\max_{g\in\omega}|g|^{\frac{1}{2}}\) we have
\[I\leq MC_{\omega}^{\frac{1+\operatorname{Re}z}{2}}Q(1+|t|)^{d}\int_{\ker\chi} \int_{N}a(n_{1}n)^{-(1+\operatorname{Re}z)\rho}a(n_{1})^{-(1+\operatorname{Re} z)\rho}dn_{1}dn\]
\[\leq MC_{\omega}^{\frac{1+\operatorname{Re}z}{2}}Q(1+|t|)^{d}\int_{\ker\chi} \int_{N}a(n_{1}n)^{-\rho}a(n_{1})^{-\rho}dn_{1}dn\]
\[=MC_{\omega}^{\frac{1+\operatorname{Re}z}{2}}\left(Q(1+|t|)^{d}\right)^{\frac {1+\operatorname{Re}z}{2}}\int_{\ker\chi}\Xi(n)dn.\]
As in the proof of Lemma 16 we have \(a(g)^{\rho}=\|\wedge^{m}Ad(g)^{-1}v_{o}\|^{\frac{1}{2}}\leq|g^{-1}|^{\frac{1}{ 2}}\). Also,
\[\Xi(x)\leq N\left|x\right|^{-\frac{1}{2}}(1+\log|x|)^{s}\]
for some \(s,N<\infty\) (c.f. Theorem 5.5.3 [RRGI]). Hence
\[\Xi(x)=\Xi(x^{-1})\leq N\left|x^{-1}\right|^{-\frac{1}{2}}(1+\log\left|x^{-1}\right|)^{s}\leq N_{\varepsilon}\left|x^{-1}\right|^{-\frac{1}{2}+\varepsilon}\leq N_{\varepsilon}a(x)^{-(1-2\varepsilon)\rho}\]
for each \(\varepsilon>0\). Theorem 4 says that if \(\varepsilon\) is sufficiently small
\[\int_{\ker\chi}a(n)^{-(1-2\varepsilon)\rho}dn<\infty.\]
Completing the proof.
## 9 Appendix 2: The restriction of \(\mathcal{C}(N\backslash G/K;\chi)\) to \(A\)
The purpose of this appendix is to give a complete description of the restriction in its title.
**Lemma 18**: _If \(m=(m_{1},...,m_{l}),m_{i},d\in\mathbb{Z}_{\geq 0}\) then there exists a continuous semi-norm, \(q_{m,d}\), on \(\mathcal{C}(N\backslash G;\chi)\) such that if \(f\in\mathcal{C}(N\backslash G;\chi)\) then \(|f(\exp hk)|\leq q_{m,d}(f)e^{\rho(h)}e^{-\sum m_{i}\alpha_{i}(h)}(1+\|h\|)^{-d}\) for \(h\in\mathfrak{a}\)._
**Proof.** Let \(F=\{i|m_{i}>0\}\) and let \(x_{1},...,x_{n}\) be a basis of \(\mathfrak{g}\). If \(X\in\mathfrak{g}\) and if \(k\in K\) then we can write \(Ad(k)X=\sum a_{i}(k,X)x_{i}\). Note that there exists \(C\) such that
\[|a_{i}(k,X)|\leq C\,\|X\|\]
for all \(k\in K\). Now let \(X_{i}\) be an element of the \(\alpha_{i}\) root space in \(\mathfrak{n}_{o}\) such that \(d\chi(X_{i})=z_{i}\neq 0\). Then
\[f(\exp(h)k)=z_{i}^{-1}L_{X_{i}}f(\exp(h)k)=z_{i}^{-1}\frac{d}{dt}_{|t=0}f(\exp (tX_{i})\exp(h)k)=\]
\[z_{i}^{-1}\frac{d}{dt}_{|t=0}f(\exp(h)\exp(tAd(\exp(-h))X_{i})k)=\]
\[z_{i}^{-1}\frac{d}{dt}_{|t=0}f(\exp(h)\exp(te^{-\alpha_{i}(h)}Ad(k^{-1})X_{i}) k)=\]
\[e^{-\alpha_{i}(h)}z_{i}^{-1}\sum a_{j}(k^{-1},X_{i})R_{x_{i}}f(\exp(h)k).\]
Iterating this argument yields an expression
\[f(\exp(h)k)=e^{-\sum_{i\in F}m_{i}\alpha_{i}(h)}Z(k)f(ak)\]
with \(Z\) a smooth function from \(K\) to \(L=U^{\sum_{i\in F}m_{i}}(\mathfrak{g})\) with \(U^{j}(\mathfrak{g})\) the standard filtration. If we choose a basis of \(L\), \(y_{1},...,y_{r}\) then we have
\[Z(k)=\sum b_{i}(k)y_{i}\]
with \(b_{i}\) continuous functions on \(K.\) Let \(C_{j}=\max_{k\in K}|b_{j}(k)|\). We have
\[|f(\exp(h)k)|\leq e^{-\sum_{i\in F}m_{i}\alpha_{i}(h)}\sum_{j}C_{j}\,|y_{j}f( \exp(h)k)|\leq\]
\[e^{-\sum_{i\in F}m_{i}\alpha_{i}(h)}(1+\|h\|)^{-d}e^{\rho_{o}(h)}\sum_{j}C_{j }q_{d,y_{j}}(f).\]
**Lemma 19**: _Let \(\psi\in C^{\infty}(G)\) be expressed in the form_
\[\psi(nak)=\sum_{i=1}^{r}\sum_{j=1}^{s}\phi_{i}(n)f_{ij}(a)\gamma_{j}(k)\]
_for \(n\in N,a\in A,k\in K\) with \(r,s<\infty\), \(\phi_{i}\in C_{c}^{\infty}(N),\gamma_{j}\in C^{\infty}(K)\) and \(f_{ij}\in C^{\infty}(A)\) such that if \(m=(m_{1},...,m_{l})\) with \(m_{i}\in\mathbb{Z}_{\geq 0},d\in\mathbb{Z}_{\geq 0}\) and \(x\in U(\mathfrak{a})\) then there exists \(C_{ij,m,d,x}\) such that_
\[|xf_{ij}(a)|\leq C_{ij,m,d,x}a^{\rho}a^{-m_{1}\alpha_{1}-...-m_{l}\alpha_{l}}(1+\|\log a\|)^{-d}.\]
_Then for \(d\in\mathbb{Z}_{\geq 0}\) there exists \(B_{d}\) such that_
\[|\psi(g)|\leq B_{d}\,|g|^{-\frac{1}{2}}\,(1+\log\|g\|)^{-d}.\]
_Also, if \(x,y\in U(\mathfrak{g})\) then \(L_{x}R_{y}\psi\) is of the same form._
**Proof.** To prove the inequality we may assume that \(r,s=1\) so
\[\psi(nak)=\phi(n)f(a)\gamma(k)\]
Let \(\omega\) be the support of \(\phi\). Let \(c_{1}\geq 1\) be such that
\[\max\{\|n\|\,,\|n^{-1}\|\}\leq c_{1},\min\{\|n\|\,,\|n^{-1}\|\}\geq c_{1}^{-1}, n\in\omega.\]
We note that \(|n|^{\frac{1}{2}}=\|n\|\) and \(|k|=\|k\|=1.\) We have for \(n\in N,a\in A.k\in K\)
\[|nak|=|na|\leq c_{1}\,|a|\]
and
\[|a|=\left|n^{-1}nak\right|\leq c_{1}\,|nak|\,.\]
By the same argument we have the same inequalities for \(\|...\|\). If \(h\in\mathfrak{a}\) let \(s\in W(A)\) be such that \(\alpha(sh)\geq 0,\alpha\in\Phi^{+}\). Then \(|a|^{\frac{1}{2}}=\exp(sh)^{\rho}=a^{s^{-1}\rho}\) so if \(s_{o}\) is the element of \(W(A)\) such that \(s_{o}\Phi^{+}=-\Phi^{+}\) then
\[|nak|^{-\frac{1}{2}}\leq c_{1}\,|a|^{-\frac{1}{2}}=c_{1}a^{s^{-1}s_{o}\rho}.\]
Also note that \(s^{-1}s_{o}\rho=\rho-\sum_{i=1}^{l}u_{i}\alpha_{i}\) with \(u_{i}\in\mathbb{Z}_{\geq 0}\). We also leave it to the reader to check that there exists \(c_{2}>0\) such that \(\|a\|\leq e^{c_{2}\|\log a\|}\) for \(a\in A\).
Thus \((1+\log\|a\|)\leq c_{3}(1+\|\log a\|).\) With these observations in place we have
\[|\psi(nak)|\leq\left(\sup_{n\in\omega,k\in K}|\phi(n)\gamma(k)|\right)|f(a)|=c_{4}|f(a)|\leq c_{4}C_{1,1,u,d}a^{\rho-\sum u_{i}\alpha_{i}}(1+\|\log a\|)^{-d}\]

\[\leq c_{4}C_{1,1,u,d}\,|a|^{-\frac{1}{2}}\,c_{3}^{d}(1+\log\|a\|)^{-d}.\]
Thus if we take the maxima of the \(C_{1,1,u,d}\) for the \(s\in W(A)\) and incorporate the constants that appear in the estimates at the beginning of the proof we have
\[|\psi(g)|\leq C_{d}\left|g\right|^{-\frac{1}{2}}(1+\log\|g\|)^{-d}\]
as asserted.
To complete the proof of the lemma we now consider the derivatives. It is enough to show that \(R_{X}\psi\) and \(L_{X}\psi\) are of the same form for \(X\in\mathfrak{g}\). We start with \(R_{X}\). Again it is enough to show that if \(r,s=1\) then \(R_{X}\psi\) is of the form indicated in the statement of the lemma. Let \(X_{1},...,X_{n}\) be a basis of \(\mathfrak{g}\) such that \(X_{1},...,X_{r}\in\mathfrak{n}\) with \([h,X_{i}]=\beta_{i}(h)X_{i}\), \(h\in\mathfrak{a}\), \(X_{r+1},...,X_{r+l}\in\mathfrak{a}\), \(X_{r+l+1},...,X_{n}\in Lie(K)\). Then
\[Ad(k)X=\sum c_{i}(k,X)X_{i}.\]
We have
\[R_{X}\psi(nak)=\frac{d}{dt_{|t=0}}\psi(nak\exp tX)=\frac{d}{dt_{|t=0}}\psi(na \exp tAd(k)Xk)\]
\[=\sum_{i=1}^{n}c_{i}(k,X)\frac{d}{dt_{|t=0}}\psi(na\exp tX_{i}k)=\sum_{i=1}^{ r}c_{i}(k,X)a^{\beta_{i}}\left(R_{X_{i}}\phi(n)\right)f(a)\gamma(k)\]
\[+\sum_{i=r+1}^{r+l}c_{i}(k,X)\phi(n)\left(R_{X_{i}}f(a)\right)\gamma(k)+\sum_{i=r+l+1}^{n}c_{i}(k,X)\phi(n)f(a)\left(L_{X_{i}}\gamma(k)\right)\]
which is easily seen to be of the right form.
To handle the left derivative we consider a different basis \(Y_{i}=X_{i}\), \(i=1,...,r+l,Y_{r+l+1},...,Y_{r+l+m}\) a basis of \(Lie(M)\) and \(Y_{r+l+m+i}=\theta X_{i},i=1,...,r\). Then
\[Ad(n^{-1})X=\sum d_{i}(n,X)Y_{i}\]
\[L_{X}\psi(nak)=-\sum_{i=1}^{n}d_{i}(n,X)\frac{d}{dt_{|t=0}}\psi(n\exp tY_{i}ak)=-\sum_{i=1}^{r}d_{i}(n,X)\left(R_{Y_{i}}\phi(n)\right)f(a)\gamma(k)\]
\[+\sum_{i=r+1}^{r+l}d_{i}(n,X)\phi(n)L_{Y_{i}}f(a)\gamma(k)+\sum_{i=r+l+1}^{r+l+m}d_{i}(n,X)\phi(n)f(a)L_{Y_{i}}\gamma(k)+\]
\[-\sum_{i=r+l+m+1}^{n}d_{i}(n,X)\frac{d}{dt_{|t=0}}\psi(n\exp tY_{i}ak).\]
All but the last term are of the right form so we will show that it is also. Set \(\mu=r+l+m\) then
\[\exp tY_{\mu+i}a=a\exp(tAd(a)^{-1}Y_{\mu+i})=a\exp(ta^{\beta_{i}}Y_{\mu+i}).\]
So we are looking at
\[-\sum_{i=1}^{r}d_{\mu+i}(n,X)a^{\beta_{i}}\frac{d}{dt_{|t=0}}\psi(na\exp tY_{ \mu+i}k)\]
Now \(Y_{\mu+i}+X_{i}=Z_{i}\in Lie(K).\) Thus \(Y_{m+i}=Z_{i}-X_{i}\). So
\[-\sum_{i=1}^{r}d_{\mu+i}(n,X)a^{\beta_{i}}\frac{d}{dt_{|t=0}}\psi(na\exp tY_{ \mu+i}k)=\sum_{i=1}^{r}d_{\mu+i}(n,X)a^{\beta_{i}}\frac{d}{dt_{|t=0}}\psi(na \exp tX_{i}k)+\]
\[-\sum_{i=1}^{r}d_{i}(n,X)a^{\beta_{i}}\frac{d}{dt_{|t=0}}\psi(na\exp tZ_{i}k)= \sum_{i=1}^{r}d_{\mu+i}(n,X)a^{2\beta_{i}}R_{X_{i}}\phi(n)f(a)\gamma(k)\]
\[-\sum_{i=1}^{r}d_{\mu+i}(n,X)\phi(n)f(a)L_{Z_{i}}\gamma(k).\]
The result is, finally, proved.
**Corollary 20**: _If \(f\in{\cal C}(N\backslash G;\chi)\) is right \(K\)-finite and if \(\phi\in C_{c}^{\infty}(N)\) then the function on \(G\) defined by_
\[\psi(nak)=\phi(n)f(a)\]
_is in \({\cal C}(G)\)._
**Proof.** Let \(V=\mbox{span}_{\mathbb{C}}\{R_{k}f|k\in K\}\). Then \(\dim V<\infty.\) Let \(v_{1},...,v_{m}\) be a basis of \(V\); then \(R_{k}f=\sum\gamma_{i}(k)v_{i}\). Thus
\[f(ak)=\sum v_{i}(a)\gamma_{i}(k).\]
Since, \(v_{i}\in{\cal C}(N\backslash G;\chi)\) we see that \(R_{x}v_{i|A}\) satisfies the inequalities for all \(x\in U({\mathfrak{a}})\). The result is now a direct consequence of the definition of \({\cal C}(G)\) and Lemma 19.
**Theorem 21**: _If \(\psi\in{\cal C}(G)\) then \(\psi_{\chi}(g)=\int_{N}\chi(n)^{-1}\psi(ng)dn\) defines an element of \({\cal C}(N\backslash G;\chi)\)._
**Proof.** The Harish-Chandra spherical function \(\Xi(g)=\langle\pi_{0}(g)1,1\rangle\) satisfies
\[\Xi(g)\geq|g|^{-\frac{1}{2}}\,.\]
Thus since
\[|\psi(na)|\leq C_{d}\left|na\right|^{-1/2}(1+\log\left\|na\right\|)^{-d}\]
for all \(d\geq 0\) we have
\[|\psi_{\chi}(ak)|\leq C_{d}\int_{N}\left|na\right|^{-\frac{1}{2}}(1+\log\left\| an\right\|)^{-d}dn\leq C_{d}\int_{N}\Xi(na)(1+\log\left\|na\right\|)^{-d}dn.\]
Also \(\left\|na\right\|\geq\left\|a\right\|\) and
\[\left\|n\right\|=\left\|naa^{-1}\right\|\leq\left\|a^{-1}\right\|\left\|an \right\|=\left\|a\right\|\left\|an\right\|\leq\left\|an\right\|^{2}.\]
Thus
\[|\psi_{\chi}(ak)|\leq C_{d+r}\int_{N}\Xi(na)(1+\log\left\|na\right\|)^{-d}dn(1 +\log\left\|a\right\|)^{-r}.\]
Now in the proof of Theorem 7.2.1 in [RRGI] we have seen that there exists \(d\) such that
\[a^{-\rho}\int_{N}\Xi(na)(1+\log\left\|na\right\|)^{-d}dn\leq B<\infty\]
Since \(\left(R_{x}\psi\right)_{\chi}=R_{x}(\psi_{\chi})\) the theorem now follows from the definition of \({\cal C}(N\backslash G;\chi)\).
If \(f\in C^{\infty}({\mathfrak{a}})\) define for \(m=(m_{1},...,m_{l}),d\), \(m_{i},d\in{\mathbb{Z}}_{\geq 0}\) and \(x\) a constant coefficient differential operator on \({\mathfrak{a}}\)
\[w_{m,d}(f)=\sup_{h\in{\mathfrak{a}}}e^{-\rho(h)}e^{\sum m_{i}\alpha_{i}(h)}( 1+\left\|h\right\|)^{d}\left|xf(h)\right|.\]
Then we set \({\cal W}({\mathfrak{a}})\) equal to the space of all \(f\in C^{\infty}({\mathfrak{a}})\) such that \(w_{m,d}(f)<\infty\) for all \(m=(m_{1},...,m_{l}),d,m_{i},d\in{\mathbb{Z}}_{\geq 0}\), endowed with the topology determined by these semi-norms.
**Theorem 22**: _Assume that \(\chi\) is generic. If \(\psi\in C^{\infty}(N\backslash G/K;\chi)\) set \(T(\psi)(h)=\psi(\exp h)\) for \(h\in{\mathfrak{a}}\). Then \(T({\cal C}(N\backslash G/K;\chi))\subset{\cal W}({\mathfrak{a}})\) and \(T\) defines a continuous isomorphism of \({\cal C}(N\backslash G/K;\chi)\) onto \({\cal W}({\mathfrak{a}})\)._
**Proof.** Lemma 18 implies that \(T({\cal C}(N\backslash G/K;\chi))\subset{\cal W}({\mathfrak{a}})\). Using the definitions of the semi-norms defining the topologies of \({\cal C}(N\backslash G/K;\chi)\) and \({\cal W}({\mathfrak{a}})\) the continuity of \(T\) follows. \(T\) is injective since \(G=NAK\). Let \(\phi\in C^{\infty}_{c}(N)\) be such that
\[\int_{N}\chi(n)^{-1}\phi(n)dn=1.\]
If \(f\in{\cal W}({\mathfrak{a}})\) set \(\psi(nak)=\phi(n)f(a).\) Lemma 19 implies that \(\psi\in{\cal C}(G/K)\). Then \(\psi_{\chi}\in{\cal C}(N\backslash G/K;\chi)\) and \(T(\psi_{\chi})=f\). Thus the map is surjective. The open mapping theorem now implies that \(T^{-1}\) is continuous.
## 10 Appendix 3. The Whittaker radial component of the Casimir operator
Let \(\chi\) be a unitary character of \(N\) and let \(f\in C^{\infty}(N\backslash G/K;\chi)\). Let \({\mathfrak{m}}\) be the centralizer of \({\mathfrak{a}}\) in \({\mathfrak{k}}\). In this appendix we will calculate the differential operator on \(A\) corresponding to the Casimir operator of \(G\) on \(C^{\infty}(N\backslash G/K;\chi)\).
Let \(X_{\alpha,i},i=1,...,m_{\alpha}\) be a basis of \({\mathfrak{n}}_{\alpha}\) such that \(B(X_{\alpha,i},\theta X_{\alpha,j})=-\delta_{ij}\). Note, before we start calculating, that this implies that
\[[X_{\alpha,i},\theta X_{\alpha,j}]=-\delta_{ij}h_{\alpha}+m_{ij}\]
with \(m_{ij}\in{\mathfrak{m}}\) and \(m_{ij}=0\) if \(i=j\).Let \(h_{1},...,h_{l}\) be an orthonormal basis of \({\mathfrak{a}}\) relative to \(\left<...,...\right>.\) Then the Casimir operator of \(G\) relative to \(B\) is
\[C=-\sum_{\alpha\in\Phi^{+}}\sum_{j=1}^{m_{\alpha}}(X_{\alpha,j}\theta X_{\alpha,j}+\theta X_{\alpha,j}X_{\alpha,j})+C_{\mathfrak{m}}+\sum_{i=1}^{l}h_{i}^{2}\]
where \(C_{\mathfrak{m}}\) is the Casimir operator corresponding to \(B_{|{\mathfrak{m}}}\). Let \(f\in C^{\infty}(N\backslash G/K;\chi)\); we wish to calculate \(R_{C}f(a)\) for \(a\in A\). We observe that \(X_{\alpha,j}+\theta X_{\alpha,j}\in Lie(K)\) and
\[(X_{\alpha,j}+\theta X_{\alpha,j})^{2}=X_{\alpha,j}^{2}+\theta X_{\alpha,j}^{2}+X_{\alpha,j}\theta X_{\alpha,j}+\theta X_{\alpha,j}X_{\alpha,j}.\]
Thus
\[R_{X_{\alpha,j}\theta X_{\alpha,j}+\theta X_{\alpha,j}X_{\alpha,j}}f=-R_{X_{\alpha,j}^{2}}f-R_{\theta X_{\alpha,j}^{2}}f.\]
Also,
\[R_{\theta X_{\alpha,j}}R_{\theta X_{\alpha,j}}f=-R_{\theta X_{\alpha,j}}R_{X_{\alpha,j}}f=-R_{h_{\alpha}}f-R_{X_{\alpha,j}}R_{\theta X_{\alpha,j}}f=-R_{h_{\alpha}}f+R_{X_{\alpha,j}}^{2}f,\]
\[\left(R_{X_{\alpha,j}^{2}}f\right)(a)=a^{2\alpha}\left(L_{X_{\alpha,j}^{2}}f\right)(a)=d\chi(X_{\alpha,j})^{2}a^{2\alpha}f(a)\]
and
\[R_{C_{\mathfrak{n}}}f=0.\]
The upshot is that
\[Cf(a)=2\sum_{\alpha\in\Phi^{+}}\sum_{j=1}^{m_{\alpha}}d\chi(X_{\alpha,j})^{2}a ^{2\alpha}f(a)-\sum_{\alpha\in\Phi^{+}}m_{\alpha}h_{\alpha}f(a)+\sum_{i=1}^{l }h_{i}^{2}f(a).\]
If \(\alpha\notin\Delta\) then \(d\chi(X_{\alpha,i})=0\) and \(\sum_{\alpha\in\Phi^{+}}m_{\alpha}h_{\alpha}=2h_{\rho}\), so
\[Cf(a)=2\sum_{\alpha\in\Delta}\sum_{j=1}^{m_{\alpha}}d\chi(X_{\alpha,j})^{2}a^ {2\alpha}f(a)-2h_{\rho.}f(a)+\sum_{i=1}^{l}h_{i}^{2}f(a).\]
Noting that \(d\chi=i\xi_{\chi}\) with \(\xi_{\chi}\in\mathfrak{n}^{*}\), if we set \(\xi_{\chi,\alpha}=\xi_{\chi|\mathfrak{n}_{\alpha}}\) then
\[\sum_{j=1}^{m_{\alpha}}d\chi(X_{\alpha,j})^{2}=-\left\|\xi_{\chi,\alpha}\right\| ^{2}.\]
We also have
\[a^{\rho}\sum_{i=1}^{l}h_{i}^{2}a^{-\rho}=(\rho,\rho)-2h_{\rho}+\sum_{i=1}^{l}h _{i}^{2}.\]
We have derived
**Lemma 23**: _if \(f\in C^{\infty}(N\backslash G/K;\chi)\) and \(a\in A\) then_
\[\left(C+(\rho,\rho)\right)f(a)=a^{\rho}(\sum_{i=1}^{l}h_{i}^{2}-2\sum_{\alpha \in\Delta}\left\|\xi_{\chi,\alpha}\right\|^{2}a^{2\alpha})a^{-\rho}f(a).\] |
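As a rank-one illustration of Lemma 23 (added for orientation; it is a direct specialization): take \(G=SL(2,\mathbb{R})\) and \(H=\mathrm{diag}(1,-1)\), so \(\alpha(H)=2\), \(\rho(H)=1\), \(h_{1}=H/\sqrt{2}\) is a unit vector spanning \(\mathfrak{a}\) and \((\rho,\rho)=\frac{1}{2}\). Writing \(F(t)=f(\exp tH)\), the lemma reads

\[\left(C+\tfrac{1}{2}\right)f(\exp tH)=e^{t}\Big(\tfrac{1}{2}\frac{d^{2}}{dt^{2}}-2\|\xi_{\chi,\alpha}\|^{2}e^{4t}\Big)\big(e^{-t}F(t)\big),\]

and the operator in parentheses is \(-2L_{c}\) of Section 7 with \(c_{\alpha}^{2}=\|\xi_{\chi,\alpha}\|^{2}\).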
2302.06403 | Sources of Richness and Ineffability for Phenomenally Conscious States | Conscious states (states that there is something it is like to be in) seem
both rich or full of detail, and ineffable or hard to fully describe or recall.
The problem of ineffability, in particular, is a longstanding issue in
philosophy that partly motivates the explanatory gap: the belief that
consciousness cannot be reduced to underlying physical processes. Here, we
provide an information theoretic dynamical systems perspective on the richness
and ineffability of consciousness. In our framework, the richness of conscious
experience corresponds to the amount of information in a conscious state and
ineffability corresponds to the amount of information lost at different stages
of processing. We describe how attractor dynamics in working memory would
induce impoverished recollections of our original experiences, how the discrete
symbolic nature of language is insufficient for describing the rich and
high-dimensional structure of experiences, and how similarity in the cognitive
function of two individuals relates to improved communicability of their
experiences to each other. While our model may not settle all questions
relating to the explanatory gap, it makes progress toward a fully physicalist
explanation of the richness and ineffability of conscious experience: two
important aspects that seem to be part of what makes qualitative character so
puzzling. | Xu Ji, Eric Elmoznino, George Deane, Axel Constant, Guillaume Dumas, Guillaume Lajoie, Jonathan Simon, Yoshua Bengio | 2023-02-13T14:41:04Z | http://arxiv.org/abs/2302.06403v5 | # Sources of Richness and Ineffability for Phenomenally Conscious States
###### Abstract
Conscious states--states that there is something it is like to be in--seem both rich or full of detail, and ineffable or hard to fully describe or recall. The problem of ineffability, in particular, is a longstanding issue in philosophy that partly motivates the explanatory gap: the belief that consciousness cannot be reduced to underlying physical processes. Here, we provide an information theoretic dynamical systems perspective on the richness and ineffability of consciousness. In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state and ineffability corresponds to the amount of information lost at different stages of processing. We describe how attractor dynamics in working memory would induce impoverished recollections of our original experiences, how the discrete symbolic nature of language is insufficient for describing the rich and high-dimensional structure of experiences, and how similarity in the cognitive function of two individuals relates to improved communicability of their experiences to each other. While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation of the richness and ineffability of conscious experience--two important aspects that seem to be part of what makes qualitative character so puzzling.
###### Contents
* 1 Introduction
* 2 Preliminaries: Computation through neural dynamics
* 2.1 Neural activation state space
* 2.2 Neural dynamics
* 2.3 State attractors
* 2.3.1 Attractors are mutually exclusive: contractive dynamics discretize the state
* 2.3.2 Emergent attractors in task-optimized networks
* 3 An information theoretic dynamical systems perspective on conscious experience
* 3.1 Motivating attractor dynamics as a model for conscious experience
* 3.1.1 Working memory
* 3.1.2 Stability and robustness of conscious states
* 3.2 Richness and ineffability
* 3.3 Intra-personal ineffability
* 3.3.1 Information loss from attractor dynamics
* 3.3.2 Information loss at verbal report
* 3.3.3 Hierarchical attractor dynamics
* 3.4 Inter-personal ineffability
* 3.4.1 A blank-slate listener
* 3.4.2 A typical listener
* 3.5 Phenomenal and access consciousness
* 3.5.1 Effability, accessibility, reportability
* 3.5.2 Existence and report of phenomenal experience
* 4 Conclusion
## 1 Introduction
Conscious states--states that there is something it is like to be in (Nagel, 1974)--present many apparent contradictions. On the one hand, every time we have a thought, look out at the world, or feel an emotion, we have a rich experience that seems impossible to fully describe. At the same time, conscious experiences are conceptualizable, with similar properties across individuals, and can often be communicated with a degree of fidelity.
This paper provides an information theoretic dynamical systems perspective on how and why consciousness may appear to us the way it does, namely as both _rich_ or full of detail, and _ineffable_ or hard to fully describe or recall--in other words, why it seems that an experience is "worth a thousand words". In addition, a dynamical systems model for consciousness offers an explanation for why much of the conscious content that _is_ reportable has a discrete nature that can be expressed with words. Our key contention is that these aspects of consciousness are implicated by a dynamical systems model of neural processing, in particular by "attractors": patterns of joint neural activity that remain relatively stable over short timescales and yield a discrete partition over neural states. Importantly, interpreting cognitive processing through the lenses of dynamical systems and information theory will give us the ability to reason about richness, ineffability, and communicability in general terms, without relying on implementation details of the neural processes that may give rise to consciousness. Broadly, the suggestion is that the rather abstract level of explanation afforded by information theory is the commensurate level of explanation for some key questions about richness and ineffability.
By "consciousness" we mean phenomenal consciousness, i.e. the felt or subjective quality of experience. A state is phenomenally conscious when, in the words of Nagel (1974), there is _something it is like_ to be in that state. Phenomenal consciousness is the form of consciousness that gives rise to what Joseph Levine calls the "explanatory gap" (Levine, 1993) and what David Chalmers calls the "hard problem of consciousness" (Chalmers, 1996): the problem of showing that phenomenal consciousness can be explained in terms of, or reduced to, underlying physical processes. The explanatory gap is one of the central problems in the philosophy of mind, and it relies heavily on the intuition that "physicalist theories leave out [phenomenal consciousness] in the epistemological sense, because they reveal our inability to explain qualitative character in terms of the physical properties of sensory states" (Levine, 1993).
Here, we address one aspect of this problem by developing a structural/mechanistic explanation of the richness and ineffability of conscious experience, one that is given entirely in terms of information processing in a dynamical system such as the brain. Our model assumes that conscious experiences are derived from neural processes according to known physical laws, and can therefore be understood using the standard methods of cognitive neuroscience. While our model may not settle all questions relating to the explanatory gap, it will make progress toward a fully physicalist explanation of the richness and ineffability of conscious experience--two important aspects that seem to be part of what
makes qualitative character so puzzling. Richness and ineffability figure in several important live debates about consciousness in the philosophical literature. Here we summarize two: the illusionism debate and the overflow debate.
Illusionists argue that consciousness is an illusion, while realists deny this (Frankish, 2016). Illusionists generally argue that our expectations for consciousness are too high: that the job of describing a conscious experience is too demanding for any physical process to fulfill, and that (rather than rejecting physicalism) we should conclude that there is no such thing as consciousness (or at least, make do with a diminished conception of it) (Dennett, 1993; Graziano et al., 2020; Humphrey, 2020). Daniel Dennett famously lists ineffability as one of the hard-to-fulfill conditions that should lead one to illusionism: the prospect that conscious contents somehow escape our attempts to fully describe them is, for Dennett, a sign that consciousness is chimerical (Dennett, 1993). Notably, illusionists acknowledge that something gives rise to the relevant illusions: there must be an explanation of why it seems plausible to us, on introspection, that we are the subjects of (ineffable) conscious states. Qualia realists, in contrast, see conscious experience as the subjective viewpoint from which all else is observed or known, and therefore consider it to be an explanandum that cannot be discarded (Chalmers, 2010; Descartes, 1986; Tononi and Edelman, 1998).
The overflow debate is between those who hold that consciousness is indeed rich and ineffable, and those who deny it (while still maintaining that consciousness exists). Richness is a relative term, and one contender for a reference object that justifies the characterization of consciousness as rich is the accessible content of working memory. Empirically there appears to be a clear bandwidth limitation on the latter (Cohen et al., 2016; Miller and Buschman, 2015; Sperling, 1960), which is what makes it difficult, for example, to remember all of the names of the people you meet at a party or all of the digits of a phone number. Proponents of overflow say that consciousness is considerably richer than this sort of working memory and includes ineffable content unavailable for report (Block, 2007; Bronfman et al., 2019; Lamme, 2007; Vandenbroucke et al., 2012), while the staunchest opponents of overflow will maintain that consciousness is no richer than the bandwidth-restricted content of working memory, generally because they take consciousness to just _be_ working memory or a supporting system for it (Cohen and Dennett, 2011; Naccache, 2018; Phillips, 2016; Ward, 2018).
We thus have two important debates where those on both sides may benefit from a formal model of ineffability: illusionists and realists who deny overflow may benefit from a general model of why it seems to us that we are the subjects of rich and ineffable experiences, while realists who accept overflow may benefit from a characterization of how it emerges.
The aim of this paper is to propose and justify a formal description of how neural dynamics could give rise to the ordinary sense of richness and ineffability in the brain. Our key contributions are summarized as follows:
* We relate the philosophical notions of richness and ineffability to the computational notion of information. Assuming that brain dynamics are cast
as information processing functions, we contend that the richness of conscious experience can be interpreted as the amount of information in conscious state, and ineffability as the amount of information lost in processing.
* Attractor dynamics are empirically ubiquitous in neural activity across cortical regions and have been proposed as a computational model for working memory (Khona and Fiete, 2022; Rolls, 2010), while prominent models of consciousness argue that conscious experience is a projection of working memory states (Baars, 2005; Dehaene and Naccache, 2001). We connect these theories by contending that significant information loss induced by attractor dynamics offers an account for the significant ineffability of conscious experience.
* By considering information at multiple stages during inter-personal communication, we show how different point-to-point pathways of information loss arise during cognitive processing, going beyond the specific case of ineffability of conscious experience at verbal report.
* Using Kolmogorov information theory (Kolmogorov, 1965) we prove a formal result that connects cognitive dissimilarity between individuals with increasing ineffability of conscious experience. This highlights the difference between cognitive dissimilarity and knowledge inadequacy, shedding light on the philosophical conundrum of what color scientist Mary learns when leaving her black and white room (Jackson, 1986).
* Since information loss is a function of neural states, it can be approximately computed by cognitive processing, providing a mechanistic justification for the report of ineffability, or the contention that consciously inaccessible rich representations exist (Sperling, 1960).
Several existing works argue that attractor dynamics have the right functional characteristics to serve as a computational model for consciousness (Colagrosso and Mozer, 2004; Grossberg, 1999; Mathis and Mozer, 1994, 2019; Mozer, 2009; Rumelhart et al., 1986) but do not examine how information loss arising from such dynamics relates to the rich and ineffable aspects of conscious experience. Instead of defining "access" as triggering correct behavior on a per-experience basis (Colagrosso and Mozer, 2004), we contend that there is a natural correspondence between access and preservation of information, which allows for quantification using mutual information and analysis by applying information theoretic reasoning to the abstract computation graph. We utilize a minimal computational model without relying on implementation details of neural processing functions to maximize the generality of arguments. Casting ineffability as information loss allows us to reason about the ineffability of conscious experience from the computation graph without depending on the exact definition of conscious experience.
The paper is structured as follows. In Section 2, we introduce key concepts on computation and neural dynamics, in particular the role of attractor states
that can be used for computations involving short-term memory and have a dual discrete and high-dimensional nature. We present our dynamical systems model of conscious experience in Section 3, beginning with Section 3.1 which motivates the use of attractor dynamics for modeling conscious processing using prior arguments from the literature that are independent of our own, including evidence for the Global Workspace Theory (Baars, 1993, 2005; Dehaene et al., 1998). Section 3.2 formalizes the notions of richness and ineffability using both Shannon information theory (Shannon, 1948) and Kolmogorov complexity (Kolmogorov, 1965), which play a central role in making our later arguments precise. Core contributions are presented in Sections 3.3 and 3.4, which discuss various sources of ineffability in conscious experience and explain the conditions under which these experiences can be partially communicated to others. We then briefly discuss the implications of our model on the debate surrounding 'phenomenal' vs. 'access' consciousness (Block, 1995), before concluding with a high-level discussion in Section 4.
## 2 Preliminaries: Computation through neural dynamics
In Section 3, we will argue that we can account for the richness and ineffability of experience by modeling conscious states as neural trajectories in a high-dimensional dynamical system with attractors. To do so, we will now provide a brief overview of the essential concepts needed to understand the model.
First, we will introduce the notion of a neural activation space, in which temporally evolving states of neural activity follow trajectories governed by recurrent dynamics in the brain. Next, we will explain how state attractors, which are emergent properties of dynamical systems, can allow neural networks to solve computational problems that require some form of persistent memory. Along the way, we will highlight key examples from the computational neuroscience literature where this dynamical systems framework was used to explain how populations of neurons solve perceptual and cognitive tasks.
### 2.1 Neural activation state space
At any given moment, every neuron in the brain has some level of activity, and this activity can be numerically quantified in several different ways (e.g., firing or not, firing rate over some time window, membrane voltage, etc.), which we illustrate in Fig. 1a. Together, this instantaneous pattern of activity defines the brain's current _state_, which may be compactly represented as a vector in an \(N\)-dimensional state space, where \(N\) is the number of neurons in the brain (or in the subpopulation of interest). In such a representation, each index in this vector identifies a particular neuron, and the value of a particular index corresponds to that neuron's current level of activity (Fig. 1b). We reason at the level of neuronal activity for clarity, but strictly our framework makes no assumptions about the appropriate level of granularity: where these make direct
contributions to cognitive information processing, other cells such as astrocytes or cell components such as dendrites may be state-space parameters in their own right (Godfrey-Smith, 2016).
A benefit of describing neural activity in this manner is that it allows us to draw on the mathematical framework of dynamical systems theory to reason about mental states. For example, we can now talk about what a pattern of neural activity _represents_ by projecting the state onto lower-dimensional subspaces that encode some meaningful feature. To explain this by example, it might be the case that when perceiving an object certain dimensions of the elicited state represent its color, others represent its shape, yet others represent its function, etc. In addition, given a probabilistic transition model for states that accounts for noise in neural activity and other sources of uncertainty, we can measure quantities such as the likelihood and information content of a state. We can also quantify the similarities between states according to some distance metric between their vectors.
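To make this concrete, here is a minimal Python sketch (assuming NumPy; the population size, the random activity patterns, and the "feature subspace" are invented purely for illustration) of representing instantaneous activity as a state vector, reading out a low-dimensional feature projection, and comparing two states with a distance metric.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                           # number of neurons in the population of interest
state_a = rng.normal(size=N)      # joint activity pattern at one timepoint (e.g., firing rates)
state_b = rng.normal(size=N)      # activity pattern at another timepoint or for another stimulus

# A hypothetical low-dimensional "feature subspace": k directions in activity
# space assumed to encode some meaningful property (e.g., perceived color).
k = 3
feature_axes, _ = np.linalg.qr(rng.normal(size=(N, k)))   # orthonormal N x k basis

feature_a = feature_axes.T @ state_a   # what state_a "represents" along those dimensions
feature_b = feature_axes.T @ state_b

distance = np.linalg.norm(state_a - state_b)   # similarity under a Euclidean metric

print("feature reading A:", np.round(feature_a, 2))
print("feature reading B:", np.round(feature_b, 2))
print("distance between states:", round(float(distance), 2))
```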
### 2.2 Neural dynamics
While neural states can be used to represent an instantaneous pattern of activity, the brain is a complex dynamical system and must ultimately be understood in terms of how neural activity unfolds in time. The temporal evolution of neural activity--and any other dynamical system--is governed by two factors.
First, neurons in the brain have a large number of synapses that form recurrent loops. Recurrency means that even in the absence of any sensory input, brain states will evolve dynamically; the activity of one neuron at a particular time will influence the future activity of surrounding neurons, which may in turn influence the original neuron's activity at a later time in a causal loop. The dynamics governing these neural state trajectories are defined by the joint
Figure 1: **Visualization of neural state space.****A.** The activity trace for multiple neurons, where activity can be quantified in several different ways (e.g., firing or not, firing rate over some time window, membrane voltage, etc.). Colored boxes denote joint activity patterns across all neurons at specific timepoints. **B.** At any particular timepoint, the joint activity pattern across \(N\) different neurons can be expressed as a vector in an \(N\)-dimensional state space.
synaptic connectivity profile between all neurons in the brain. Any given connectivity profile results in a set of rules for how each state transitions to the next. This can be visually illustrated for the entire system using a _vector field_ as shown in Fig. 2a: each vector indicates how a state at that location would evolve in the next instant in the absence of noise, and where the size of the vector denotes the speed of the change. Intuitively, one can understand the dynamics of the system by starting off at an initial point in neural state space and tracing a trajectory that follows the vector field at each point in time. A different connectivity profile would yield different transition dynamics (i.e., a different vector field), and therefore the same initial neural state would follow a different trajectory.
Another factor that governs neural dynamics is the input to the system, which may itself evolve over time. The dynamics of a sub-population of neurons (e.g., a particular brain region) are modulated extrinsically by signals from surrounding neurons that synapse onto the population, including information from the stream of sensory signals entering the brain. Illustrated visually in Fig. 2b, this means that inputs warp the vector field that defines transitions from the current state to the next, ultimately resulting in potentially very different trajectories from those that would have occurred given other inputs.
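As a toy illustration of both factors (the connectivity matrix, step size, and input below are arbitrary choices, not a model of any particular circuit), the following sketch Euler-integrates a two-dimensional recurrent system with and without a constant external drive; the same initial state traces out different trajectories because the input warps the flow.

```python
import numpy as np

# Toy recurrent "connectivity": a slowly decaying rotation in a 2D state space.
W = np.array([[-0.1, -1.0],
              [ 1.0, -0.1]])

def simulate(x0, input_fn, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = W @ x + u(t) from initial state x0."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for step in range(steps):
        u = input_fn(step * dt)
        x = x + dt * (W @ x + u)
        trajectory.append(x.copy())
    return np.array(trajectory)

no_input   = simulate([1.0, 0.0], lambda t: np.zeros(2))           # intrinsic dynamics only
with_input = simulate([1.0, 0.0], lambda t: np.array([0.5, 0.0]))  # same start, constant drive

print("final state without input:", np.round(no_input[-1], 3))
print("final state with input:   ", np.round(with_input[-1], 3))
```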
Much of the field of computational neuroscience is concerned with understanding neural population coding through the lens of dynamical systems, thanks to their rich theoretical underpinnings and the mechanistic models they provide (Favela, 2021). Historically, this approach has been particularly fruitful in two systems: sensory integration (Burak, 2014; Zhang, 1996) and motor control (Churchland et al., 2012; Michaels et al., 2016; Shenoy et al., 2013). For example, Churchland et al. (2012) recorded from a population of neurons in the primate motor cortex and found that they exhibited rotational dynamics during a simple reaching task (Fig. 2c). While this was initially surprising because the
Figure 2: **Neural dynamics and trajectories in activation space.****A.** A dynamical system whose behavior is depicted using vector fields and example trajectories. **B.** External inputs can modulate the behavior of a dynamical system (compare vector fields and trajectories with those in Panel A). **C.** An example of neural dynamics empirically observed in the primate motor cortex. As the neural dynamics are high-dimensional, jPCA was used to reduce their dimensionality for visualization. The figure was reproduced with permissions from Churchland et al. (2012).
movement itself was not rhythmic, the authors proposed a theory that muscle activity is constructed from an oscillatory basis, which was later supported by additional experiments. The neural dynamics, then, can be understood as pattern generators that generate sequences of muscle activity optimized for producing natural movements.
Despite the success of this framework in sensory and motor domains, much less is understood about the dynamical underpinnings of higher-level cognition, although such dynamical systems are also implemented with neural substrates and would presumably share similar mechanisms. A contribution of our work is the application of dynamical systems to high-level conscious cognition and analysis of the implications for explaining the richness and ineffability of experience.
### 2.3 State attractors
When neural dynamics are used to solve computational tasks, it is often the case that the solutions require some form of persistent memory, meaning that at least some projections of the neural activity must be self-sustaining. A dynamical system can implement this behavior by forming regions in its state space where states are drawn towards steady states (Fig. 3a). These regions are called "basins of attraction" because any state trajectory that enters them would progress towards the steady state in the absence of noise or changes in external inputs and dynamics. By steady states, we mean regions within the basins that deterministic trajectories eventually converge to. More generally, these sets of states are called "attractors" because neural activity trajectories that have reached the basin progress towards attractor states and remain there--approximately, in the presence of intrinsic noise in neural activity or changes in external input--until sufficient noise or external input activity nudges the state to escape the attractor basin. In general, dynamical systems can produce attractors that have complex and high-dimensional structure within the basin (e.g. manifold, fractal structure) and can exhibit their own internal dynamics, as is the case of chaotic attractors (also called "strange"). Other common attractors contain fewer points, such as stable periodic orbits, or stable fixed points--single state points that do not change in time. In this section we will focus on fixed point attractors for simplicity, but arguments in subsequent sections apply to the general case of attractor subspaces. The important aspect of attractors for our purposes is that they are distinct and have non-overlapping basins of attraction.
Since trajectories that have converged to attractors have a tendency to remain there in the absence of strong external inputs, attractors can endow a dynamical system with a form of self-sustaining memory over short timescales that is useful for performing many computations essential to real-world tasks. Attractor dynamics can also be used for efficient long-term memory, without the brain having to directly store the high-dimensional vectors of the attractors in state space. As we will explain in Section 2.3.1, attractors are mutually exclusive and thus have a discrete structure; they can be identified with symbols
(e.g., words) that label _which_ attractor the system is in without describing the attractor's location in state space. The system could thus store a concise symbol in long-term memory rather than a high-dimensional vector. Afterwards, the memory could be retrieved by using the symbol as an input 'key' that drives the state to any location in the basin of the attractor, at which point the dynamics of the system will cause the trajectory to converge to the attractor. For example, to memorize an image of a face (represented by a high-dimensional vector) and associate it with a discrete entity like the name of a person, a learning process could update the parameters of the dynamical system, so that the image vector is an attractor state and the system enters its basin of attraction when the name (or rather a neural code for it) is provided as an input.
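A classical concrete instance of this pattern-completion idea is the Hopfield network (Hopfield, 1982). The sketch below is a minimal binary Hopfield model, with random patterns standing in for memorized items; the Hebbian weight rule makes each stored pattern an approximate fixed point, and running the dynamics from a corrupted cue recovers the full pattern. All sizes and the corruption level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_patterns = 200, 3

# Patterns to memorize (entries in {-1, +1}), standing in for e.g. images.
patterns = rng.choice([-1, 1], size=(n_patterns, N))

# Hebbian "learning": recurrent weights that make each pattern an approximate fixed point.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Run the attractor dynamics from a cue state until approximate convergence."""
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# Corrupt 20% of one stored pattern and let the dynamics complete it.
cue = patterns[0].copy()
flipped = rng.choice(N, size=N // 5, replace=False)
cue[flipped] *= -1

retrieved = recall(cue)
print("overlap with stored pattern:", float(retrieved @ patterns[0]) / N)   # close to 1.0
```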
It is important to emphasize that the existence of these attractors and the particular properties they have (e.g., cardinality, location, shape) are purely functions of the internal dynamics of the system. Neural networks are therefore particularly well-suited for implementing diverse computations through dynamical systems since they are composed of simple units whose connectivity can be flexibly tuned to achieve many possible complex attractor configurations, with the capacity for universal function approximation in the limit of large networks (Schafer and Zimmermann, 2007).
A dynamical system can be modulated by external inputs, therefore the nature of its attractors can also be driven by contextual signals. In the human brain, for example, this context could include both external sensory input and the content of short- and long-term memory. In particular, the previous content of working memory (which is a part of short term memory) might have a strong influence, so that our thoughts form coherent sequences and so that we can alternate between mutually exclusive interpretations of the world that are compatible with the context (e.g., flipping between different interpretations of the Necker cube--an ambiguous 2D line drawing of a cube that can be in one of two possible 3D orientations).
As was summarized in review articles by Rolls (2010) and Khona and Fiete (2022), the framework of attractor dynamics has been used to mechanistically explain the neural computations underlying decision-making (Wang, 2002, 2008; Wong and Wang, 2006), long-term memory (Chaudhuri and Fiete, 2019; Hopfield, 1982; Ramsauer et al., 2020), working memory (Barak and Tsodyks, 2014; Curtis and D'Esposito, 2003; Deco and Rolls, 2003; Durstewitz et al., 2000; Seeholzer et al., 2019), and the performance of simple cognitive tasks (Driscoll et al., 2022). Attractors have also been observed empirically across several experiments investigating decision-making (Kurt et al., 2008; Lin et al., 2014; Stevens, 2015) and working memory (Constantinidis et al., 2001; Curtis and D'Esposito, 2003; Gnadt and Andersen, 1988).
#### 2.3.1 Attractors are mutually exclusive: contractive dynamics discretize the state
An important property of attractors is that they are mutually exclusive: each attractor \(\mathbf{a}\) is associated with a basin of attraction \(B(\mathbf{a})\), which is the region
in state-space such that any state \(\mathbf{x}\) in \(B(\mathbf{a})\) will necessarily converge through the dynamics into \(\mathbf{a}\), in the absence of noise or external perturbations. This division into mutually exclusive basins of attraction thus creates a partition of the state space: one can associate to any state \(\mathbf{x}\) the attractor \(\mathbf{a}\) corresponding to the basin of attraction \(B(\mathbf{a})\) in which \(\mathbf{x}\) falls.
As a consequence of this mutual exclusivity, any attractor \(\mathbf{a}\) has a dual discrete and continuous nature (Jaeger, 1999): the symbol or composition of symbols \(i(\mathbf{a})\) that identify \(\mathbf{a}\) among all the other possible attractors in the current dynamics is discrete, while a fixed point \(\mathbf{a}\) is associated with a real-valued vector (also called embedding (Bengio et al., 2000; Morin and Bengio, 2005; Roweis and Saul, 2000) in the deep learning literature) corresponding to the state of the system at that fixed point. If the dynamics are not attractive over all dimensions, the same statement can be made for the subspace that is attractive, which means that this discretization effect need not cover every possible dimension and non-discretized dimensions may represent values in a continuous space.
Note that introducing randomness in the dynamics makes it possible to sample one of the attractors that may be reachable from the current state when that noise is taken into consideration. For example, if the state \(x\) is close to the boundary between basins of attraction of attractors \(A\) and \(B\), a small amount of additive noise would suffice to stochastically sample one destination or the other, with probabilities that would vary depending on how far \(\mathbf{x}\) is from the boundary and the specific dynamics in its area (for instance, basin depth or slope).
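The following toy simulation illustrates this stochastic selection (a one-dimensional double-well system with invented parameters, not a neural model): trajectories started near the boundary between two basins end up at either attractor with probabilities that shift as the starting point moves away from the boundary.

```python
import numpy as np

rng = np.random.default_rng(2)

def end_attractor(x0, noise=0.3, dt=0.01, steps=1000):
    """Noisy 1D dynamics dx/dt = x - x**3, with stable fixed points at -1 and +1."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3) + np.sqrt(dt) * noise * rng.normal()
    return 1 if x > 0 else -1

for x0 in [0.0, 0.1, 0.3]:
    outcomes = np.array([end_attractor(x0) for _ in range(200)])
    print(f"start {x0:+.1f}: estimated P(converge to +1) = {np.mean(outcomes == 1):.2f}")
```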
#### 2.3.2 Emergent attractors in task-optimized networks
To demonstrate how attractors naturally emerge as solutions to cognitive tasks, we briefly summarize relevant results from Sussillo and Barak (2013), where an artificial recurrent neural network (RNN) was trained to solve a simple memory task. An RNN is a network of artificial neurons which can be connected through recurrent feedback loops. Neurons can also form connections to special input and output units, which allow the network to interface with a task. The connection strength between each directed pair of neurons is parameterized using a scalar weight that modulates the degree to which activity in the first neuron drives future activity in the second, and these weights are optimized in order to minimize error on the task. Like the brain, RNNs have recurrent connections between neurons that define a dynamical system optimized to perform some computation, and are therefore useful models for studying emergent neural dynamics.
Sussillo and Barak (2013) train an RNN on the 3-bit flip-flop task (Fig. 3b), in which the network must learn to continuously output the sign (\(+1\) or \(-1\)) of the last binary spike across 3 input channels (which we can call the "red", "green", and "blue" channels). For instance, following the input sequence [red=\(+1\), green=-1, blue=\(+1\)], the correct output should be the vector {red=\(+1\), green=-1, blue=\(+1\)}. If the next input spike was red=-1, the new output would change to {red=-1, green=-1, blue=\(+1\)}. Importantly, while each input spike
only has a short duration, the network must continuously output the value of each channel's last spike, which imposes a memory demand.
When Sussillo and Barak (2013) inspected the learned dynamics of the RNN, they found that it solved the task through the use of fixed point attractors. Since the number of possible outputs is \(2^{3}=8\), the model represented each of these using an attractor. Due to their stability, the model was then able to continuously read out from whichever attractor the trajectory had most recently converged to. Whenever a new spike appeared in one of the input channels (with a value different from that channel's previous spike), the state escaped the current basin of attraction and followed transient dynamics towards the attractor for the new output. This simple task demonstrated how attractor dynamics can naturally emerge in neural networks and implement nontrivial computations, such as those involving transitions between discrete memory states.
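As a sketch of the task structure only (our own minimal rendering, not Sussillo and Barak's code; all parameters are illustrative), the function below generates input and target sequences for an n-bit flip-flop trial: brief ±1 pulses arrive on each channel, and the target at every timestep is the sign of the most recent pulse on that channel.

```python
import numpy as np

def make_flipflop_trial(T=200, n_channels=3, p_pulse=0.02, seed=0):
    """Generate inputs and targets for one trial of the n-bit flip-flop task."""
    rng = np.random.default_rng(seed)
    inputs = np.zeros((T, n_channels))
    targets = np.zeros((T, n_channels))
    memory = np.ones(n_channels)                 # arbitrary initial memory state
    for t in range(T):
        pulses = rng.random(n_channels) < p_pulse
        signs = rng.choice([-1.0, 1.0], size=n_channels)
        inputs[t, pulses] = signs[pulses]        # brief +/-1 input spikes
        memory[pulses] = signs[pulses]           # memory flips to the sign of the new spike
        targets[t] = memory                      # output tracks the last spike on each channel
    return inputs, targets

x, y = make_flipflop_trial()
print("timesteps with an input pulse:", np.nonzero(x.any(axis=1))[0])
print("distinct target states in this trial:", len(set(map(tuple, y))))   # at most 2**3 = 8
```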
## 3 An information theoretic dynamical systems perspective on conscious experience
The main contribution of our paper will be to argue that a dynamical systems model of consciousness with state attractors can account for the communicable, rich, and ineffable aspects of experience that we discussed in Section 1. Throughout this section, we will use an information theoretic perspective to
Figure 3: **Attractor dynamics in neural networks.****A.** Attractors in a 2D state space. When a trajectory enters an attractor's basin, it begins to converge to the attractor and remains there until sufficient external input or intrinsic noise allows it to escape. **B.** Sussillo and Barak (2013) train an artificial recurrent neural network to solve the 3-bit flip-flop memory task. In this task, the model must continuously output the sign of the most recent binary spike on 3 separate input channels. **C.** Fixed point attractors emerge in the learned dynamics of a recurrent neural network (RNN) as a solution to the task. Each of the attractors corresponds to one of the 8 (\(2^{3}\)) possible bit configurations, providing a stable memory state from which the output can be continuously read out. The plot shows a trajectory in the RNN's state-space for changing inputs, where points along the trajectory are colored according to the correct output. The dimensionality of the state space was reduced using Principal Component Analysis (PCA) for visualization.
characterize richness as information, ineffability as information loss, communicability as information retention, and we will deploy both the notions of Shannon information and Kolmogorov complexity (Kolmogorov, 1965). We will illustrate how information loss arises from dimensionality reduction implemented by attractor dynamics. We show how our model links the problem of accounting for ineffability to the Global Workspace Theory, which predicts that only representations with sufficient amplification and temporal duration (i.e., attractor states) can be broadcast to the rest of the brain for downstream verbal-behavioral reporting. Finally, we will generalize the notion of ineffability by discussing multiple forms of information loss in intra-personal and inter-personal communication pathways, going beyond the specific case of information loss between working memory and verbal report.
To contextualize our argument, we begin by drawing on existing work to highlight several connections between state attractor models and conscious experience.
### 3.1 Motivating attractor dynamics as a model for conscious experience
#### 3.1.1 Working memory
The contents of working memory are typically considered to be the attended contents of short term memory: a function of short term representations held in the brain and context from task information or other executive functioning objectives (Cowan, 2008; Engle, 2002). A central claim in many leading theories of consciousness is that what we are consciously aware of is the contents of working memory. For example, the Global Workspace Theory (Baars, 1993, 2005) and its neuronal extension (Dehaene et al., 1998) state that information becomes conscious by gaining entry into a limited workspace that serves as a bottleneck for the distributed activity present across the brain. Pairs of brain regions are largely isolated from each other and arbitrary point-to-point communication is only possible via the workspace, which itself can both receive and broadcast information globally. The workspace, then, serves as a hub capable of coordinating brain-wide activity for centralized control and decision-making. It is easy to see the connection between the concepts of a global workspace and working memory (attentional selectivity, influence on executive decision-making, availability to verbal and behavioral reporting processes, limited capacity, arbitrary modalities) and there is little distinction between them in the Global Workspace Theory (Dehaene and Naccache, 2001). Similarly, the notion of "access consciousness" introduced in Block (1995) can be framed through the lens of a working memory whose contents are globally accessible across the brain.
The link between working memory and attractor dynamics, in turn, is well established. Empirical studies have demonstrated that attractor dynamics are ubiquitous in the brain, both across species and levels in the brain's hierarchy (Khona and Fiete, 2022; Rolls, 2010). The attractor model for working memory postulates that working memory emerges from recurrently connected
cortical neural networks that allow representations to be maintained in the short term (on the order of seconds) by self-generated positive feedback (Barak and Tsodyks, 2014; Curtis and D'Esposito, 2003; Deco and Rolls, 2003; Durstewitz et al., 2000; Seeholzer et al., 2019). Attractor dynamics can support both _suppression_ of inputs, for example in decision making where the brain state flows rapidly towards a discrete attractor and subsequent inputs or perturbations are discounted, as well as _integration_ over inputs, where the incremental response to inputs causes reversible flow along continuous attractor manifolds (Khona and Fiete, 2022; Redish et al., 1996; Wang, 2008). Neural winner-take-all (WTA) models implement hybrid analog-discrete computation (Wang, 2008; Wong and Wang, 2006). Robustness, discreteness, and temporal integration of information are all traits apparent in working memory (Khona and Fiete, 2022).
#### 3.1.2 Stability and robustness of conscious states
As a model of conscious processing, discrete attractor dynamics predict that our experience consists of a sequence of relatively stable states that transition swiftly from one to another. Such types of sequential dynamics have been hypothesized to be a key component of conscious thought and perception (James, 1892; Rabinovich et al., 2008; Tsuda, 2015; Varela, 1999). Empirically, one of the characteristics that distinguishes conscious vs. unconscious neural representations in psychophysics tasks is that they are significantly more stable in the "aware" condition (Schurger et al., 2015).
Qualitatively, subjects commonly report on the emergence of stable discrete "choices" within conscious perception. For instance, when looking at the Necker cube, subjects only perceive one single interpretation of its structure and orientation rather than a mixture of both possibilities. Occasionally, this interpretation will change to the alternative one, but the change will happen rapidly as an abrupt transition. Similarly, in the case of binocular rivalry, only a single image presented to one of the eyes will be consciously perceived rather than a mixture of the two, and which image is consciously perceived will abruptly change at random times. Such cases are characterizable by attractor dynamics that converge to one attractor and remain stable until sufficient input change or noise result in a rapid transition to another attractor.
Input change or noise may also result in basin transitions that occur without complete convergence to attractors. This is familiar in the cases of thought and speech. One common example is thought-disruptive external stimuli, in which external stimuli distract or interrupt one's chain of thought. A less well-known but equally important example is the role of internal time-saving mechanisms. These are active in cases where one does not need to spell something out in full detail. For example, in speech production, phonemes are often not fully articulated: this may be understood by noting that once one has arrived at an attractor basin it is disambiguated which point one converges toward (Roessig et al., 2019). A similar mechanism may explain the utility of verbal or symbolic thought, where the key may serve as synecdoche for the value.
Schurger et al. (2010) suggested that conscious states were associated with
increased robustness to noise in psychophysics experiments. A signature of neural representations in the "conscious" condition was that they were highly reproducible; given the same stimulus presentation across different trials, patterns of neural activity were similar, so long as the subjects reported awareness of the stimulus. In contrast, patterns of activity during the "nonconscious" condition in which subjects were unaware of the stimulus exhibited greater variability. Both robustness to noise and reproducibility of states, in turn, are core properties accommodated by attractor dynamics.
### 3.2 Richness and ineffability
**Box 1. Notation**
Let lower case \(x\) denote an instance of random variable \(X\), \(\mathcal{X}\) denote the set of possible states for \(X\) with probability distribution \(P(X)\), \(\sum_{x\in\mathcal{X}}P(X=x)=1\), \(p(x)\) denote \(P(X=x)\), expectation \(\mathbb{E}_{p(x)}[f(x)]\) denote \(\sum_{x\in\mathcal{X}}p(x)f(x)\), and likewise for other variables. We restrict function domains to discrete variables including floating point representation of reals. \([n]\) denotes the list of natural numbers \(1,\ldots,n\).
What is meant by the richness of experience? Intuitively, whilst we find it easy to communicate certain aspects of our mental state, we struggle to convey their full content or meaning. One can consider color as an example. We are tempted to think of color space as a simple 3-dimensional surface, on the basis of perceptual similarity judgments that people tend to make. However, there is a far richer and higher dimensional structure to experiencing color. For instance, most people would describe the color "red" as warm and aggressive. There are myriad associations that we make with various colors that are not functions of their nominal definitions, and all of these associations as a whole contribute to the richness of the experience (Chalmers, 2010).
Broadly, richness means having a lot; the condition of being "well supplied or endowed" (Merriam Webster Dictionary, 2023). In the context of mental state attribution, richness gauges the amount of specificity--detail, texture, nuance or informational content--contained by a mental state. It is a common principle in aesthetics that experience is rich (a picture speaks a thousand words), and many philosophers acknowledge that conscious states at least appear to be highly detailed, nuanced, and contentful (Block, 1995; Chuard, 2007; Tye, 2006), though some take this appearance to be ultimately illusory (Cohen et al., 2016; Dennett, 1993).
This conception of richness corresponds well to the mathematical notion formalized by Shannon (Shannon, 1948), where richness of a random variable \(X\) is given by its entropy \(H(X)\). Here, a random variable represents a state type, e.g., experience of some face or other. To say that such a variable is high in entropy is to say that the number of values it could take (the number of possible states the system could be in, e.g., the different experiences of faces one could possibly have) is relatively large and the probability distribution over these is
relatively flat, and thus the state is unpredictable. Specifically, Shannon entropy \(H(X)\) quantifies the average number of bits (answers to yes-or-no questions) required to specify which state \(X\) takes as a measure of informational content.
The notion of ineffability is closely related. In popular usage, ineffable can be defined as "too great for words" (Oxford English Dictionary, 2023). The concept is often used in theological contexts, but it has been applied to descriptions of qualitative experience since at least (Dennett, 1993). Given the term's theological associations, the claim that experience is ineffable might sound like a profession of dualism: consciousness is something magic that no physicalist theory can account for. However, strictly speaking, to claim that experience is ineffable is simply to claim that its informational content exceeds what we can remember or report. Much hinges on what exactly we mean by "can remember or report". Of course, one can say a thousand words, so the fact that a picture speaks that many words does not necessarily make a picture ineffable. Below, we will develop tools to allow us to precisely refine the senses of ineffability at issue, and we will see that experience is ineffable in multiple senses (though none of them need involve magic or anything anathema to physicalist theories).
We propose that ineffability corresponds to the mathematical notion of information loss when trying to express a conscious state in words. Given a function that processes an input variable \(X\) and produces an output variable \(Y\), information loss of the input incurred by the output is measurable by conditional entropy \(H(X|Y)\), or entropy of the input variable given the output variable. Intuitively, conditional entropy \(H(X|Y)\) measures how well \(Y\) describes \(X\): how much uncertainty remains about the value of \(X\), once the value of \(Y\) is given. Conditional entropy \(H(X|Y)\) is mathematically equivalent to the entropy of the input \(X\) minus the mutual information between input and output, \(H(X|Y)=H(X)-I(X;Y)\), where the latter is a measure of information shared between them; the amount of information about the state of one variable obtained by observing the state of the other. Note the difference between conditional entropy and mutual information: mutual information is how much uncertainty one random variable removes from another, while conditional entropy describes how much uncertainty remains in the first variable after the value of the second is given. Later we will also make use of joint entropy, \(H(X,Y)\), which is the amount of information needed on average to specify the states of both \(X\) and \(Y\).
Usefully, quantifying ineffability in this manner allows us to offer a precise definition of effability as the negation of ineffability. Where ineffability is given by \(H(X|Y)\), negating ineffability gives effability: \(-H(X|Y)=I(X;Y)-H(X)\). Recalling that entropy is a measure of uncertainty or spread in a probability distribution, the smaller \(H(X|Y)\) is, the less uncertain \(X\) is given \(Y\), the less information is lost, and the more effable or communicable \(X\) is via \(Y\). We may say that given a variable \(X\) with entropy \(H(X)\), its effability to variable \(Y\) scales with the amount of shared information \(I(X;Y)\). Finally, since entropy \(H(X)\) is recoverable as \(H(X)=H(X|C)\) for any constant variable \(C\), richness may be considered the special case of ineffability where the output state is a constant.
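These quantities are straightforward to compute for a small discrete example. The sketch below uses an invented joint distribution over four "experience" states and two "report" states to compute richness H(X), shared information I(X;Y), and ineffability H(X|Y) = H(X) - I(X;Y).

```python
import numpy as np

# Invented joint distribution over 4 "experience" states X and 2 "report" states Y.
p_xy = np.array([[0.30, 0.05],
                 [0.05, 0.20],
                 [0.15, 0.05],
                 [0.05, 0.15]])

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p_x = p_xy.sum(axis=1)             # marginal over X
p_y = p_xy.sum(axis=0)             # marginal over Y

H_X = entropy(p_x)                 # richness of X
H_Y = entropy(p_y)
H_joint = entropy(p_xy.ravel())    # joint entropy H(X, Y)
I_XY = H_X + H_Y - H_joint         # mutual information: the effability term
H_X_given_Y = H_X - I_XY           # ineffability: information about X lost in Y

print(f"H(X)   = {H_X:.3f} bits")
print(f"I(X;Y) = {I_XY:.3f} bits")
print(f"H(X|Y) = {H_X_given_Y:.3f} bits")
```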
In the foregoing we draw on the framework of Shannon information, but
there are advantages, for our purposes, to using Kolmogorov information (Kolmogorov, 1965) as an alternative way to characterize richness and ineffability. In the Kolmogorov formalism, richness of a state \(x\) corresponds to its complexity \(K(x)\), which is the length in bits of the shortest program written in a general programming language that outputs \(x\) and halts. Ineffability then corresponds to conditional Kolmogorov complexity of an input \(x\) given an output \(y\), \(K(x|y)\), the length of the shortest program needed to produce \(x\) if \(y\) is given, or intuitively the complexity of \(x\) minus the number of bits that can be saved from knowing \(y\), which is the Kolmogorov analog of Shannon information loss as conditional entropy or entropy minus mutual information. Note that since Kolmogorov complexity is defined on strings of bits, we restrict the domain of our functions to discrete variables and assume that floating point representation is used to encode real values (Box 1), which is natural for variables modelled or stored with computer memory. Construction of the computational model thus incurs a separate form of information loss resulting from discretization of real-valued continuous-time observations of neural state.
Shannon entropy and Kolmogorov complexity are closely related metrics of richness, and are described in more detail in Box 2 and Fig. 4. If the probability distribution over states is given, taking an expectation over the distribution on Kolmogorov complexity of its states allows Shannon entropy to be approximately recovered (Grunwald and Vitanyi, 2004). Under either framework, richness is characterizable as information measured in bits, ineffability as information loss or richness reduction, and communicability and ineffability are neither separate nor boolean traits, but direct opposites of each other and varying on a scale.
A major difference between Shannon entropy and Kolmogorov complexity is that the former is defined given a probability distribution over variable states, whereas the latter is defined on individual states without assuming a given probability distribution. Shannon entropy is defined for a random variable given its probability distribution, measuring the average information carried by states, while Kolmogorov complexity is a measure of the information carried by an individual state \(x\), without dependency on the probability distribution of a random variable ranging over it. Knowledge of the distribution over a variable's states is generally a non-trivial assumption. The distribution may be undefined or highly privileged information in itself (that is, the meta-distribution over the distribution's parameters is rich). Consider, for example, measuring the amount of information in a book by considering the set of all possible books and the distribution over them (Grunwald and Vitanyi, 2003) or the information in a temporal snapshot of a high-dimensional brain state by considering the distribution over all possible states. In these cases, we want a way to measure informational content that does not require knowledge of a hard-to-specify distribution. This is especially salient for us where inter-personal ineffability is concerned. Even if we assume that a brain's parameters fully determine the distribution over its own states (and so in some sense individuals have direct access to their own distributions), still individuals cannot have this level of knowledge of the distributions of their interlocutors' brains. Explicitly allowing the
communicator's distributional parameters to be unknown is therefore convenient for characterizing inter-personal ineffability from the perspective of the listener.
A second drawback of Shannon's framework is that entropy is a measure of statistical determinability of states as opposed to difference in absolute states; information is fully determined by the probability distribution on states and unrelated to the meaning, structure or content of individual states (Grunwald and Vitanyi, 2003). For example, consider again a case where we want to measure inter-personal ineffability, as a relationship between a communicator's experience and a listener's. Conditional entropy of the communicator's experience given the listener's experience is low if the pairing is statistically unique, regardless of the semantic correspondence between experiences, whereas conditional Kolmogorov complexity is concerned with the difficulty of reconstructing the communicator's experience given the listener's experience, i.e. absolute difference, which corresponds more closely to the lay definition of ineffability. For example, it might just happen to turn out that whenever Alice thinks and talks about tennis, Bob almost always thinks about Beethoven. In this case, conditional entropy will be low, but conditional Kolmogorov complexity will be high, and therefore suited to capture the absolute difference between their experiences. For these reasons, we argue that particularly in the case of inter-personal communication, Kolmogorov complexity should be used to characterize richness and ineffability of experiences. However, Shannon entropy is functionally equivalent if the distribution is given, and we will refer to both frameworks.
Figure 4: Illustrating Shannon entropy and Kolmogorov complexity for discrete color distributions. Entropy (Eq. (1)) involves an expectation over the states of stochastic variable \(X\) whereas Kolmogorov complexity (Eq. (4)) is defined for an instance of state, \(x\). The distribution on the left has non-zero mass in one state and is the minimum entropy distribution; the distribution on the right is uniform over states and is the maximum entropy distribution for 8 states. Assume a universal RGB representation for colors where each RGB component ranges between 1 and 256. Without assumptions on the distribution over colors, the Kolmogorov complexity of each state is no greater than 24 (excluding program overheads) since color can be represented with 3 8-bit binary sequences, but may be lowered for smaller RGB values that do not require 8 bits if an optimized number encoding scheme is used (Grunwald and Vitanyi, 2003). Whereas entropy is the same for the same probability distribution over _any_ states, Kolmogorov complexity would increase for states whose values are algorithmically more difficult to construct.
**Box 2. Metrics for richness and ineffability**
Shannon entropy is given by
\[H(X)=\mathop{\mathbb{E}}_{p(x)}[-\log p(x)]. \tag{1}\]
If variable \(Y\) is produced by processing \(X\), \(y=f(x)\), with joint distribution denoted by \(p(x,y)\) and \(f\) stochastic in the general case, then information loss from \(X\) to \(Y\) is given by conditional entropy \(H(X|Y)=H(X)-I(X;Y)\), where \(I(X;Y)\) denotes Shannon mutual information between variables,
\[I(X;Y)=\mathop{\mathbb{E}}_{p(x,y)}\bigg{[}\log\frac{p(x,y)}{p(x)p(y)}\bigg{]}, \tag{2}\]
and \(H(X|Y)\) is given by
\[H(X|Y)=\mathop{\mathbb{E}}_{p(x,y)}[-\log p(x|y)]. \tag{3}\]
The Kolmogorov complexity of a state \(x\), \(K(x)\), is the length \(l(z)\) in bits of the shortest binary program \(z\) that prints \(x\) and halts. Specifically, let \(U\) be a reference prefix universal machine. The prefix Kolmogorov complexity of \(x\) is
\[K(x)=\min_{z}\{l(z):U(z)=x,z\in\{0,1\}^{*}\} \tag{4}\]
Conditional Kolmogorov complexity is the length of the shortest program that takes \(y\) as an input, prints \(x\) and halts. It is given by
\[K(x)-I(x:y)=K(x|y^{*})\stackrel{{\pm}}{{=}}K(x|y,K(y))\stackrel{{ \log}}{{=}}K(x|y) \tag{5}\]
where \(I(x:y)\) denotes Kolmogorov mutual information between states, \(y^{*}\) denotes the shortest program that produces \(y\) and halts, standard notation \(\stackrel{{\pm}}{{=}}\) and \(\stackrel{{\log}}{{=}}\) are used to denote equality up to constant and logarithmic factors respectively (Grunwald and Vitanyi, 2004; Li et al., 2008). As Eq. (5) shows, \(K(x|y^{*})\) and \(K(x|y)\) are comparable and either may be used to characterize information loss; in subsequent sections we will generally refer to \(K(x|y)\).
Kolmogorov complexity has several intuitive properties and similarities with Shannon entropy. Conditioning on more data cannot increase the information of a state, \(K(x|y)\leq K(x)\), as \(y\) is utilized if it allows for a shorter program and otherwise ignored. If \(y\) merely copies \(x\) there is no information loss, \(K(x|y)\stackrel{{\pm}}{{=}}0\). Under the Shannon framework, \(H(X|Y)=0\) if \(X=Y\) but also more generally if each state \(y\) corresponds to a unique \(x\). Figure 4 illustrates Shannon entropy and Kolmogorov complexity for a toy example. Shannon entropy is concerned with statistical determinability of a random variable given knowledge of its probability distribution, whereas Kolmogorov complexity can be considered as
a more tabula rasa (not knowing the distribution) measure of richness of a particular value of this variable. Shannon entropy and Kolmogorov complexity are related by the following constraints (Grunwald and Vitanyi, 2004):
\[0\leq(\mathbb{E}_{p(x)}[K(x)])-H(X)\stackrel{{+}}{{\leq}}K(p), \tag{6}\] \[I(X;Y)\stackrel{{+}}{{=}}\mathbb{E}_{p(x,y)}[I(x:y|p)], \tag{7}\] \[I(X;Y)-K(p)\stackrel{{+}}{{<}}\mathbb{E}_{p(x,y)}[I(x:y)]\stackrel{{+}}{{<}}I(X;Y)+2K(p), \tag{8}\]
which conveys how Kolmogorov complexity pays a penalty for not assuming knowledge of the distribution, since it must be encoded within the program.
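Although Kolmogorov complexity is uncomputable, compressed length provides a crude, computable upper-bound proxy that can make the conditional form intuitive. The sketch below (our own illustration; the strings are arbitrary and zlib's compressed length only loosely tracks K) approximates K(x) by the compressed length of x, and K(x|y) by the additional length needed to encode x once y is available.

```python
import zlib

def C(s: bytes) -> int:
    """Compressed length in bytes: a rough, computable stand-in for K(s)."""
    return len(zlib.compress(s, 9))

x = b"the quick brown fox jumps over the lazy dog " * 20
y = b"the quick brown fox jumps over the lazy dog " * 19   # shares almost all of x's content
z = bytes(range(256)) * 4                                   # unrelated content

# C(x | y) is approximated by C(y + x) - C(y): the extra cost of describing x once y is known.
print("C(x)             =", C(x))
print("C(x | y) approx. =", C(y + x) - C(y))   # small: y makes x cheap to describe
print("C(x | z) approx. =", C(z + x) - C(z))   # larger: z does not help describe x
```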
### 3.3 Intra-personal ineffability
In this section we will develop our model of intra-personal ineffability, that is, ineffability between stages of processing within a single experiencer. We will be concerned with the following variables: Let \(X\) (with value \(\mathbf{x}\)) be a trajectory of neural activities that determine working memory content and conscious experience, and let it consist of a sequence of transient states \(X_{t}\) for \(t\in[T]\), where length \(T\) is fixed and sufficiently large such that all trajectories terminate near an attractor state. Let \(A\) (with value \(\mathbf{a}\)) denote the terminating attractor, \(S\) (with value \(\mathbf{s}\)) denote the conscious experience, \(D\) (with value \(\mathbf{d}\)) denote external input datum, let \(V\) (with value \(\mathbf{v}\)) denote a list of \(N\) subprocess states \(V_{n}\) for \(n\in[N]\) and fixed \(N\) that comprise computation affecting working memory trajectory \(X\), and let \(M\) (with value \(\mathbf{m}\)) denote the verbal report or output message of the individual. In addition, let \(\phi\) denote the brain's synaptic weights
Figure 5: **A model of intra-personal ineffability.** Information is channelled through the stages of input (\(\mathbf{d}\)), subprocesses state (\(\mathbf{v}\)), working memory (\(\mathbf{x}\), \(\mathbf{a}\)), conscious experience (\(\mathbf{s}\)) and verbal report (\(\mathbf{m}\)). A trajectory \(\mathbf{x}\) in the state-space of working memory follows attractor dynamics, converging near an attractor \(\mathbf{a}\). Each step transforming one variable to another is executed by the dynamics of the individual’s brain, which is determined by parameters \(\phi\). Conscious experience \(\mathbf{s}\) is a function of the subject’s cognitive parameters \(\phi\) and working memory trajectories \(\mathbf{x}\), and encodes the experience’s meaning.
that parametrize its dynamics. These variables are connected by a computation graph of functions (Fig. 5), given by \(\mathbf{v}=f_{\phi}^{V}(\mathbf{d})\), \(\mathbf{x}=f_{\phi}^{X}(\mathbf{v})\), \(\mathbf{a}=f_{\phi}^{A}(\mathbf{x})\), \(\mathbf{s}=f_{\phi}^{S}(\mathbf{x})\) and \(\mathbf{m}=f_{\phi}^{M}(\mathbf{a})\). The functions \(f_{\phi}^{A}\) (returns final attractor state) and \(f_{\phi}^{S}\) (outputs conscious experience that is fully determined by \(\mathbf{x}\)) are deterministic while \(f_{\phi}^{M}\), \(f_{\phi}^{V}\) and \(f_{\phi}^{X}\) are generally stochastic, meaning outputs may be dependent on hidden stochastic variables within the function that encode historical states or neural processing noise. Not speaking is encoded by a state of \(V\) corresponding to "no verbal report". Subscripting with \(\phi\) denotes that function behavior is determined by cognitive parameters \(\phi\). The computation graph defines a joint probability \(p_{\phi}(\mathbf{d},\mathbf{x},\mathbf{s},\mathbf{a},\mathbf{m})\), from which conditional and marginal probability distributions on individual variables may be obtained. Entropy \(H_{\phi}\) is also parameterized since it depends on \(p_{\phi}\). Finally, denote the transient state by \(\bar{X}\), where \(p_{\phi}(\bar{\mathbf{x}})=\frac{1}{T}\sum_{t\in[T]}P_{\phi}(X_{t}=\bar{ \mathbf{x}})\) is the probability that any transient state takes the value \(\bar{\mathbf{x}}\).
Our dynamical systems model of working memory distinguishes between two kinds of working memory state, attractor states and transient states, where the latter includes all time-varying states occupied by the system and the former corresponds to system output, or the accessible contents of working memory (Khona and Fiete, 2022). Our model remains neutral about whether conscious states correspond to working memory trajectories, transient states or attractor states but allows conscious state to be more generally a deterministic function of these states, thus conveying part of their information. Specifically, since \(\mathbf{s}=f_{\phi}^{S}(\mathbf{x})\), conscious experience is not restricted to be identical to transient working memory states or attractor states, but is the output of a deterministic function of the trajectory through working memory states, where the function depends on cognitive parameters \(\phi\). While we will not focus on the implementation details of how conscious experiences might relate to neural processes, intuitively \(\mathbf{s}\) can be thought of as a vector of real numbers representing one point in an abstract space of possible experiences. Subsequently, information theory gives us the ability to reason about the relative richness and ineffability of conscious experience based on the computation graph, without needing implementation details of the functions.
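To make the shape of this computation graph concrete, the following minimal sketch (our own illustration, not part of the model's specification) wires up stand-in versions of \(f_{\phi}^{V}\), \(f_{\phi}^{X}\), \(f_{\phi}^{A}\), \(f_{\phi}^{S}\) and \(f_{\phi}^{M}\). The dimensions, the random linear maps playing the role of \(\phi\), and the toy decay dynamics are all illustrative assumptions rather than claims about real neural processing; the only fidelity intended is the dependency structure \(\mathbf{d}\rightarrow\mathbf{v}\rightarrow\mathbf{x}\rightarrow(\mathbf{a},\mathbf{s})\rightarrow\mathbf{m}\), with \(f_{\phi}^{A}\) and \(f_{\phi}^{S}\) deterministic and the remaining maps noisy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D_DIM, V_DIM, X_DIM, S_DIM, M_DIM, T = 8, 6, 4, 3, 2, 10

# Stand-in "cognitive parameters" phi: random linear maps, not real brain dynamics.
phi = {
    "W_v": rng.normal(size=(V_DIM, D_DIM)),      # input -> subprocess states
    "W_x": rng.normal(size=(X_DIM, V_DIM)),      # subprocess states -> working-memory drive
    "W_s": rng.normal(size=(S_DIM, X_DIM * T)),  # whole trajectory -> conscious experience
    "W_m": rng.normal(size=(M_DIM, X_DIM)),      # attractor -> verbal report
}

def f_V(d):  # v = f^V_phi(d), stochastic
    return np.tanh(phi["W_v"] @ d) + 0.1 * rng.normal(size=V_DIM)

def f_X(v):  # x = f^X_phi(v): a length-T trajectory with crude attractor-like decay
    x_t, traj = np.zeros(X_DIM), []
    for _ in range(T):
        x_t = 0.5 * x_t + 0.5 * np.tanh(phi["W_x"] @ v) + 0.05 * rng.normal(size=X_DIM)
        traj.append(x_t.copy())
    return np.stack(traj)  # shape (T, X_DIM)

def f_A(x):  # a = f^A_phi(x): deterministic, the terminating state of the trajectory
    return x[-1]

def f_S(x):  # s = f^S_phi(x): deterministic function of the whole trajectory
    return np.tanh(phi["W_s"] @ x.reshape(-1))

def f_M(a):  # m = f^M_phi(a): low-dimensional "verbal report" computed from the attractor only
    return np.sign(phi["W_m"] @ a)

d = rng.normal(size=D_DIM)  # external input datum
v = f_V(d)
x = f_X(v)
a = f_A(x)
s = f_S(x)
m = f_M(a)
print("dims:", {"d": d.size, "x": x.size, "a": a.size, "s": s.size, "m": m.size})
```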
#### 3.3.1 Information loss from attractor dynamics
The relation of trajectories \(\mathbf{x}\) to a smaller subset \(\mathbf{a}\) of attractor states is a defining characteristic of attractor dynamics, whether the subset consists of a discrete number of fixed points or a set of states that trace out a complex shape such as a curved manifold. In this section, we argue that the presence of attractor dynamics decreases the richness of working memory states and conscious experience. We will identify two related effects. First, at the level of comparison between systems, the presence of attractors concentrates the probability mass of transient states onto a smaller subspace, reducing the richness of transient states. Second, we show that at the level of comparison between states, since attractor states are less rich than transient states in general and the former constitute outputs of the system, the richness of attractor states limits the richness of downstream variables.
Since dynamics are characterized by the flow of transient states towards an attractor in \(\mathcal{A}\) followed by persistent membership in \(\mathcal{A}\), and attractors \(\mathcal{A}\) typically constitute a significantly smaller subset of all possible transient states \(\bar{\mathcal{X}}\)(Khona and Fiete, 2022), the presence of attractors decreases the richness of transient states \(H_{\phi}(\bar{X})\). Since entropy is a measure of distributional spread, dynamics with larger non-attractor transient state spaces \(\bar{\mathcal{X}}\setminus\mathcal{A}\), implying more time spent in non-attractor states, yield richer distributions over transient states \(P_{\phi}(\bar{X})\); conversely, faster convergence to attractors and more time spent at attractors yields lower \(H_{\phi}(\bar{X})\). In turn, reducing the richness of transient states limits the richness of full trajectories and conscious experience (Box 3).
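Before turning to the formal argument in Box 3, a toy numerical illustration of this first effect may help. The sketch below is our own illustrative assumption, not an analysis from the text: it simulates a one-dimensional double-well system at several attraction strengths, records all visited transient states, and estimates the entropy of their discretized occupancy distribution. Stronger attraction concentrates the visited states near the two attractors and lowers the estimated \(H_{\phi}(\bar{X})\).

```python
import numpy as np

def transient_state_entropy(attraction, n_traj=2000, T=50, noise=0.05, seed=0):
    """Empirical entropy (bits) of discretized transient states for a double-well
    system x <- x - attraction * (x^3 - x) * dt + noise, with attractors near +/-1."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, size=n_traj)          # random initial states
    visited = []
    for _ in range(T):
        x = x - attraction * (x**3 - x) * 0.1 + noise * rng.normal(size=n_traj)
        visited.append(x.copy())
    states = np.concatenate(visited)
    counts, _ = np.histogram(states, bins=np.linspace(-2.5, 2.5, 101))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

for attraction in [0.0, 0.5, 2.0]:
    print(f"attraction={attraction:3.1f}  H(transient states) ~ "
          f"{transient_state_entropy(attraction):.2f} bits")
```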
**Box 3. Implications of reducing transient state richness**
Reducing the richness of transient states \(H_{\phi}(\bar{X})\) also reduces a ceiling on the richness of full trajectories \(H_{\phi}(X)\), since \(H_{\phi}(X)=H_{\phi}(X_{1}\ldots X_{T})\leq\sum_{t\in[T]}H_{\phi}(X_{t})\leq T (H_{\phi}(\bar{X})+C)\) by the addition rule of entropy, where constant \(C=\max_{t\in[T]}(H_{\phi}(X_{t})-H_{\phi}(\bar{X}))\) limits the maximum deviation of entropy between individual timesteps and the temporal average. This in turn reduces a ceiling on the richness of conscious experience as \(H_{\phi}(S)\leq H_{\phi}(X)\). The latter can be shown as follows: the joint entropy \(H_{\phi}(S,X)=H_{\phi}(X)+H_{\phi}(S|X)=H_{\phi}(X)\) since \(f_{\phi}^{S}\) is deterministic, i.e., \(H_{\phi}(S|X)=0\). \(H_{\phi}(X)=H_{\phi}(S,X)=H_{\phi}(S)+H_{\phi}(X|S)\) and Shannon entropy is non-negative, thus \(H_{\phi}(S)\leq H_{\phi}(X)\).
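The chain of inequalities in Box 3 can also be checked exactly on a small discrete example. In the sketch below, the eight "trajectory" outcomes, their probabilities, and the many-to-one "experience" function are hypothetical choices made purely for illustration; the point is only that a deterministic projection can never increase entropy.

```python
import numpy as np
from collections import defaultdict

def entropy_bits(probs):
    p = np.asarray([q for q in probs if q > 0], dtype=float)
    return float(-(p * np.log2(p)).sum())

# Toy distribution over 8 "trajectories" (labelled 0..7).
p_x = np.array([0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05])

# A deterministic, many-to-one "experience" function s = f(x).
f = {0: "s0", 1: "s0", 2: "s1", 3: "s1", 4: "s1", 5: "s2", 6: "s2", 7: "s2"}

p_s = defaultdict(float)
for x, px in enumerate(p_x):
    p_s[f[x]] += px        # marginal over experiences induced by f

H_X = entropy_bits(p_x)
H_S = entropy_bits(p_s.values())
print(f"H(X) = {H_X:.3f} bits, H(S) = {H_S:.3f} bits, H(S) <= H(X): {H_S <= H_X}")
```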
This might seem to be an artefact of the Shannon approach, which directly concerns features of the distribution. However, the same reasoning applies under Kolmogorov's formalism if the probability distribution is known, because loosely speaking, knowing the distribution gives the encoder a short-cut: expected Kolmogorov complexity \(\mathbb{E}_{p_{\phi}(\bar{\mathbf{x}})}K(\bar{\mathbf{x}}|p_{\phi})\) is equivalent to entropy \(H_{\phi}(\bar{X})\) up to an additive constant if the distribution is given (Eq. (6)). Intuitively, this is because the shortest lossless descriptor of \(\bar{\mathbf{x}}\), given knowledge of the distribution \(P_{\phi}(\bar{X})\) and thus support \(\bar{\mathcal{X}}\), has length \(-\log p(\bar{\mathbf{x}})\) under Shannon's noiseless coding theorem (Grunwald and Vitanyi, 2004). Given knowledge of \(P_{\phi}(\bar{X})\), \(-\log p(\bar{\mathbf{x}})\) bits are all that is additionally needed to determine the state using a descriptionally simple (but not necessarily computationally short) computer program.
Thus far we have described how the presence of attractors can decrease the richness of transient states overall, i.e., as a matter of comparing between systems (e.g., two brains). We turn now to a second way in which attractors reduce richness, as a matter of comparison between states in a given system.
Global Workspace Theory postulates that the access of representations from working memory by diverse processes across the brain depends on the representations being _amplified and maintained over a sufficient duration_, for instance for a minimum of approximately 100ms (Dehaene and Naccache, 2001). In the language of the attractor framework, this amounts to the claim that the variable released to downstream processes such as verbal-behavioral reporting and long-term memory is \(A\), not \(X\). Crucially, attractor states are strictly less rich than trajectory states, \(H_{\phi}(A)<H_{\phi}(X)\), as explained in Box 4. Thus selective release of attractor working memory states to downstream processing functions such as \(f_{\phi}^{M}\) implements an information bottleneck that limits the richness of downstream inputs. This constitutes an important source of ineffability, in which our in-the-moment experiences \(S\) are richer than our later recollections: richness of experience is upper bounded by the richness of trajectories (i.e., \(H_{\phi}(X)\geq H_{\phi}(S)\), Box 3), so the richer the trajectories, the higher the ceiling on information loss from conscious experience to the attractor state and downstream variables. This will be relevant to our discussion of phenomenal overflow (Block, 2007) below. In practice one would expect the magnitude of information loss from trajectory \(X\) to working memory output \(A\) to be substantial, since trajectories are sequences of brain states specifying the activity of billions of neurons, whereas working memory appears to be limited to representing a handful of items (Sperling, 1960), which gives us a clue to the magnitude of the bottleneck.
**Box 4. Richness of attractors strictly less than richness of trajectories**
As the full trajectory determines the attractor it terminates in, \(f_{\phi}^{A}\) is a deterministic function. It follows that \(H_{\phi}(A|X)=0\). We also know that \(H_{\phi}(X|A)>0\) since multiple possible trajectories terminate in the same attractor state. Our result follows from this asymmetry. By the general relationship between joint and conditional entropy we have \(H_{\phi}(X,A)=H_{\phi}(X)+H_{\phi}(A|X)\). Since \(H_{\phi}(A|X)=0\) we have \(H_{\phi}(X,A)=H_{\phi}(X)\). Re-applying the relation between joint and conditional entropy, we also have \(H_{\phi}(X,A)=H_{\phi}(A)+H_{\phi}(X|A)\). From these observations together we know that \(H_{\phi}(A)+H_{\phi}(X|A)=H_{\phi}(X)\). Since \(H_{\phi}(X|A)>0\), this yields \(H_{\phi}(A)<H_{\phi}(X)\).
#### 3.3.2 Information loss at verbal report
The ineffability of an experience is perhaps most obvious when we attempt to put it into words, due to the highly compressed nature of language (Kirby et al., 2015). From the computation graph, we can say that ineffability or information loss from conscious experience to verbal report is at least as great as information loss from conscious experience to working memory attractor (Box 5). Additionally, it would be reasonable to assume information losses \(H_{\phi}(A|M)\) and \(H_{\phi}(S|M)\) are strictly positive (i.e., \(H_{\phi}(A|M)>0\) and \(H_{\phi}(S|M)>0\)) if message \(M\) is a low-dimensional symbolic variable (such as a few words) whereas \(A\) and \(S\) are snapshots of working memory and conscious experience, since conditional entropy is strictly positive if every mapping is either one-to-one or one-to-many and there is at least one case of the latter. While it might appear that language is rich, note that \(n\) characters with an alphabet of 256 possible characters require no more than \(8n\) bits to represent, whereas neural
state is determined by the activity of up to approximately 100 billion neurons (Herculano-Houzel, 2009).
Information loss from attractor \(A\) or conscious experience \(S\) to verbal message \(M\) means the latter do not fully identify the former, and instead divide the space of attractors and conscious experiences more coarsely. For instance, saying that one "saw a fat cat" leaves out significant details about the specific attractor that generated the message, which would be difficult to communicate fully (e.g., the cat's color, size, pose, the surrounding environment, etc.). Positive information loss \(H_{\phi}(S|M)\) implies it is generally impossible to recover the conscious experience from the verbal message with certainty. Note that as long as \(H_{\phi}(A|M)\) is strictly positive, this means that conscious experience is somewhat ineffable to verbal report even if we identify conscious experience with working memory attractor states.
**Box 5. Ineffability of conscious experience to verbal report**
From the computation graph, \(S-X-A-M\) form a Markov chain (\(S\) is conditionally independent of \(A\) if given \(X\)), thus \(S-A-M\) is also a Markov chain (\(S\) is conditionally independent of \(M\) if given \(A\)). Thus \(I_{\phi}(S;A)\geq I_{\phi}(S;M)\) from the data processing inequality theorem, implying \(H_{\phi}(S)-H_{\phi}(S|A)\geq H_{\phi}(S)-H_{\phi}(S|M)\) and \(H_{\phi}(S|M)\geq H_{\phi}(S|A)\).
An additional source of ineffability is that attractors can have more complex and high-dimensional structure than simple fixed points, which is common in high-dimensional systems. Such a system would exhibit increased richness of attractor state \(H_{\phi}(A)\) and increased ineffability, as the same richness of messages \(H_{\phi}(M)\) and an increase in joint entropy \(H_{\phi}(A,M)\) implies an increase in information loss \(H_{\phi}(A|M)\), since \(H_{\phi}(A,M)=H_{\phi}(M)+H_{\phi}(A|M)\).
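A small discrete computation makes the identity \(H_{\phi}(A,M)=H_{\phi}(M)+H_{\phi}(A|M)\) and the resulting information loss tangible. In the sketch below, the number of attractors and messages, the random distribution over attractors, and the arbitrary deterministic report function are illustrative assumptions only; with many more attractors than messages, most of the attractor entropy necessarily ends up in \(H_{\phi}(A|M)\).

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
n_attractors, n_messages = 64, 4                         # many attractors, few reports
p_a = rng.dirichlet(np.ones(n_attractors))               # distribution over attractors
report = rng.integers(0, n_messages, size=n_attractors)  # deterministic f^M: a -> m

p_m = np.array([p_a[report == m].sum() for m in range(n_messages)])
H_A, H_M = entropy_bits(p_a), entropy_bits(p_m)
H_AM = H_A                    # M is a deterministic function of A, so H(A, M) = H(A)
H_A_given_M = H_AM - H_M      # H(A|M) = H(A, M) - H(M)
print(f"H(A) = {H_A:.2f} bits, H(M) = {H_M:.2f} bits, "
      f"information loss H(A|M) = {H_A_given_M:.2f} bits")
```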
#### 3.3.3 Hierarchical attractor dynamics
The brain is hierarchical in nature with many levels of spatial and temporal organization that can be studied, ranging from molecular and synaptic activity to local networks and large-scale networks (Changeux and Dehaene, 1989). Attractor dynamics appear to be ubiquitous across organizational levels and cortical regions of the brain, with processing in the neocortex hypothesized to support many attractor networks each concerned with a different type of processing (executive function and working memory, general short-term memory, long-term memory etc.) (Khona and Fiete, 2022; Rolls, 2007b, 2010). The presence of multiple weakly coupled neocortical attractor networks yields benefits including specialization and increased memory capacity, and in addition has ramifications for understanding conscious experience.
Anatomically, the inferior temporal cortex is an example of a sensory processing area that responds discriminatively to novel stimuli, whereas the prefrontal cortex is implicated in maintaining attention-modulated projections of such representations in working memory (Miller et al., 1993; Renart et al., 1999; Rolls,
2007b). Neural activity in both regions maintains persistence over time and exhibits attractor dynamics, but the content of sensory memory is akin to the state of a worker subprocess whereas the content of working memory corresponds to the state of executive control; working memory representations exhibit increased temporal stability, persisting for longer durations of up to several seconds, and provide top-down feedback to diverse regions of the brain, including the inferior temporal cortex (Bushnell et al., 1981; Chelazzi, 1999; Rolls, 2010). The ability of the prefrontal attractor to stabilize in its high firing rate attractor state is attributable to positive feedback from strong internal recurrent connections that suppress incoming stimuli (Renart et al., 1999). The need to maintain information in working memory during periods where new stimuli may be perceived exemplifies why working memory and subprocess memory necessitate distinct attractor networks (Rolls, 2007b, 2010).
The well-known Sperling experiments (Sperling, 1960) illustrate different dynamics in working memory and sensory memory processes, notably in terms of duration (a few seconds or less after the brief visual presentation of an array of letters, only letters that have been consciously attended to remain reportable) and capacity (sensory memory is capable of holding rich information pertaining to all presented items whereas the number of reportable items was limited to approximately 4). Numerous studies have demonstrated the short-lived nature of representations in sensory memory and the importance of top-down feedback, as backprojected attention appears necessary to avoid exponential decay in sensory memory representations (Cohen and Dehaene, 1998; Rolls, 2010; Rolls and Tovee, 1994; Tiitinen et al., 1994).
The limits imposed on the richness of working memory state by subprocess memory states may be illustrated in an information theoretic manner by considering that the latter is an input to the former (Box 6).
**Box 6. Richness of subprocess states constrains richness of conscious experience**
Extracting the stochasticity in \(f_{\phi}^{X}\) into an input variable \(\omega\), meaning assuming that computation of \(X\) is cast as \(X=\hat{f}_{\phi}^{X}(V,\omega)\) where \(\hat{f}_{\phi}^{X}\) is deterministic, the richness of \(X\) is bounded as \(H_{\phi}(X)\leq H_{\phi}(V_{1},\ldots,V_{N},\omega)\leq\sum_{n\in[N]}H_{\phi}(V _{n})+H_{\phi}(\omega)\) due to deterministic data processing and addition rule of entropy. That is, given a limit on the richness of noise \(H_{\phi}(\omega)\), a ceiling on the richness of working memory trajectories \(H_{\phi}(X)\) scales with the richness of the subprocess states that constitute its inputs. In turn this restricts ceilings on the richness of downstream variables such as conscious experience and working memory attractors (Box 3).
### Inter-personal ineffability
Communication channels are not limited to personal sensory processes and verbal or behavioral reporting processes but extend to channels between individuals. In this section, we will consider communication between two individuals using the model summarized in Fig. 6, in which a speaker, Alice, wishes to communicate her experience to a listener, Bob. We use the same variables as in prior sections, but denote Bob's variables using a "\(\sim\)" (e.g., \(\tilde{\mathbf{s}}\) denotes Bob's conscious experience). Again we assume a computational chain of states \(\mathbf{x}\rightarrow\mathbf{a}\rightarrow\mathbf{m}\rightarrow\tilde{\mathbf{x}}\rightarrow\tilde{\mathbf{a}}\) that elicits an experience \(\tilde{\mathbf{s}}=f_{\tilde{\phi}}(\tilde{\mathbf{x}})\) in Bob. In prior sections, we have already considered sources of ineffability up to \(H_{\phi}(\mathbf{s}|\mathbf{m})\) and \(K(\mathbf{s}|\mathbf{m},p_{\phi})\) in this chain. What remains is to identify additional sources of ineffability after the message is transmitted. In this section we use the Kolmogorov formalism, since we assume the parameters \(\phi\) of Alice's brain are not available to Bob.
#### 3.4.1 A blank-slate listener
Before considering the case in which Bob is a typical human listener, we begin with a discussion of ineffability when Bob is a blank-slate (setting \(\tilde{\phi}=\emptyset\), \(\tilde{x}=\emptyset\), \(\tilde{s}=\emptyset\), \(\tilde{a}=\emptyset\), where \(\emptyset\) denotes the null value). In this case the chain of communication ends at \(\mathbf{m}\), thus a quantity of interest is the ineffability \(K(\mathbf{s}|\mathbf{m})\) (without assuming access to Alice's cognitive parameters \(\phi\), as we did in Section 3.3.2). Intuitively what this quantity refers to is the _intrinsic_ ineffability of an experience given its message, without conditioning on extra information such as cognitive parameters \(\phi\) or \(\tilde{\phi}\). Taking an expectation to express average ineffability of conscious experience \(\mathbf{s}\), we have
Figure 6: **A model of inter-personal ineffability.** We model the communication pipeline between a speaker Alice and a listener Bob. A trajectory \(\mathbf{x}\) in Alice’s state-space of working memory follows attractor dynamics, converging near an attractor \(\mathbf{a}\). Alice then attempts to communicate the experience with a message \(\mathbf{m}\). On Bob’s end, the message is decoded and influences his working memory trajectory \(\tilde{\mathbf{x}}\), which in turn converges near an attractor \(\tilde{\mathbf{a}}\). Each step transforming one variable to another is executed by the dynamics of the subject’s brain, denoted by \(\phi\) for Alice and \(\tilde{\phi}\) for Bob. Conscious experiences \(\mathbf{s}\) and \(\tilde{\mathbf{s}}\) are functions of the subject’s cognitive parameters \(\phi\) and \(\tilde{\phi}\) and working memory trajectories \(x\) and \(\tilde{x}\) respectively, and encode the experience’s meaning. We are interested in the ineffability \(K(\mathbf{s}|\tilde{\mathbf{s}},p_{\tilde{\phi}})\) of Alice’s conscious experience \(\mathbf{s}\) given the experience \(\tilde{\mathbf{s}}\) elicited in Bob.
\(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m})\geq\mathbb{E}_ {p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{\phi})\) trivially since conditioning on more information cannot increase the length of the shortest program that outputs \(\mathbf{s}\), but it is important to note that one would additionally expect the reduction to be significant, i.e., \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m})\gg \mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{\phi})\). This is because under Shannon's noiseless coding theorem, knowledge of Alice's state distribution \(p_{\phi}\) reduces the problem of describing \(\mathbf{s}\) in the general space of high-dimensional vectors to the problem of describing its index amongst the set of all possible conscious experiences associated with \(\mathbf{m}\) for a brain parameterized by \(\phi\).
The inequality \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m})\gg \mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{\phi})\) relates to an observation at the core of the philosophical debate on ineffability: our descriptions of our experiences never seem to come close to capturing their full richness. The gap is so significant that it has at times led some philosophers, scientists, and laypersons to the dualistic conclusion that conscious experiences are intrinsically indescribable, such that there is something more to their content than physically-embodied information encoded in neural activity. Using our model, we argue that these intuitions do not necessarily imply a non-physical basis for conscious experience but may be explained by physically grounded and significant information loss that is a natural consequence of computational processing between the cognitive states underlying our experiences and the linguistic messages that we use to express them.
The increase in ineffability from not conditioning on \(p_{\phi}\) also applies to the problem of describing the attractor state, i.e., \(\mathbb{E}_{p_{\phi}(\mathbf{a}|\mathbf{m})}K(\mathbf{a}|\mathbf{m})\gg \mathbb{E}_{p_{\phi}(\mathbf{a}|\mathbf{m})}K(\mathbf{a}|\mathbf{m},p_{\phi})\), due to \(\mathbf{a}\) being a high dimensional vector that represents the output of working memory and \(\mathbf{m}\) being a relatively low dimensional vector representing a sentence. Note that ineffability of the attractor imposes a lower bound on the ineffability of the conscious experience under mild assumptions, thus if the former is large, so is the latter (Box 7). While the representation of \(\mathbf{m}\) is shared amongst individuals who speak the same language, the representation of \(\mathbf{a}\) is unique to communicator Alice. Therefore, under the Kolmogorov formalism, there is complexity or information content in \(\mathbf{a}\) that requires adopting Alice's representation space to reconstruct.
An analogy can be made with word symbols and word embeddings (or representation vectors) in deep learning models of natural language, as initially proposed by Bengio et al. (2000) and the earlier ideas on distributed representations of symbols (Hinton et al., 1986). Essentially, every word in the system is associated with an arbitrary unique integer (the symbol) as well as a learnable vector (the embedding). As shown by Bengio et al. (2000), word embeddings can be used to represent semantics in a shared space, and can therefore help a model generalize to new sentences from training data comprising only a small subset of all possible sentences. Importantly, because the word symbols are arbitrary, they contain no information about the embeddings. In a similar vein, when communicating using a message that simply conveys an attractor using a symbolic description \(\mathbf{m}=f_{\phi}^{M}(\mathbf{a})\), we lose the rich representation of \(\mathbf{a}\) that provides information on Alice's subjective experience.
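The sketch below illustrates this point with made-up numbers: a handful of hand-constructed "embeddings" in which related words get nearby vectors, alongside arbitrary integer symbols for the same words. Neither the vectors nor the IDs come from any trained model; they are only meant to show that the symbol carries no information about the geometry that encodes meaning.

```python
import numpy as np

# Hand-constructed toy embeddings (illustrative only, not from a trained model):
# semantically similar words are deliberately given nearby vectors.
embeddings = {
    "cat":    np.array([0.9, 0.1, 0.0]),
    "kitten": np.array([0.8, 0.2, 0.1]),
    "dog":    np.array([0.1, 0.9, 0.0]),
    "car":    np.array([0.0, 0.1, 0.9]),
}
# Arbitrary integer symbols for the same words (the numbering is meaningless).
symbol = {"cat": 17, "kitten": 4021, "dog": 18, "car": 4020}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for w in ["kitten", "dog", "car"]:
    print(f"cat vs {w:6s}: embedding similarity = "
          f"{cosine(embeddings['cat'], embeddings[w]):.2f}, "
          f"|symbol difference| = {abs(symbol['cat'] - symbol[w])}")
```

The nearby word ("kitten") has a distant symbol and the distant words ("dog", "car") have nearby symbols, mirroring the observation that a symbolic message \(\mathbf{m}\) drops the representational information carried by \(\mathbf{a}\).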
The significant magnitude of \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m})\) and \(\mathbb{E}_{p_{\phi}(\mathbf{a}|\mathbf{m})}K(\mathbf{a}|\mathbf{m})\) captures the blank-slate or tabula rasa case of the problem of ineffability: without assuming knowledge of the parameters of Alice's brain, experiences are highly ineffable using low dimensional descriptions such as typical verbal messages. Nonetheless, \(K(\mathbf{s}|\mathbf{m})\leq K(\mathbf{s})<\infty\); our experiences are describable _in principle_, even to a blank slate observer where no additional information is assumed. Using a numerical scale to quantify ineffability allows us to convey the dual sense in which our experiences are, to varying degrees, both communicable and ineffable.
**Box 7. Triangle inequalities for Kolmogorov complexity**
We have that \(K(\mathbf{a}|\mathbf{m}^{*})\stackrel{{+}}{{<}}K(\mathbf{a}| \mathbf{s}^{*})+K(\mathbf{s}|\mathbf{m}^{*})\)(Grunwald and Vitanyi, 2004, Theorem 4.1) where \(\mathbf{m}^{*}\) is the shortest prefix program that outputs \(\mathbf{m}\) and halts, and likewise for the other variables. Thus \(K(\mathbf{a}|\mathbf{m}^{*})-K(\mathbf{s}|\mathbf{m}^{*})\stackrel{{ +}}{{<}}K(\mathbf{a}|\mathbf{s}^{*})\). From a similar application of the triangle inequality, we have \(K(\mathbf{s}|\mathbf{m}^{*})-K(\mathbf{a}|\mathbf{m}^{*})\stackrel{{ +}}{{<}}K(\mathbf{s}|\mathbf{a}^{*})\). Assuming the complexity of conscious experience is at least as great as the complexity of the working memory attractor, \(K(\mathbf{s})\geq K(\mathbf{a})\), we obtain \(K(\mathbf{s}|\mathbf{a}^{*})\geq K(\mathbf{a}|\mathbf{s}^{*})\) from \(I(\mathbf{a}:\mathbf{s})=K(\mathbf{a})-K(\mathbf{a}|\mathbf{s}^{*})=K(\mathbf{ s})-K(\mathbf{s}|\mathbf{a}^{*})\). Therefore we have that the ceiling on relative ineffability of conscious experience \(\mathbf{s}\) is equal or higher than for working memory attractor \(\mathbf{a}\).
#### 3.4.2 A typical listener
**Cognitive similarity and effability.** In a realistic communication scenario, the cognitive parameters of listener Bob \(\tilde{\phi}\) are given by a high-dimensional vector that provides information about Alice's parameters \(\phi\) within the generic space of high-dimensional vectors, due to shared physical environment (including cultural experience) and shared evolutionary background, and thus may be used to reduce the description length of \(p_{\phi}\). Trivially, we have that the expected ineffability of Alice's conscious experience can only improve by conditioning on Bob's parameters, \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m})\geq\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{\tilde{\phi}})\). However, we also obtain that a ceiling on the disadvantage of using Bob's parameters compared to Alice's parameters scales with the difference between them (Box 8).
**Box 8. Cognitive dissimilarity and ineffability**
From Grunwald and Vitanyi (2004, Theorem 2.10) we obtain for given \(\mathbf{m},p_{\phi},p_{\tilde{\phi}}\) that \(0\leq\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}[K(\mathbf{s}|\mathbf{m},p_{ \tilde{\phi}})]-H_{\phi}(S|\mathbf{m})\leq K(p_{\phi}(\cdot|\mathbf{m})|p_{ \tilde{\phi}},\mathbf{m})+c\leq K(p_{\phi}|p_{\tilde{\phi}})+c\) where \(c\) is a constant, and \(0\leq\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}[K(\mathbf{s}|\mathbf{m},p_{ \phi})]-H_{\phi}(S|\mathbf{m})\leq K(p_{\phi}(\cdot|\mathbf{m})|p_{\phi}, \mathbf{m})+c=\epsilon+c\), where \(\epsilon\) is the negligible descriptional complexity of \(p_{\phi}(\cdot|\mathbf{m})\) given \(p_{\phi}\). Note \(H_{\phi}(S|\mathbf{m})\geq H_{\phi}(S|\mathbf{m},p_{\tilde{\phi}})\) where the underlying joint distribution includes the meta-distribution over \(p_{\tilde{\phi}}\), and likewise \(H_{\phi}(S|\mathbf{m})\geq H_{\phi}(S|\mathbf{m},p_{\phi})\). Then \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}[K(\mathbf{s}|\mathbf{m},p_{ \tilde{\phi}})]\leq H_{\phi}(S|\mathbf{m})+K(p_{\phi}|p_{\tilde{\phi}})+c\) and \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}[K(\mathbf{s}|\mathbf{m},p_{\phi})] \leq H_{\phi}(S|\mathbf{m})+\epsilon+c\). The difference between upper bounds
on ineffability is \(K(p_{\phi}|p_{\tilde{\phi}})-\epsilon\).
The mismatch between Alice and Bob's parameters, which is formalized by \(K(p_{\phi}|p_{\tilde{\phi}})\) or the minimum number of bits required to encode a program that produces Alice's parameters from Bob's, loosely corresponds to the difference between Bob and Alice's cognitive function (Box 9), which depends on the extent to which they differ in genetic biases and lived experiences. This result supports the common intuition that our experiences are more effable or communicable to people who are similar to ourselves. It also resonates with the empirical observation of greater inter-brain synchronization in related individuals (Goldstein et al., 2018) and how the brain's anatomical structure (i.e. \(\phi\) and \(\tilde{\phi}\)) affects the propensity to communicate at the inter-personal level (Dumas et al., 2012).
Consider a prototypical example of inter-personal ineffability, in which Bob has been blind from birth and Alice is attempting to convey her experience of seeing the color red. In this case, Bob's brain might be so different from Alice's that the distance between their cognitive parameters \(K(p_{\phi}|p_{\tilde{\phi}})\) is sufficiently high that the benefit of conditioning on his own parameters is negligible. In other words, since \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{ \tilde{\phi}})\leq K(p_{\phi}|p_{\tilde{\phi}})+c+H_{\phi}(S|\mathbf{m})\) (Box 8), if \(K(p_{\phi}|p_{\tilde{\phi}})\) is large, then the ceiling on \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{ \tilde{\phi}})\), the ineffability of Alice's conscious experience given the message from Bob's perspective, is also large. Intuitively, when \(K(p_{\phi}|p_{\tilde{\phi}})\) is small, the information required to communicate the functions \(f_{\phi}^{M}\), \(f_{\phi}^{A}\) and \(f_{\phi}^{S}\) in order to reconstruct \(\mathbf{s}\) from \(\mathbf{m}\) is offloaded to \(p_{\tilde{\phi}}\), which is given, thus reducing a ceiling on expected program length \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{ \tilde{\phi}})\).
The cognitive dissimilarity factor \(K(p_{\phi}|p_{\tilde{\phi}})\) is also implicated in Frank Jackson's famous thought experiment, color scientist Mary who has lived her whole life in an entirely black and white room and has learned exhaustive knowledge about the process of color perception, but nonetheless possesses a brain that is incapable of understanding the experience of color (i.e., she does not know what it is like to see red) (Alter and Walter, 2006; Jackson, 1986). Since her knowledge is exhaustive, she knows everything that anyone could possibly tell her about the experience of seeing something red. Jackson argues that when she finally sees something red, she nevertheless learns something new ("what it is like to see red"). It has been argued that since she already knew all of the physical facts, what she learned must have been a non-physical fact (Chalmers, 2010; Jackson, 1986). Many philosophers have responded to this argument, developing different conceptions of how what Mary learns might be physical after all (Alter and Walter, 2006). Our model can be understood as offering support to the physicalist account. It highlights how the ineffability \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},p_{ \tilde{\phi}})\) of Alice describing her experience of color to Mary (who is playing the role of Bob), may be explained in part by the difference in their cognitive function. In other words, the ability to empathize with another person from a verbal report of their experience is aided by cognitive similarity or ease of reconstructing
their cognitive function based on knowledge of one's own cognitive function, but simply memorizing a description of how the brain behaves in response to color does not imply one's brain is capable of responding in that manner upon being exposed to it or its reference (i.e., hearing the word "red"), and it is similarity in cognitive behavior that is implicated in \(K(p_{\phi}|p_{\tilde{\phi}})\).
The result in Box 8 states that high ineffability of Alice's experience of color to Mary implies high cognitive dissimilarity between Alice and Mary. Cognitive dissimilarity is not equivalent to knowledge inadequacy; knowing how the brain should respond does not imply being able to execute such a response. The view that Mary learns different cognitive behavior upon exposure to the color red is closest to the interpretation that she acquires a new ability (Lewis, 1990), as opposed to a new mode of presentation (Loar, 1990), a new relation of acquaintance (Conee, 1985) or a reminder of something that in principle she must have had access to all along (Dennett, 2006; Rabin, 2011).
**Box 9. Difference in functionality and difference in parameters**
For a scalar-valued function \(h\) with bounded gradient magnitude, we have \(h(\mathbf{x},\tilde{\theta})=h(\mathbf{x},\theta)+(\tilde{\theta}-\theta)^{\intercal}\nabla_{\theta}h(\mathbf{x},\theta)+\mathcal{O}(\|\tilde{\theta}-\theta\|^{2})\leq h(\mathbf{x},\theta)+\|\tilde{\theta}-\theta\|\|\nabla_{\theta}h(\mathbf{x},\theta)\|+\mathcal{O}(\|\tilde{\theta}-\theta\|^{2})\) by the Taylor expansion. Assuming first-order gradients are bounded by a positive constant \(C\), we then have \(|h(\mathbf{x},\tilde{\theta})-h(\mathbf{x},\theta)|\leq C\|\tilde{\theta}-\theta\|+\mathcal{O}(\|\tilde{\theta}-\theta\|^{2})\), i.e., an upper bound on the mismatch in functional output given parameterizations \(\theta\) and \(\tilde{\theta}\) scales with the Euclidean distance between them.
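The bound in Box 9 can be sanity-checked numerically on a simple function with a bounded gradient; the particular choice \(h(\mathbf{x},\theta)=\sin(\mathbf{x}^{\intercal}\theta)\) and the dimensions below are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def h(x, theta):
    # Scalar function with bounded gradient: grad_theta h = cos(x @ theta) * x,
    # so the gradient norm is at most ||x||.
    return float(np.sin(x @ theta))

dim = 5
x = rng.uniform(-1.0, 1.0, size=dim)
theta = rng.normal(size=dim)
C = np.linalg.norm(x)          # gradient-norm bound for this particular h

for eps in [1e-3, 1e-2, 1e-1]:
    theta_tilde = theta + eps * rng.normal(size=dim)
    mismatch = abs(h(x, theta_tilde) - h(x, theta))
    bound = C * np.linalg.norm(theta_tilde - theta)
    print(f"||theta_tilde - theta|| = {np.linalg.norm(theta_tilde - theta):.4f}  "
          f"|output mismatch| = {mismatch:.5f}  bound = {bound:.5f}  holds: {mismatch <= bound}")
```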
**Theory of mind.** Evolution has optimized human beings to be skilled at inferring the thoughts of others, an ability termed "Theory of Mind" (Graziano and Kastner, 2011; Graziano and Webb, 2015; Kelly et al., 2014; Premack and Woodruff, 1978). In our model, there is a link between theory of mind and ineffability. If cognitive functions \(f_{\tilde{\phi}}^{X}\) and \(f_{\tilde{\phi}}^{S}\) that produce Bob's conscious experience \(\tilde{\mathbf{s}}\) are optimized for decoding \(\mathbf{m}\) into Alice's conscious experience \(\mathbf{s}\), then ineffability is reduced compared to reconstructing Alice's conscious experience from the raw message, \(K(\mathbf{s}|\mathbf{m},\tilde{\phi})\geq K(\mathbf{s}|\tilde{\mathbf{s}},\tilde{\phi})\), because part of the computation of reconstructing \(\mathbf{s}\) is executed during inference of \(\tilde{\mathbf{s}}\), meaning that the smallest program from \(\tilde{\mathbf{s}}\) and \(\tilde{\phi}\) to \(\mathbf{s}\) would make use of \(\tilde{\mathbf{s}}\) to reduce its residual work, shortening the descriptive length of the program. In the extreme case, if \(K(\mathbf{s}|\tilde{\mathbf{s}},\tilde{\phi})\stackrel{{+}}{{=}}0\), then by definition Bob's cognitive function is optimal for inferring Alice's conscious experience, since no material additional information is required to determine \(\mathbf{s}\).
In turn, if Alice's parameters \(\phi\) contain information about Bob's cognitive function or parameters \(\tilde{\phi}\), she is capable of producing her message \(\mathbf{m}\) in a way that maximises effability and minimizes \(K(\mathbf{s}|\tilde{\mathbf{s}},\tilde{\phi})\), since her cognitive functionality, including verbal reporting function \(f_{\phi}^{M}\), depend on \(\phi\).
**The grounding problem.** Two individuals will generally understand the same word or sentence in different ways. For example, if a social group generally associates cats with femininity and dogs with masculinity, these associations may be inverted for someone who has a male cat and female dog. A reasonable model for ineffability would account for such differences in their experiences, regardless of whether the individuals detect such inter-personal discrepancies in their conscious thoughts or verbally express such thoughts. This is taken into account in two ways by our model. First, analogously to the case of \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\mathbf{m},\tilde{\phi})\), the ineffability of Alice's conscious experience given Bob's conscious experience \(\mathbb{E}_{p_{\phi}(\mathbf{s}|\mathbf{m})}K(\mathbf{s}|\tilde{\mathbf{s}},\tilde{\phi})\) pays a penalty that scales with \(K(p_{\phi}|p_{\tilde{\phi}})\), which measures a mismatch between \(p_{\tilde{\phi}}\) and \(p_{\phi}\) where the latter includes all the parameters in Alice's computation graph, including those that parameterize functions on input data \(D\). This grounds Alice's \(\phi\) in a representation that is shared with Bob's \(\tilde{\phi}\); intuitively, if Bob's parameters implement a function that operates differently on inputs than Alice's, they do not inform on the latter and the ceiling on ineffability is increased via \(K(p_{\phi}|p_{\tilde{\phi}})\). In other words, the objective meaning of \(\mathbf{s}\) is largely determined by how \(\phi\) relates \(\mathbf{s}\) and the input \(\mathbf{d}\): for Bob to understand \(\mathbf{m}\) well requires him to know something about that relationship in Alice's brain, which is given by her parameters \(\phi\). Second, conscious experience \(\mathbf{s}\) depends on \(\phi\), which includes Alice's long-term knowledge, therefore \(\mathbf{s}\) is capable of containing information about the associations Alice makes in the process of generating her thoughts, and thus the latter may also be included in the reconstruction target of \(K(\mathbf{s}|\tilde{\mathbf{s}},\tilde{\phi})\).
### Phenomenal and access consciousness
Having provided an information theoretic dynamical systems perspective on richness and ineffability, we now turn explicitly to the question of whether rich phenomenal experience exists and why we self-report that it does. We first highlight ambiguities in the meaning of access before contrasting two hypotheses for explaining the report of phenomenal experience.
#### 3.5.1 Effability, accessibility, reportability
Notions such as "accessible", "reportable" and perhaps "effable" are somewhat ambiguous. A benefit of our framework is that it allows us to distinguish between (at least) three distinct notions in the vicinity.
First, as we have presented it above, the notion of "effability" refers to the ability to accurately describe one variable by another, which implies it can be formalized using mutual information (Section 3.2).
Second, "access" is interpretable in two different ways. Direct access is the notion of a variable \(X\) being a direct input to a function or process \(g\), meaning \(g\) is defined on variable \(X\), whereas informational access is the notion of \(g\)'s input variable \(A\) sharing mutual information with \(X\), \(I(A;X)>0\), corresponding to \(X\) being effable with respect to \(A\). A process that has no direct access to \(X\) may still have access to its information via inputs; if \(M=g(A)\), process \(g\) has access
to information about \(X\) if \(I(A;X)>0\). Thus a variable may be effable with respect to the input and output variables of a process without being directly accessible to the process.
Third, while a reporting process is in general a process or transformation that outputs to another process, we stipulate that "reporting process" may be understood to refer specifically to those that output to processes outside the cortex, such as cortical processes that encode speech or motor movements. We may then say that a variable is directly (or informationally) reportable if it is directly (or informationally) accessible by a reporting process, where the report corresponds to the output of the reporting process.
Note that so construed, we may dissociate the three notions. Variable \(X\) is effable to variable \(M\) if they share mutual information but may not be directly accessible to the function that produces \(M\), or alternatively variable \(X\) may be directly accessible by \(g\) but not directly reportable, if \(g\) does not output to processes outside of the cortex. These distinctions will be helpful in what follows.
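A minimal discrete sketch may help pin down the difference between direct and informational access. Below, a process is defined only on \(A\) (a noisy one-bit summary of a two-bit \(X\)), so it has no direct access to \(X\), yet its input shares positive mutual information with \(X\); the particular distributions are hypothetical and chosen only to keep the computation short.

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# X is uniform on {0,1,2,3}; A is a noisy one-bit summary of X.
p_x = np.full(4, 0.25)
p_a_given_x = np.array([[0.9, 0.1],    # X in {0,1} -> A is likely 0
                        [0.9, 0.1],
                        [0.1, 0.9],    # X in {2,3} -> A is likely 1
                        [0.1, 0.9]])
p_xa = p_x[:, None] * p_a_given_x      # joint distribution p(x, a)
p_a = p_xa.sum(axis=0)

I_XA = entropy_bits(p_x) + entropy_bits(p_a) - entropy_bits(p_xa)
print(f"I(X; A) = {I_XA:.3f} bits > 0: a process defined only on A still has "
      f"informational access to X")
```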
#### 3.5.2 Existence and report of phenomenal experience
According to the Global Workspace Theory, information from diverse brain regions corresponding to a variety of perceptual or cognitive processes is selected for inclusion in the contents of a centralized processing workspace associated with working memory that coordinates and communicates with multiple subsystems, resulting in a rich space of "highly differentiated" states with "high complexity" (Tononi and Edelman, 1998).
The features of this global workspace system make it suitable as a framework for an analysis of consciousness (i.e., phenomenal consciousness), even if we do not assume that only items in workspace are conscious. The features of the global workspace system also make it a suitable target for modelling in terms of attractor dynamics, since by their nature, states amplified and sustained in a central processing workspace are attractors. Thus, our model allows for the refinement of theses concerning the relationship of consciousness to the global workspace.
Global workspace models of consciousness (Dehaene and Naccache, 2001) generally divide representations into three classes:
1. Those not computed by working memory processes (unconscious).
2. Those mobilized in the workspace via amplification and made accessible to downstream processing (conscious).
3. Those computed by working memory processes but not sufficiently amplified or attended to be released by the workspace.
The latter includes non-attractor transient states in an attractor model of working memory, which, being rich and unreportable, are a clear candidate for the basis of phenomenal experience (Dehaene and Naccache, 2001). It is a point of debate among adherents of the global workspace framework whether or not items from the third class are indeed conscious. Some say no (Cohen et al., 2016; Naccache, 2018), others say yes (Prinz, 2012).
Working memory processes are represented by function \(f_{\phi}^{X}\) in our model. By allowing \(f_{\phi}^{S}\) to be abstract, our model only specifies that \(S\) is a deterministic projection of \(\phi\), \(X\) and \(A\), and therefore is compatible with both views. If one assumes that attractor states are included in the content of consciousness and that the physical basis of transient states and attractor states in working memory is the same (i.e. they are differentiated by duration of attentional amplification, not location of neural circuitry), it would be reasonable to believe that transient states are also included in conscious awareness. If this is the case, then transient states are rich states that are consciously experienced but not directly accessible or reportable by downstream processes, while being partially verbally effable because of shared information with attractors which are directly reportable. In this paradigm, the fleeting nature of transient states impacts their direct reportability but not their inclusion in conscious experience. This is an attractive position partly because it takes phenomenology seriously--people report their conscious experience being much richer than they are able to articulate.
Regardless of whether transient states are included in the contents of consciousness, the attractor model for working memory suggests a second explanation for the self-report of phenomenal experience: an attractor state may encode information about its basin of attraction and thus information loss. For example, point attractor states may include dimensions whose values estimate the size of its local basin, which is a measure of the information loss when going from transient states in trajectories within that basin to the attractor state itself. This posits that rich experience exists, whether inside or outside the delimitation of consciousness, and its properties - such as richness - would be reportable, even if the transient states that support them are not. It is plausible that conscious awareness of abstract attributes of transient states such as richness would be advantageous, for instance when reasoning about one's uncertainty, including for the purpose of anticipating the listener's uncertainty when engaging in theory of mind to minimize ineffability (Section 3.4.2).
Our model supports an interpretation for Sperling's experiments (Sperling, 1960), where subjects briefly exposed to a grid of characters were generally able to report character identities for _any_ prompted row (containing \(\sim 4\) characters) but subsequently not other rows, in addition to being able to report that they experienced observing more characters. An account for this behavior is that upon receiving the prompt to report a specific row, working memory contents represented by attractor state **a** contained the identities of characters in the prompted row, a summary over the grid (e.g. the number of characters and their arrangement) and an estimate of the information lost by the summary, whilst information sufficient to discriminate all characters existed in the processing pipeline but in upstream sensory state **v**, from which **x** and **a** were computed. Subsequently, as attractor state **a** is directly accessible to verbal
reporting process \(f_{\phi}^{M}\), the characters in the prompted row, grid details at summary level, and the presence of information loss were directly reportable, and full grid details (identities of all characters) were not. The latter holds irrespective of where the distinction between conscious and unconscious is drawn, i.e. whether \(\mathbf{x}\), which might have contained sufficient information from \(\mathbf{v}\) to discriminate all characters, is considered conscious or not.
These arguments suggest that Block's distinction between phenomenal and access consciousness is not due to a categorical difference between fundamentally different kinds of processing (Block, 1995) but rather to a difference in the representational stage of the same information processing function (Dehaene and Naccache, 2001), and that the existence of a rich phenomenological experience that exceeds our reporting abilities (Sperling, 1960) is both justifiable and veridically reportable. Unpacking the implications of the model is an important task for future work.
## 4 Conclusion
This paper characterizes the rich and ineffable nature of conscious experience from an information theoretic perspective. It connects the ordinary notion of ineffability with mathematical formalisms of information loss, describing how the latter arises as a result of computation in cognitive processing, how it is implemented by an attractor model for working memory, and how it may be increased by the compressed nature of language as well as differences in the cognitive processing functions of individuals.
Attractor dynamics may be considered an attentional process: out of many, one or a few states are selected. This connects our work not only to Global Workspace Theory but more broadly to research in machine learning on attention mechanisms. We generally observe that attention, e.g., as introduced in deep learning by (Bahdanau et al., 2014), may be used to name any function that incurs significant information loss and is present in both artificial and biological cognitive systems, where it is--at present--commonly modelled by the family of attention-based and transformer architectures (Bahdanau et al., 2014; Chorowski et al., 2015; Devlin et al., 2018; Khan et al., 2022) and dynamical systems (Khona and Fiete, 2022; Rolls, 2007a) respectively.
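As a loose illustration of this observation (not an implementation of any of the cited architectures), the sketch below applies a softmax attention step over a set of toy candidate states and shows that, as the softmax temperature is lowered, the attention weights collapse onto a single state; in that regime the step behaves like an attractor-style selection and discards most of the information about the alternatives.

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
n_states, dim = 16, 8
states = rng.normal(size=(n_states, dim))   # toy candidate states (keys)
query = rng.normal(size=dim)

scores = states @ query
for temperature in [10.0, 1.0, 0.1]:
    logits = scores / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                # softmax attention weights
    print(f"temperature={temperature:5.1f}  entropy of attention weights = "
          f"{entropy_bits(weights):.2f} bits (uniform would be {np.log2(n_states):.2f})")
```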
In this work we use a simple model to reason about emitter-receptor communication, where the past is conditioned on implicitly via parameters \(\phi\) and stochasticity in dynamics. An alternative would be to model more complex communication patterns explicitly. We have also not considered learning objectives for function parameters. Doing so would enable a discussion on the generalization benefits of the inductive bias (Goyal and Bengio, 2022) giving rise to this information loss: intuitively, how simpler representations support robustness (Mathis and Mozer, 1994) and the successful extrapolation of behavior beyond previously seen inputs. Information bottlenecks are a popular training regularizer in machine learning (Alemi et al., 2016; Tishby et al., 2000), but are understudied in the context of biologically plausible models, despite generalization ability being a key difference between humans and current artificial learning systems. Considering the benefits of information loss may allow us to understand ineffability more deeply; not just how it arises, but why.
## Acknowledgements
The authors thank the following institutions for sources of funding: the Canada CIFAR AI Chair Program, the Canada Research Chair Program, UNIQUE, IVADO, NSERC, Samsung, and the Quebec government.
|
2302.10261 | Deep Reinforcement Learning for Cost-Effective Medical Diagnosis | Dynamic diagnosis is desirable when medical tests are costly or
time-consuming. In this work, we use reinforcement learning (RL) to find a
dynamic policy that selects lab test panels sequentially based on previous
observations, ensuring accurate testing at a low cost. Clinical diagnostic data
are often highly imbalanced; therefore, we aim to maximize the $F_1$ score
instead of the error rate. However, optimizing the non-concave $F_1$ score is
not a classic RL problem, thus invalidates standard RL methods. To remedy this
issue, we develop a reward shaping approach, leveraging properties of the $F_1$
score and duality of policy optimization, to provably find the set of all
Pareto-optimal policies for budget-constrained $F_1$ score maximization. To
handle the combinatorially complex state space, we propose a Semi-Model-based
Deep Diagnosis Policy Optimization (SM-DDPO) framework that is compatible with
end-to-end training and online learning. SM-DDPO is tested on diverse clinical
tasks: ferritin abnormality detection, sepsis mortality prediction, and acute
kidney injury diagnosis. Experiments with real-world data validate that SM-DDPO
trains efficiently and identifies all Pareto-front solutions. Across all tasks,
SM-DDPO is able to achieve state-of-the-art diagnosis accuracy (in some cases
higher than conventional methods) with up to $85\%$ reduction in testing cost.
The code is available at
[https://github.com/Zheng321/Deep-Reinforcement-Learning-for-Cost-Effective-Medical-Diagnosis]. | Zheng Yu, Yikuan Li, Joseph Kim, Kaixuan Huang, Yuan Luo, Mengdi Wang | 2023-02-20T19:47:25Z | http://arxiv.org/abs/2302.10261v2 | # Deep Reinforcement Learning for Cost-Effective Medical Diagnosis
###### Abstract
Dynamic diagnosis is desirable when medical tests are costly or time-consuming. In this work, we use reinforcement learning (RL) to find a dynamic policy that selects lab test panels sequentially based on previous observations, ensuring accurate testing at a low cost. Clinical diagnostic data are often highly imbalanced; therefore, we aim to maximize the F1 score instead of the error rate. However, optimizing the non-concave \(F_{1}\) score is not a classic RL problem, thus invalidates standard RL methods. To remedy this issue, we develop a reward shaping approach, leveraging properties of the \(F_{1}\) score and duality of policy optimization, to provably find the set of all Pareto-optimal policies for budget-constrained \(F_{1}\) score maximization. To handle the combinatorially complex state space, we propose a Semi-Model-based Deep Diagnosis Policy Optimization (SM-DDPO) framework that is compatible with end-to-end training and online learning. SM-DDPO is tested on diverse clinical tasks: ferritin abnormality detection, sepsis mortality prediction, and acute kidney injury diagnosis. Experiments with real-world data validate that SM-DDPO trains efficiently and identifies all Pareto-front solutions. Across all tasks, SM-DDPO is able to achieve state-of-the-art diagnosis accuracy (in some cases higher than conventional methods) with up to \(85\%\) reduction in testing cost. Core codes are available on GitHub1.
Footnote 1: Co-senior authors.
Contact information: Z. Yu, J. Kim, K. Huang and M. Wang:{zhengy, josephck, kaixuanh, mengdiw}@princeton.edu; Y. Li and Y. Luo:{yikuan.li, yuan.luo}@northwestern.edu
## 1 Introduction
In clinical practice, physicians usually order multiple panels of lab tests on patients, and their interpretations depend on medical knowledge and clinical experience. Each test panel is associated with a certain financial cost. For lab tests within the same panel, automated instruments will simultaneously provide all tests, and eliminating a single lab test without eliminating the entire panel may only lead to a small reduction in laboratory cost (Huck & Lewandrowski, 2014). On the other hand, concurrent lab tests have been shown to exhibit significant correlation with each other, which can be utilized to estimate unmeasured test results (Luo et al., 2016). Thus, utilizing the information redundancy among lab tests can be a promising way of optimizing which test panel to order when balancing comprehensiveness and cost-effectiveness. The efficacy of lab test panel optimization can be evaluated by assessing the predictive power of optimized test panels on supporting diagnosis and predicting patient outcomes.
We investigate the use of reinforcement learning (RL) for lab test panel optimization. Our goal is to dynamically prescribe test panels based on available observations, in order to maximize diagnosis/prediction accuracy while keeping testing at a low cost. It is quite natural that sequential test panel selection for prediction/classification can be modeled as a Markov decision process (MDP).
However, application of reinforcement learning (RL) to this problem is nontrivial due to practical considerations. One practical challenge is that clinical diagnostic data are often highly imbalanced, in some cases with <5% positive cases (Khushi et al., 2021; Li et al., 2010; Rahman & Davis, 2013). In supervised learning, this problem is typically addressed by optimizing towards accuracy metrics suitable for imbalanced data. The most prominent metric used by clinicians is the F1 score, i.e., the harmonic mean of a prediction model's recall and precision, which balances type I and type II errors in a single metric. However, the F1 score is not a simple weighted error rate - this makes designing the reward function hard for RL. Another challenge is that, for cost-sensitive diagnostics, one hopes to view this as a multi-objective optimization problem and fully characterize the cost-accuracy tradeoff, rather than finding an ad-hoc solution on the tradeoff curve. In this work, we aim to provide a tractable algorithmic framework, which provably identifies the set of all Pareto-front policies and trains efficiently. Our main contributions are summarized as follows:
* We formulate cost-sensitive diagnostics as a multi-objective policy optimization problem. The goal is to find all optimal policies on the Pareto front of the cost-accuracy tradeoff.
* To handle severely imbalanced clinical data, we focus on maximizing the \(F_{1}\) score directly. Note that \(F_{1}\) score is a nonlinear, nonconvex function of true positive and true negative rates. _It cannot be formulated as a simple sum of cumulative rewards, thus invalidating standard RL solutions._ We leverage monotonicity and hidden minimax duality of the optimization problem, showing that the Pareto set can be achieved via a reward shaping approach.
* We propose a Semi-Model-based Deep Diagnostic Policy Optimization (SM-DDPO) method for learning the Pareto solution set from clinical data. Its architecture comprises three modules and can be trained efficiently by combining pretraining, policy update, and model-based RL.
* We apply our approach to real-world clinical datasets. Experiments show that our approach exhibits good accuracy-cost trade-off on all tasks compared with baselines. Across the experiments, our method achieves state-of-the-art accuracy with up to \(80\%\) reduction in cost. Further, SM-DDPO is able to compute the set of optimal policies corresponding to the entire Pareto front. We also demonstrate that SM-DDPO applies not only to the \(F_{1}\) score but also to alternatives such as the AM score.
## 2 Related Work
Reinforcement learning (RL) has been applied in multiple clinical care settings to learn optimal treatment strategies for sepsis Komorowski et al. (2018), to customize antiepilepsy drugs for seizure control Guez et al. (2008), etc. See the survey Yu et al. (2021) for a more comprehensive summary. Guidelines on using RL for optimizing treatments in healthcare have also been proposed around the topics of variable availability, sample size for policy evaluation, and how to ensure a learned policy works prospectively as intended Gottesman et al. (2019). However, using RL for simultaneously reducing healthcare cost and improving patients' outcomes has been underexplored.
Our problem of cost-sensitive dynamic diagnosis/prediction is closely related to feature selection in supervised learning. The original static feature selection methods, where there exists a common subset of features selected for all inputs, were extensively discussed in Guyon & Elisseeff (2003); Kohavi & John (1997); Bi et al. (2003); Weston et al. (2003, 2000). Dynamic feature selection methods He et al. (2012); Contardo et al. (2016); Karayev et al. (2013) were then proposed to take the difference between inputs into account. Different subsets of features are selected for different inputs, either by defining a certain information value of the features Fahy & Yang (2019); Bilgic & Getoor (2007), or by estimating the gain that acquiring a new feature would yield Chai et al. (2004). Reinforcement learning based approaches Ji & Carin (2007); Trapeznikov & Saligrama (2013); Janisch et al. (2019); Yin et al. (2020); Li & Oliva (2021); Nam et al. (2021) have also been proposed to dynamically select features for prediction/classification. We give a more detailed discussion in Appendix A.
## 3 Pareto-Front Problem Formulation
### Markov Decision Process (MDP) Model
We model the dynamic diagnosis/prediction process for a new patient as an episodic Markov decision process (MDP) \(\mathcal{M}=(\mathcal{S},\mathcal{A},P,R,\gamma,\xi)\). As illustrated in Figure 1, the state of a patient is described by \(s=\mathbf{x}\odot M\), where \(\mathbf{x}\in\mathbb{R}^{d}\) denotes \(d\) medical tests of a patient, \(M\in\{0,1\}^{d}\) is a binary mask
indicating whether the entries of \(\mathbf{x}\) are observed or missing. Let there be \(D\) test panels, whose union is the set of all \(d\) tests. The action set \(\mathcal{A}=\{1,2,\cdots,D\}\sqcup\{\mathrm{P},\mathrm{N}\}\) contains two types of actions: observation actions and prediction/diagnosis actions. At each stage, one can either pick an observation action \(a\in\{1,2,\cdots,D\}\), meaning that test panel \(a\) is ordered and observed, which incurs a corresponding observation cost \(c(a)\); or one can terminate the episode by directly picking a prediction action \(a\in\{\mathrm{P},\mathrm{N}\}\), diagnosing the patient as the positive class (P) or the negative class (N). A penalty is generated if the diagnosis does not match the ground truth \(y\). An example of this process in sepsis mortality prediction is illustrated in Figure 1. We consider the initial distribution \(\xi\) to be patients with only the demographics panel observed, and set the discount factor \(\gamma=1\).
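To make the encoding concrete, here is a minimal sketch (assuming numpy; the panel grouping, costs, and test values are illustrative placeholders rather than values from our datasets) of the state \(s=\mathbf{x}\odot M\) and of an observation action:

```python
import numpy as np

# Hypothetical setup: 6 tests grouped into 3 panels (names, costs, values are illustrative).
d = 6
panels = {1: [0, 1], 2: [2, 3, 4], 3: [5]}    # panel id -> indices of its tests
costs = {1: 10.0, 2: 25.0, 3: 40.0}           # observation cost c(a) of each panel

x = np.array([0.7, 1.2, 0.3, 2.4, 0.9, 1.8])  # full (hidden) test results of a patient
M = np.zeros(d)                               # binary mask: nothing observed initially

def observe(a, x, M):
    """Observation action a: reveal the tests of panel a and pay its cost c(a)."""
    M = M.copy()
    M[panels[a]] = 1.0
    return x * M, M, costs[a]

state, M, paid = observe(2, x, M)   # the new state is x ⊙ M after ordering panel 2
print(state, M, paid)
```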
### Multi-Objective Policy Optimization Formulation
Let \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) be the overall policy, a map from the set of states to the set of actions. We optimize \(\pi\) towards two objectives:
\(\bullet\) **Maximizing prediction accuracy.** Due to severely imbalanced data, we choose to maximize the \(F_{1}\) score2, denoted by \(F_{1}(\pi)\), as a function of policy \(\pi\). \(F_{1}\) score measures the performance of the diagnosis by considering both type I and type II errors, which is defined as:
Footnote 2: An alternative to the F1 score is the AM metric that measures the average of true positive rate and true negative rate for imbalanced data Natarajan et al. (2018); Menon et al. (2013). Our approach directly applies to such linear metric. Please refer to Appendix F for details.
\[F_{1}(\pi)=\frac{\mathrm{TP}(\pi)}{\mathrm{TP}(\pi)+\frac{1}{2}(\mathrm{FP}( \pi)+\mathrm{FN}(\pi))}=\frac{2\mathrm{TP}(\pi)}{1+\mathrm{TP}(\pi)-\mathrm{ TN}(\pi)}\]
where \(\mathrm{TP}(\pi),\mathrm{TN}(\pi),\mathrm{FP}(\pi),\mathrm{FN}(\pi)\) are the normalized true positive, true negative, false positive and false negative rates, which sum up to 1 (the second equality above uses this normalization; a small numeric check is given after this list). Note that \(\mathrm{TP}(\pi),\mathrm{TN}(\pi),\mathrm{FP}(\pi),\mathrm{FN}(\pi)\) can all be expressed as sums of rewards/costs over the MDP's state trajectories. _However, \(F_{1}(\pi)\) is nonlinear with respect to the MDP's state-action occupancy measure, thus it cannot be expressed as any cumulative sum of rewards._
\(\bullet\) **Lowering cost**. Define the testing cost by \(\text{Cost}(\pi)=\mathbb{E}^{\pi}[\sum_{t\geq 0}\sum_{k\in[D]}c(k)\cdot \mathbf{1}\{a_{t}=k\}]\), where \(\mathbb{E}^{\pi}\) denotes expectation under policy \(\pi\), \(c(k)\) is the cost of panel \(k\).
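The rewriting of \(F_{1}\) above relies only on the normalization \(\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}=1\); the following minimal check (illustrative, with randomly generated counts) confirms the identity numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
TP, TN, FP, FN = rng.dirichlet(np.ones(4))     # random normalized confusion entries
f1_definition = TP / (TP + 0.5 * (FP + FN))
f1_rewritten = 2 * TP / (1 + TP - TN)
print(np.isclose(f1_definition, f1_rewritten))  # True
```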
In this work, we hope to solve for cost-sensitive policies under every possible testing budget. In other words, we aim to find the cost-sensitive Pareto front, defined as follows.
**Definition 3.1** (**Cost-\(F_{1}\) Pareto Front of Multi-Objective Policy Optimization)**.: _The Pareto front \(\Pi^{*}\) for cost-sensitive dynamic diagnosis/prediction is the set of policies such that_
\[\Pi^{*}=\cup_{B>0}\ \operatorname*{argmax}_{\pi}\{F_{1}(\pi)\ \text{ subject to Cost}(\pi)\leq B\} \tag{1}\]
Finding \(\Pi^{*}\) requires novel solutions beyond standard RL methods. The challenges are two-fold: **(1)** Even in the single-objective case, \(F_{1}(\pi)\) is a nonlinear, non-concave function of \(\mathrm{TP}(\pi),\mathrm{TN}(\pi)\). Although both \(\mathrm{TP}(\pi)\) and \(\mathrm{TN}(\pi)\) can be formulated as expected sums of rewards in the MDP, the \(F_{1}\) score itself is never a simple sum of rewards, so standard RL methods do not apply to maximizing such a function. **(2)** We care about finding the set of all Pareto-optimal policies when there are two conflicting objectives, rather than an ad hoc point on the trade-off curve.
Figure 1: MDP model of dynamic diagnosis: illustration of state-action transitions in one episode.
## 4 Finding Cost-\(F_{1}\) Pareto Front via Reward Shaping
The \(F_{1}\) score is a nonlinear and nonconvex function of the true positive, true negative, false positive, and false negative rates. It cannot be expressed as a sum of rewards, which invalidates existing RL methods even in the unconstrained case and creates substantial challenges.
Despite the non-concavity and nonlinearity of \(F_{1}\), we will leverage the mathematical structure of the Markov decision process and properties of the \(F_{1}\) score to solve problem (1). In this section, we provide an optimization duality analysis and show how to find solutions to problem (1) via reward shaping and solving a reshaped cumulative-reward MDP.
**Step 1: utilizing monotonicity of \(F_{1}\) score** To start with, we note that \(F_{1}\) score is monotonically increasing in both TP and TN. Assume, for any given cost budget \(B\), the optimal policy \(\pi^{*}(B)\) achieves the highest \(F_{1}\) score. Then \(\pi^{*}(B)\) is also optimal to the following program:
\[\max_{\pi}\left\{\text{TN}(\pi)\text{ subject to Cost}(\pi)\leq B,\text{TP}( \pi)\geq\text{TP}(\pi^{*}(B))\right\},\]
indicating the Pareto front of \(F_{1}\) score is a subset of
\[\Pi^{*}\subseteq\cup_{B>0,K\in[0,1]}\ \operatorname*{argmax}_{\pi}\left\{ \text{TN}(\pi)\text{ subject to Cost}(\pi)\leq B,\text{TP}(\pi)\geq K\right\}. \tag{2}\]
**Step 2: reformulation using occupancy measures** Fix any specific pair \((B,K)\). Consider the equivalent dual linear program form Zhang et al. (2020) of the above policy optimization problem (2), written in terms of the cumulative state-action occupancy measure \(\mu:\Delta_{\mathcal{A}}^{\mathcal{S}}\to\mathbb{R}_{\geq 0}^{\mathcal{S}\times\mathcal{A}}\), defined as:
\[\mu^{\pi}(s,a):=\mathbb{E}^{\pi}\left[\sum_{t\geq 0}\mathbf{1}(s_{t}=s,a_{t}=a)\right],\;\forall s\in\mathcal{S},a\in\mathcal{A}.\] Then the program (2) is equivalent to:
\[\max_{\mu}\text{TN}(\mu)\text{ subject to Cost}(\mu)\leq B,\;\text{TP}(\mu) \geq K,\;\sum_{a}\mu(s,a)=\sum_{s^{\prime},a^{\prime}\in[D]}\mu(s^{\prime},a ^{\prime})P(s|s^{\prime},a^{\prime})+\xi(s),\forall s\]
where \(\xi(\cdot)\) denotes initial distribution, and TP, TN and cost are reloaded in terms of occupancy \(\mu\) as:
\[\text{TP}(\mu)=\sum_{y=\text{P},a=\text{P}}\mu(s,a),\;\text{TN}(\mu)=\sum_{y= \text{N},a=\text{N}}\mu(s,a),\;\text{Cost}(\mu)=\sum_{k\in[D]}c(k)\cdot\sum_{ s,a=k}\mu(s,a).\]
**Step 3: utilizing hidden minimax duality** The above program can be equivalently reformulated as a max-min program:
\[\max_{\mu}\min_{\lambda\geq 0,\rho\leq 0}\text{ TN}(\mu)+\lambda\cdot(\text{TP}(\mu)-K)+\rho\cdot(\text{Cost}(\mu)-B)\] \[\text{ subject to }\sum_{a}\mu(s,a)=\sum_{s^{\prime},a^{\prime}\in[D]} \mu(s^{\prime},a^{\prime})P(s|s^{\prime},a^{\prime})+\xi(s),\forall s.\]
Note that the max-min objective is linear in \((\lambda,\rho)\) and in \(\mu\). Thus minimax duality holds, and we can swap the min and max to obtain the equivalent form:
\[\min_{\lambda\geq 0,\rho\leq 0}\max_{\mu}\text{ TN}(\mu)+\lambda\cdot(\text{TP}(\mu)-K)+\rho\cdot(\text{Cost}(\mu)-B)\] \[\text{ subject to }\sum_{a}\mu(s,a)=\sum_{s^{\prime},a^{\prime}\in[D]} \mu(s^{\prime},a^{\prime})P(s|s^{\prime},a^{\prime})+\xi(s),\forall s.\]
For any fixed pair \((\lambda,\rho)\), the inner maximization problem above can be rewritten equivalently as an unconstrained policy optimization problem: \(\max_{\pi}\ \text{TN}(\pi)+\lambda\cdot\text{TP}(\pi)+\rho\cdot\text{Cost}(\pi)\). This is finally a standard cumulative-sum MDP problem with a reshaped reward: reward \(\rho\cdot c(t)\) for the action of choosing test panel \(t\), reward \(\lambda\) for a diagnosis action that yields a true positive, and reward 1 for a diagnosis action that yields a true negative. Putting the three steps together, we can show the following theorem. The full proof can be found in Appendix E.
**Theorem 4.1**.: _The Cost-\(F_{1}\) Pareto front defined in (1) is a subset of the collection of all reward-shaped solutions, given by_
\[\Pi^{*}\subseteq\overline{\Pi}:=\cup_{\lambda\geq 0,\rho\leq 0}\ \operatorname*{argmax}_{\pi}\left\{ \text{TN}(\pi)+\lambda\cdot\text{TP}(\pi)+\rho\cdot\text{Cost}(\pi)\right\}.\]
Thus, to learn the full Pareto front, it suffices to solve a collection of unconstrained policy optimization problems with reshaped cumulative rewards.
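Concretely, for a fixed pair \((\lambda,\rho)\) the reshaped MDP only changes the per-step reward. The sketch below (the panel name, cost, and \((\lambda,\rho)\) grid are illustrative placeholders) shows the shaped reward and the grid of pairs over which the unconstrained problems would be solved by a standard RL algorithm:

```python
def shaped_reward(action, label, cost_of, lam, rho):
    """Per-step reward of the reshaped MDP in Theorem 4.1.

    action : a test panel id, or "P"/"N" for a diagnosis action
    label  : ground-truth label "P"/"N" (only used at diagnosis time)
    """
    if action not in ("P", "N"):
        return rho * cost_of[action]     # rho <= 0, so ordering a panel is penalized
    if action == "P" and label == "P":
        return lam                       # true positive
    if action == "N" and label == "N":
        return 1.0                       # true negative
    return 0.0                           # misdiagnosis yields no reward

cost_of = {"panel_1": 44.0}              # hypothetical panel cost
print(shaped_reward("panel_1", None, cost_of, lam=1.5, rho=-0.01))  # -0.44
print(shaped_reward("P", "P", cost_of, lam=1.5, rho=-0.01))         # 1.5

# each (lambda, rho) pair defines one unconstrained cumulative-reward MDP;
# sweeping a grid of pairs traces out the candidate set of Theorem 4.1
grid = [(lam, rho) for lam in (0.5, 1.0, 2.0) for rho in (-0.1, -0.01, -0.001)]
```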
## 5 Method
In this section, we propose a deep reinforcement learning pipeline for Pareto-optimal dynamic diagnosis policies. We use a modular architecture for efficient encoding of partially-observed patient information, policy optimization and reward learning.
### Architecture
Our Semi-Model-based Deep Diagnostic Policy Optimization (SM-DDPO) framework is illustrated in Figure 2. The complete dynamic testing policy \(\pi\) comprises three models: (1) a posterior state encoder for mapping partially-observed patient information to an embedding vector; (2) a state-to-diagnosis/prediction classifier which can be viewed as a reward function approximator; (3) a test panel selector that outputs an action based on the encoded state. This modular architecture makes RL tractable via a combination of pre-training, policy update and model-based RL.
### Posterior State Encoder
We borrow the idea of imputation to map the partially observed patient information to a posterior embedding vector. In this work, we consider a flow-based deep imputer named _EMFlow_3. Given the imputer \(\text{Imp}_{\theta}(\cdot)\) parameterized by \(\theta\), the RL agent observes tests \(\mathbf{x}\odot M\) and computes \(\text{Imp}_{\theta}(\mathbf{x}\odot M)\in\mathbb{R}^{d}\) as a posterior state encoding. Unlike conventional imputation Lin and Tsai (2020); Austin et al. (2021); Osman et al. (2018), our posterior state encoder must handle exponentially many possible missingness patterns. We therefore pretrain it on unlabeled augmented data, constructed by repeatedly and randomly masking entries to create additional samples.
Footnote 3: The _EMFlow_ imputation method was originally proposed in Ma and Ghosh (2021); it maps the data space to a Gaussian latent space via normalizing flows. We give a more detailed discussion of this method in Appendix C.
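The random-masking augmentation used to pretrain the posterior state encoder can be sketched as follows (a minimal illustration assuming numpy; the data matrix is random and the sketch only builds the \((\mathbf{x}\odot M, M)\) pairs, standing in for the actual EMFlow training data):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_masks(X, copies=4, p_missing=0.5):
    """Create (x ⊙ M, M) training pairs by randomly masking entries of each row."""
    pairs = []
    for x in X:
        for _ in range(copies):
            M = (rng.random(x.shape) > p_missing).astype(float)
            pairs.append((x * M, M))
    return pairs

X = rng.normal(size=(3, 5))        # a toy unlabeled data matrix
pairs = augment_with_masks(X)
print(len(pairs), pairs[0][1])     # 12 augmented samples; one example mask
```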
### End-to-end Training via Semi-Model-Based Policy Update
Training the overall policy with a standard RL algorithm alone (such as Q-learning or policy gradient) would suffer from the complex state and action spaces. To ease training, we design a semi-model-based _modular_ approach that trains the panel selector and the classifier concurrently but in different manners:
\(\bullet\) The classifier \(f_{\phi}(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{2}\), parameterized by \(\phi\), maps the posterior encoded state \(\text{Imp}_{\theta}(\mathbf{x}\odot M)\) to a probability distribution over labels. It is trained by directly minimizing the cross-entropy loss \(\ell_{c}\) on collected data4. This differs from typical classification in that the data are collected adaptively by RL, rather than sampled from a fixed source.
Footnote 4: The forms of the training objective \(\ell_{c}\) and \(\ell_{rl}\) of classifier and panel selector are given in Appendix D
\(\bullet\) The panel selector, a network module parameterized by \(\psi\), takes the following as input
\[s^{\text{emb}}_{\theta,\phi}(s)=s^{\text{emb}}_{\theta,\phi}(\mathbf{x}\odot M )=(\text{Imp}_{\theta}(\mathbf{x}\odot M),f_{\phi}(\text{Imp}_{\theta}( \mathbf{x}\odot M)),M), \tag{3}\]
Figure 2: Dynamic diagnostic policy learning via semi-model-based proximal policy optimization. The full policy \(\pi\) comprises of three modules: posterior state encoder, classifier, and panel selector.
and maps it to a probability distribution over actions. We train the panel selector using classical Proximal Policy Optimization (PPO) updates Schulman et al. (2017), which maximize a clipped surrogate objective regularized by a squared-error loss on the value function and an entropy bonus. We denote this loss function by \(\ell_{rl}\) and relegate its expanded form to Appendix D.
\(\bullet\) The full algorithm updates panel selector and classifier concurrently, given in Algorithm 1 and visualized in Figure 2. We call it "semi-model-based" because it maintains a running estimate of the classifier (which is a part of the reward model of the MDP) while making proximal policy update.
```
Initialize: imputer \(\text{Imp}_{\theta}\), classifier \(f_{\phi^{(0,0)}}\), panel/prediction selection policy \(\pi_{\psi^{(0,0)}}\), numbers of loops \(L,L_{1},L_{2}\), stepsize \(\eta\)
for \(i=0,1,\cdots,L\) do                \(\triangleright\) End-to-end training outer loop
    Construct RL environment using state embedding \(s^{\text{emb}}_{\theta,\phi^{(i,0)}}\) defined in (3)
    for \(j=1,2,\cdots,L_{1}\) do        \(\triangleright\) Policy update inner loop
        Run RL policy in environment for \(T\) timesteps and save observations in \(Q\)
        Update panel selection policy by \(\psi^{(i,j)}=\operatorname*{argmax}_{\psi}\ell_{rl}(\psi;\psi^{(i,j-1)})\)
    end for
    Set \(\psi^{(i+1,0)}=\psi^{(i,L_{1})}\)
    for \(j=1,2,\cdots,L_{2}\) do        \(\triangleright\) Classifier update inner loop
        Sample minibatch \(B_{j}\) from \(Q\)
        Update classifier by \(\phi^{(i,j)}=\phi^{(i,j-1)}-\eta\cdot\nabla_{\phi}\frac{1}{|B_{j}|}\sum_{k\in B_{j}}\ell_{c}(\phi^{(i,j-1)};(\mathbf{x}_{k},M_{k}))\)
    end for
    Set \(\phi^{(i+1,0)}=\phi^{(i,L_{2})}\)
end for
Output: Classifier \(f_{\phi^{(L+1,0)}}\), Policy \(\pi_{\psi^{(L+1,0)}}\)
```
**Algorithm 1** Semi-Model-Based Deep Diagnosis Policy Optimization (SM-DDPO)
Such a hybrid RL technique of model learning and policy updates has been used for solving complex games; a notable example is DeepMind's MuZero Schrittwieser et al. (2020). Further, we remark that Algorithm 1 performs end-to-end training and is thus compatible with on-the-fly learning. The algorithm can start with as little as zero knowledge about the prediction task, and it can keep improving on new incoming patients by querying test panels and finetuning the state encoder.
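For concreteness, the following toy, runnable skeleton mirrors the alternation of Algorithm 1 between the policy-update and classifier-update inner loops; all data, modules, and updates are illustrative stand-ins (random data, an identity-style imputer, a logistic classifier, and no actual PPO step), not the models used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6                                    # number of individual tests (toy)

def imputer(x, M):                       # placeholder for the pretrained EMFlow encoder
    return np.where(M > 0, x, 0.0)

def classifier_prob(phi, z):             # placeholder logistic classifier f_phi
    return 1.0 / (1.0 + np.exp(-z @ phi))

phi = np.zeros(d)                        # classifier parameters
replay = []                              # observations collected while running the policy

for i in range(3):                       # end-to-end outer loop of Algorithm 1
    for _ in range(100):                 # policy-update inner loop
        x, y = rng.normal(size=d), int(rng.integers(0, 2))
        M = np.zeros(d)
        M[rng.choice(d, size=2, replace=False)] = 1.0    # tests "selected" this episode
        z = imputer(x, M)
        s_emb = np.concatenate([z, [classifier_prob(phi, z)], M])  # embedding as in (3)
        # a real implementation would feed s_emb to the panel selector and
        # apply a PPO update to its parameters here
        replay.append((x, M, y))
    for _ in range(100):                 # classifier-update inner loop (SGD on cross-entropy)
        x, M, y = replay[rng.integers(len(replay))]
        z = imputer(x, M)
        p = classifier_prob(phi, z)
        phi -= 0.1 * (p - y) * z         # gradient step on the cross-entropy loss
print(np.round(phi, 2))
```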
## 6 Experiments
We test the method on three clinical tasks using real-world datasets; see Table 1 for a summary. We split each dataset into 3 parts: a training set, a validation set, and a test set. The training data is further split into two disjoint sets, one for pretraining the state encoder and the other for end-to-end RL training. The validation set is used for tuning hyperparameters5. During RL training6, we sample a random patient and a random subset of test results as initial observations at the beginning of an episode, for sufficient exploration. We evaluate the trained RL policy on patients from the test sets, initialized at a state with zero observed test results, and report the F\({}_{1}\) score and AUROC.
Footnote 5: Detailed data splitting, hyperparameter choices and searching ranges are presented in Appendix B.
### Clinical Tasks
We briefly describe three clinical tasks for our experiments. We refer to Appendix B for more details.
**Ferritin abnormality detection** Blood ferritin level can indicate abnormal iron storage, which is commonly used to diagnose iron deficiency anemia Short & Domagalski (2013) or hemochromatosis
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Dataset & \# of tests & \# test panels & \# patients & \% positive class & \#training & \#validation & \#held-out testing \\ \hline Ferritin & 39 & 6 & 43,472 & 8.9\% & 32,602 & 6,522 & 4,248 \\ AKI & 19 & 4 & 23,950 & 16.5\% & 17,964 & 3,600 & 2,386 \\ Sepsis & 28 & 4 & 5,783 & 14.5\% & 4,335 & 869 & 579 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary statistics of ferritin, AKI and sepsis datasets
(iron overload) Crownover & Covey (2013). Machine learning models can predict abnormal ferritin levels using concurrent laboratory measurements routinely collected in primary care Luo et al. (2016); Kurstjens et al. (2022). These predictive models achieve promising results, e.g. around 0.90 AUC using complete blood count and C-reactive protein Kurstjens et al. (2022), and around 0.91 AUC using common lab tests Luo et al. (2016). However, both studies required the full observation of all selected predictors without taking the financial costs into consideration.
We applied our proposed models to a ferritin dataset from a tertiary care hospital (approved by the Institutional Review Board), following the steps described in Luo et al. (2016). Our dataset includes 43,472 patients, of whom 8.9% had ferritin levels below the reference range that should be considered abnormal. We aimed to predict abnormal ferritin results using concurrent lab testing results and demographic information. These lab tests were ordered through, and can be dynamically selected from, 6 lab test panels, including the basic metabolic panel (BMP, n=9, [estimated national average] cost=$36), comprehensive metabolic panel (CMP, n=16, cost=$48), basic blood count (BBC, n=10, cost=$26), complete blood count (CBC, n=20, cost=$44), transferrin saturation (TSAT, n=2, cost=$40) and Vitamin B-12 test (n=1, cost=$66).
**Acute Kidney Injury Prediction** Acute kidney injury (AKI) is commonly encountered in adults in the intensive care unit (ICU), and patients with AKI are at risk for adverse clinical outcomes such as prolonged ICU stays and hospitalization, need for renal replacement therapy, and increased mortality Kellum & Lameire (2013). AKI usually occurs over the course of a few hours to days and the efficacy of intervention greatly relies on the early identification of deterioration Kellum & Lameire (2013). Prior risk prediction models for AKI based on EHR data yielded modest performance, e.g., around 0.75 AUC using a limited set of biomarkers Perazella (2015) or a specific group of patients Sanchez-Pinto & Khemani (2016), or around 0.8 AUC using comprehensive lab panels of general adult ICU populations such as from the MIMIC dataset Zimmerman et al. (2019); Sun et al. (2019).
In this experiment, we followed steps in Zimmerman et al. (2019) to extract 23,950 ICU visits of 19,811 patients from the MIMIC-III dataset Johnson et al. (2016), among which 16.5% of patients develop AKI during their ICU stay. We aimed at predicting AKI onset within 72 hours of ICU admission using a total of 31 features including demographics, physiologic measurements, and lab testing results extracted within 24 hours of ICU admission. The lab tests were categorized into 4 panels, i.e. CBC (n=3, cost=$44), CMP (n=8, cost=$48), the arterial blood gas panel (ABG, n=2, cost=$473) and the activated partial thromboplastin time panel (APTT, n=6, cost=$26). The demographic information and physiologic measurements were collected before test panel selection, thus they are considered visible.
**Sepsis Mortality Prediction for ICU Patients** Sepsis is a life-threatening organ dysfunction, and is a leading cause of death and cost overruns in ICU patients Angus & Van der Poll (2013). Early identification of the risk of mortality in septic patients is important to evaluate the patients' status and improve their clinical outcomes Moreno et al. (2008). Most of the previous sepsis mortality prediction models use all available test results as predictors, without balancing the cost of ordering all the associated test panels Lee et al. (2020); Ding & Luo (2021); Shin et al. (2021); Moreno et al. (2008).
We followed steps in Shin et al. (2021) to collect 5,783 septic patients from the MIMIC-III dataset Johnson et al. (2016) according to the Sepsis-3 criteria Singer et al. (2016). The in-hospital mortality rate of this cohort is 14.5%. We focused on predicting in-hospital mortality for these sepsis patients using demographics information, medical histories, mechanical ventilation status, the first present lab testing results and physiologic measurements within 24 hours of ICU admission, and the Sequential Organ Failure Assessment (SOFA) score. Similarly to the setup in the AKI experiment, the lab tests were also categorized into 4 panels of CBC (n=5), CMP (n=15), ABG (n=6) and APTT (n=2). The components of the SOFA score may be based on the lab testing results in the CBC, CMP or ABG panels Singer et al. (2016). Demographic features are considered visible.
### Performance Results
We compare our method against a number of baselines that use either full/partial test results or statically/dynamically selected tests for prediction. They include logistic regression, random forest Ho (1995), XGBoost Chen & Guestrin (2016), LightGBM Ke et al. (2017), a 3-layer multi-layer perceptron, as well as an RL-based approach Janisch et al. (2019). Experimental results, such as F\({}_{1}\) score, AUROC and testing costs, are reported in Table 2. We emphasize that these baselines are incapable of handling the task of finding the Pareto front; thus, we only test them without budget constraints.
\(\bullet\)**Comparisons with baseline models using full observation of data.** The results are presented in Table 2. Across all three clinical tasks, our proposed model achieves comparable or even state-of-the-art performance while significantly reducing the financial cost. On the sepsis dataset, SM-DDPO\({}_{\text{end2end}}\) yielded better results (F\({}_{1}\)=0.562, AUROC=0.845) than the strongest baseline (F\({}_{1}\)=0.517, AUROC=0.845), while saving up to 84% in test cost. On the ferritin dataset, LightGBM (F\({}_{1}\)=0.627, AUROC=0.948) performed slightly better than our model (F\({}_{1}\)=0.624, AUROC=0.928), however at roughly 5x the testing cost. On the AKI dataset, SM-DDPO\({}_{\text{end2end}}\) (F\({}_{1}\)=0.495, AUROC=0.795) achieved results comparable to the best full-observation model, the 3-layer MLP (F\({}_{1}\)=0.494, AUROC=0.802), while reducing the testing cost from $591 to $90.
\(\bullet\)**Comparisons with other test selection strategies.** Our proposed SM-DDPO\({}_{\text{end2end}}\), which uses an RL-based dynamic selection strategy, consistently yielded better performance with lower testing cost across all three datasets, compared to the models using fixed or random selection strategies. For the fixed test selection strategy, we first tested the classification methods using the two most relevant panels: CBC and CMP. These baselines with reduced testing cost still performed much worse than our approach in both \(F_{1}\) score and AUROC. We also tested another fixed selection (FS) baseline, where we always observe, for all patients, the 2 test panels most frequently selected by our approach, while keeping the other modules the same. Our approach outperformed FS on both \(F_{1}\) score and AUROC while having a similar testing cost. The random selection (RS) baseline, which selects test panels uniformly at random, performed worse. Q-learning for classification with costly features (CWCF) Janisch et al. (2019) performed poorly on all three clinical datasets. We believe this is because the model uses the same network for selecting tests and learning rewards; for such imbalanced datasets, this may make the training unstable and difficult to optimize.
\(\bullet\)**Efficiency and accuracy of end-to-end training.** The classifier and panel selector of SM-DDPO\({}_{\text{end2end}}\) are **both trained from scratch**, using Algorithm 1. As shown in Table 2, this end-to-end training scheme gives accuracy comparable to policies that rely on a heavily pretrained classifier (SM-DDPO\({}_{\text{pretrain}}\)). If a brand-new diagnostic/predictive task is given, our algorithm can be trained without prior data/knowledge about the new disease. It can adaptively prescribe lab tests to learn the disease model and the test selection policy in an online fashion. End-to-end training is also more data-efficient and runtime-efficient.
\(\bullet\)**Interpretability.** Our algorithm is able to select test panels that are clinically relevant. For ferritin prediction, our algorithm identifies TSAT as the most important panel, which is indeed useful for detecting iron deficiency. For AKI prediction, our algorithm recommends the serum creatinine level test as an important predictor for 95% of subjects, i.e., current and past serum creatinine is indicative of future AKI, expanding its utility as a biomarker de Geus et al. (2012).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
 & \multicolumn{3}{c}{Ferritin} & \multicolumn{3}{c}{AKI} & \multicolumn{3}{c}{Sepsis} & Test Selection \\ \cline{2-10}
Models & \(F_{1}\) & AUC & Cost & \(F_{1}\) & AUC & Cost & \(F_{1}\) & AUC & Cost & Strategy \\ \hline
LR & 0.539 & 0.935 & \$290 & 0.452 & 0.797 & \$591 & 0.506 & 0.825 & \$591 & Full \\
RF & 0.605 & 0.938 & \$290 & 0.439 & 0.764 & \$591 & 0.456 & 0.801 & \$591 & Full \\
XGBoost & 0.617 & 0.938 & \$290 & 0.404 & 0.785 & \$591 & 0.431 & 0.828 & \$591 & Full \\
LightGBM & **0.627** & 0.941 & \$290 & 0.474 & 0.790 & \$591 & 0.500 & 0.844 & \$591 & Full \\
3-layer DNN & 0.616 & 0.938 & \$290 & 0.494 & 0.802 & \$591 & 0.517 & 0.845 & \$591 & Full \\
LR (2 panels) & 0.401 & 0.859 & \$92 & 0.473 & 0.797 & \$92 & 0.488 & 0.811 & \$92 & Fixed \\
RF (2 panels) & 0.504 & 0.887 & \$92 & 0.425 & 0.768 & \$92 & 0.478 & 0.828 & \$92 & Fixed \\
XGBoost (2 panels) & 0.519 & 0.895 & \$92 & 0.410 & 0.781 & \$92 & 0.459 & 0.877 & \$92 & Fixed \\
LightGBM (2 panels) & 0.571 & 0.901 & \$92 & 0.491 & 0.792 & \$92 & 0.502 & 0.864 & \$92 & Fixed \\
FS & 0.585 & 0.927 & \$74 & 0.434 & 0.787 & \$98 & 0.500 & 0.837 & \$90 & Fixed \\
RS & 0.437 & 0.845 & \$145 & 0.424 & 0.748 & \$295 & 0.473 & 0.789 & \$295 & Random \\
CWCF & 0.554 & 0.718 & \$256 & 0.283 & 0.510 & \$326 & 0.112 & 0.503 & \$301 & Dynamic \\
SM-DDPO\({}_{\text{pretrain}}\) & 0.607 & 0.925 & \$80 & **0.519** & 0.789 & \underline{\$90} & **0.567** & 0.836 & \underline{\$85} & Dynamic \\
SM-DDPO\({}_{\text{end2end}}\) & 0.624 & 0.928 & \underline{\$62} & 0.495 & 0.795 & \$97 & 0.562 & 0.845 & \$90 & Dynamic \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Model performance, measured by F\({}_{1}\) score, area under ROC (AUC), and testing cost, for three real-world clinical datasets. The tested models include logistic regression (LR), random forests (RF), gradient boosted regression trees (XGBoost Chen & Guestrin (2016) and LightGBM Ke et al. (2017)), a 3-layer multi-layer perceptron, Q-learning for classification with costly features (CWCF) Janisch et al. (2019), Random Selection (RS), and Fixed Selection (FS). All models were fine-tuned to maximize the F\({}_{1}\) score. The model that yielded the highest F\({}_{1}\) score is in bold. The model that required the least testing cost is underlined. More detailed results for this table, with more dynamic baselines and standard deviations, are reported in Appendix B.
### Training Curves
We present the training curves on the AKI dataset in Figure 3 and refer to Appendix B for further results.
\(\bullet\)**SM-DDPO learns the disease model.** In end-to-end training, the diagnostic classifier is trained from scratch. It maps any partially-observed patient state to a diagnosis/prediction. We evaluate this classifier on static data distributions, in order to eliminate the effect of dynamic test selection and focus on classification quality. Figure 3 shows that the classifier learns to make high-quality predictions during RL training, even though it is trained only on data selected by the RL algorithm.
### Cost-\(F_{1}\) Pareto Front
Figure 4 illustrates the Pareto fronts learned on all three datasets. We trained for optimal policies on 190 MDP instances specified by different value pairs of \((\lambda,\rho)\) in Theorem 4.1, and present the corresponding performance on \(F_{1}\) score (red) and AUROC (blue) evaluated on the test sets. We identify the Pareto front as the upper envelope of these solutions, which are the yellow curves in Figure 4. These results present the full tradeoff between testing cost and diagnostic/predictive accuracy. As a corollary, given any cost budget \(B\), one is able to obtain the best testing strategy with the optimal \(F_{1}\) performance directly from Figure 4. We present a zoom-in version in Appendix B.
## 7 Summary
In this work, we develop a Semi-Model-based Deep Diagnosis Policy Optimization (SM-DDPO) method to find optimal cost-sensitive dynamic policies; it achieves state-of-the-art performance on real-world clinical datasets with up to \(85\%\) reduction in testing cost.
Figure 4: Cost-\(F_{1}\) Pareto Front for maximizing \(F_{1}\)-score on Ferritin, AKI and Sepsis Datasets
Figure 3: Classifier improvement during RL training on the AKI Dataset. Accuracy of the learned classifier is evaluated on static patient distributions with (1) a random missing pattern, where we augment the test data by masking uniformly at random; (2) the missing pattern of the optimal policy’s state distribution. During the end-to-end RL training, the classifier gradually improves and has higher accuracy under the second missing pattern.
#### Acknowledgments
Mengdi Wang acknowledges the support by NSF grants DMS-1953686, IIS-2107304, CMMI-1653435, ONR grant 1006977, and [http://C3.AI](http://C3.AI).
Yuan Luo acknowledges the support by NIH grants U01TR003528 and R01LM013337.
Yikuan Li acknowledges the support by AHA grant 23PRE1010660.
|
2301.09260 | Stochastic six-vertex models, Hall-Littlewood positivity and
$t$-deformed Schensted insertions | We prove a positivity theorem for a certain family of operators defined in
terms of the stochastic six-vertex model. We explore connections of this result
with other vertex models and $t$-deformed Schensted insertions. | Konstantin Matveev | 2023-01-23T04:13:45Z | http://arxiv.org/abs/2301.09260v1 | # Stochastic Six-Vertex Models, Hall-Littlewood Positivity and \(t\)-Deformed Schensted Insertions
###### Abstract.
We prove a positivity theorem for a certain family of operators defined in terms of the stochastic six-vertex model. We explore connections of this result with other vertex models and \(t\)-deformed Schensted insertions.
## 1. Introduction
### Positivity phenomena
The main result of this paper is theorem 1.6. It belongs to the following class of theorems. Suppose \(P(h_{1},h_{2},\ldots,h_{m})\) is some polynomial with many positive and negative coefficients. Suppose each \(h_{i}\) is itself a polynomial in \(a_{1},a_{2},\ldots,a_{n}\). Then a priori there is no reason to expect that \(P\) as a polynomial in \(a_{1},a_{2},\ldots,a_{n}\) will have positive coefficients. If it happens to be the case, it might be an indication that there is some structure underlying this phenomenon. Here is a relevant example. Denote by \(\Lambda_{n}\) the commutative algebra of symmetric polynomials in \(a_{1},a_{2},\ldots,a_{n}\) over \(\mathbb{R}\). Define the \(r\)-th _complete symmetric polynomial_
\[h_{r}:=\sum_{1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{r}\leq n}a_{i_{1}}a_{i_{2} }\cdots a_{i_{r}}\in\Lambda_{n},\qquad h_{0}=1,\qquad h_{r}=0\quad\text{for $r<0$.} \tag{1.1}\]
**Proposition 1.1**.: _For a partition \(\lambda=\{\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{\ell}\}\), \(\lambda_{i}\in\mathbb{N}\), take polynomial \(P\) to be \(\det\left[h_{\lambda_{i}-i+j}\right]_{i,j=1}^{\ell}\). Then \(P\) as a polynomial in \(a_{1},a_{2},\ldots,a_{n}\) has positive coefficients._
In this case \(P\) turns out to be the Schur polynomial \(S_{\lambda}(a_{1},a_{2},\ldots,a_{n})\) due to the Jacobi-Trudi formula. It has a representation as a summation over semistandard tableaux \(\boldsymbol{\lambda}\):
\[S_{\lambda}=\sum_{\boldsymbol{\lambda}\text{ of shape }\lambda}a^{ \boldsymbol{\lambda}}=\sum_{\boldsymbol{\lambda}\text{ of shape }\lambda}\prod_{i=1}^{\infty}a_{i}^{\boldsymbol{\lambda}(i)}, \tag{1.2}\]
where \(\boldsymbol{\lambda}(i)\) is the number of entries in \(\boldsymbol{\lambda}\) equal to \(i\). See [12] for more details. So coefficients of \(P\) in this case are positive integers. Theorem 1.6 is a generalization of proposition 1.1. It comes from the following "commutative diagram" of generalizations.
\[\begin{CD}\text{Proposition 1.1}@>{t\text{-deformation}}>>\text{Proposition 1.2}\\ @VVV @VVV\\ \text{Proposition 1.4}@>{t\text{-deformation}}>>\text{Theorem 1.6}\end{CD}\]
3. Using the angle of \(t\)-deformed plactic algebra actions to find the right generalization of the summation formula (1.2) from which theorem 1.6 follows.
4. Realizing that extended vertex models provide the right framework for expressing and proving such generalization.
### Hall-Littlewood symmetric polynomials
Hall-Littlewood symmetric polynomial \(P_{\lambda}\) is a \(t\)-deformation of the Schur polynomial \(S_{\lambda}\). It becomes \(S_{\lambda}\) for \(t=0\). See [10] for more details and [11], [12] for the origin of the concept. One possible definition of \(P_{\lambda}(a_{1},a_{2},\ldots,a_{n})\) is the following. Denote by \(m_{k}(\lambda)\) the number of parts \(\lambda_{i}\), which are equal to \(k\). Then
\[P_{\lambda}:=\frac{1}{v_{\lambda}(t)}\sum_{w\in S_{n}}w\left(a_{1}^{\lambda_{1 }}a_{2}^{\lambda_{2}}\cdots a_{n}^{\lambda_{n}}\prod_{1\leq i<j\leq n}\frac{a_ {i}-ta_{j}}{a_{i}-a_{j}}\right),\quad\text{where }v_{\lambda}(t):=\prod_{k \geq 0}\prod_{i=1}^{m_{k}(\lambda)}\frac{1-t^{i}}{1-t}.\]
**Proposition 1.2**.: _For \(0\leq t<1\) all the coefficients of \(P_{\lambda}\) are non-negative._
This result follows from the following summation formula
\[P_{\lambda}=\sum_{\boldsymbol{\lambda}\text{ of shape }\lambda}\psi_{ \boldsymbol{\lambda}}(t)a^{\boldsymbol{\lambda}} \tag{1.3}\]
Here the tableau weight \(\psi_{\boldsymbol{\lambda}}(t)\) can be defined as the product \(\prod_{s\in\text{boxes of }\boldsymbol{\lambda}}(1-e(s))\), where for a box \(s\) with entry \(p\) in the \(i\)-th row and the \(j\)-th column of \(\boldsymbol{\lambda}\) we define
\[e(s):=\begin{cases}0,&\text{if }j=1\text{ or the }(j-1)\text{-st column of }\boldsymbol{\lambda}\text{ also contains }p,\\ t^{\,m_{j-1}(\mu)},&\text{otherwise, where }\mu\text{ is the shape formed by the entries of }\boldsymbol{\lambda}\text{ smaller than }p.\end{cases}\]
All product terms are clearly non-negative for \(0\leq t<1\). Proposition 1.1 follows from proposition 1.2 by substituting \(t=0\).
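As a small sanity check (assuming sympy; not part of the argument), one can verify proposition 1.2 directly from the symmetrization formula above for a small example such as \(\lambda=(2,1)\), \(n=3\), \(t=1/2\):

```python
import sympy as sp
from itertools import permutations

def hall_littlewood_P(lam, n, t):
    a = sp.symbols(f"a1:{n+1}")
    lam = list(lam) + [0] * (n - len(lam))
    v = sp.Integer(1)                     # v_lambda(t) from the definition above
    for k in set(lam):
        for i in range(1, lam.count(k) + 1):
            v *= (1 - t**i) / (1 - t)
    total = sp.Integer(0)
    for w in permutations(range(n)):      # symmetrization over S_n
        aw = [a[i] for i in w]
        term = sp.Integer(1)
        for i in range(n):
            term *= aw[i] ** lam[i]
        for i in range(n):
            for j in range(i + 1, n):
                term *= (aw[i] - t * aw[j]) / (aw[i] - aw[j])
        total += term
    return sp.cancel(total / v), a

t = sp.Rational(1, 2)
P, a = hall_littlewood_P((2, 1), 3, t)
poly = sp.Poly(sp.expand(P), *a)
print(poly.as_expr())
print(all(c >= 0 for c in poly.coeffs()))  # non-negativity, as in proposition 1.2
```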
Let \(g_{k}=(1-t)P_{(k)}\). For \(t=0\) the polynomial \(g_{k}\) becomes \(h_{k}\). It is then easy to show that \(\Lambda_{n}=\mathbb{R}[g_{1},g_{2},\ldots,g_{n}]\) and that the generating function of the \(g_{k}\) factorizes as
\[1+\sum_{k=1}^{\infty}g_{k}\alpha^{k}=\prod_{i=1}^{n}\frac{1-t\alpha a_{i}}{1- \alpha a_{i}} \tag{1.4}\]
### Plactic algebra action
_Plactic monoid_ \(Pl_{n}\) of rank \(n\) is a monoid generated by letters \(1,2,\ldots,\mathsf{n}\) modulo the _Knuth relations_:
\[xzy\equiv zxy\quad(x\leq y<z),\qquad yxz\equiv yzx\quad(x<y\leq z).\]
See [13] for more details.
Figure 1. \(\lambda=(3,2,1)\), \(n=5\). _Left:_ A tableau with weight \(\psi(t)=(1-t^{2})^{2}\). _Right:_ A tableau with weight \(\psi(t)=1-t\).
**Definition 1.3**.: \(Pl_{n}\) _acts on the set of subsets of \(\{1,2,\ldots,n\}\) via the following formula. For \(S\subseteq\{1,2,\ldots,n\}\) define_
\[\mathfrak{i}\cdot S=\begin{cases}S,\quad\text{if $i\in S$},\\ S\cup\{i\},\quad\text{if $i\notin S$ and $S\cap\{i+1,\ldots,n\}=\varnothing$},\\ S\cup\{i\}-\left\{\text{First element of $S\cap\{i+1,\ldots,n\}$}\right\},\quad \text{if $i\notin S$ and $S\cap\{i+1,\ldots,n\}\neq\varnothing$}.\end{cases} \tag{1.5}\]
_Consider the plactic algebra \(\mathbb{R}[a_{1},a_{2},\ldots,a_{n}][Pl_{n}]\) spanned by the monoid \(Pl_{n}\). By linearly extending the previous action we get that \(\mathbb{R}[a_{1},a_{2},\ldots,a_{n}][Pl_{n}]\) acts on the space of formal linear combinations of subsets of \(\{1,2,\ldots,n\}\)._
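A direct implementation of the letter action (1.5) can be sketched as follows (the word and subset in the example are arbitrary):

```python
def act(i, S):
    """Action of the letter i on a subset S of {1, ..., n}, following (1.5)."""
    S = set(S)
    if i in S:
        return S
    larger = [x for x in sorted(S) if x > i]
    if not larger:
        return S | {i}
    return (S | {i}) - {larger[0]}       # replace the smallest element of S above i

# a word acts letter by letter, rightmost letter first
S = {2, 4}
for letter in reversed([2, 3, 1]):
    S = act(letter, S)
print(sorted(S))                         # the subset obtained after acting by the word 2 3 1
```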
Consider the following commuting elements of \(\mathbb{R}[a_{1},a_{2},\ldots,a_{n}][Pl_{n}]\).
\[H_{r}:=\sum_{1\leqslant i_{1}\leqslant i_{2}\leq\cdots\leqslant i _{r}\leqslant n}(a_{i_{1}}a_{i_{2}}\cdots a_{i_{r}})\ \mathfrak{i_{1}}\cdot\mathfrak{i_{2}}\cdots\cdot\mathfrak{i_{r}}\in \mathbb{R}[a_{1},a_{2},\ldots,a_{n}][Pl_{n}],\\ H_{0}=Id,\qquad H_{r}=0\quad\text{for $r<0$}. \tag{1.6}\]
**Proposition 1.4**.: _For any partition \(\lambda\) the element \(\det\left[H_{\lambda_{i}-i+j}\right]_{i,j=1}^{\ell}\in\mathbb{R}[a_{1},a_{2},\ldots,a_{n}][Pl_{n}]\) sends any subset \(S\subseteq\{1,2,\ldots,n\}\) to a linear combination of subsets of \(\{1,2,\ldots,n\}\) with each coefficient being a polynomial in \(a_{1},a_{2},\ldots,a_{n}\) with positive coefficients._
Note that \(\mathfrak{i}\cdot\{1,2,\ldots,n\}=\{1,2,\ldots,n\}\) for any \(1\leq i\leq n\). Hence \(\det\left[H_{\lambda_{i}-i+j}\right]_{i,j=1}^{\ell}\cdot\{1,2,\ldots,n\}=S_{\lambda}(a_{1},a_{2},\ldots,a_{n})\cdot\{1,2,\ldots,n\}\). So proposition 1.1 is a special case of proposition 1.4 for \(S=\{1,2,\ldots,n\}\). The reason behind proposition 1.4 is the combinatorics of the Schensted insertion algorithm. This connection is explained in more detail in section 3.
### Stochastic six-vertex model
We will work with inhomogeneous transfer matrices of a stochastic six-vertex model. Let \(V\) be a two-dimensional real vector space spanned by elements \(\mathbf{1}\) and \(\mathbf{2}\). Given two parameters \(a\), \(t\) one can define an operator \(R=R(a,t):V^{\otimes 2}\to V^{\otimes 2}\) by
\[R(2\otimes 2)=2\otimes 2,\qquad R(2\otimes 1)=\frac{1-a}{1-ta} \cdot 2\otimes 1+\frac{(1-t)a}{1-ta}\cdot 1\otimes 2,\\ R(1\otimes 2)=\frac{1-t}{1-ta}\cdot 2\otimes 1+\frac{t(1-a)}{1-ta} \cdot 1\otimes 2,\qquad R(1\otimes 1)=1\otimes 1. \tag{1.7}\]
For \(0\leq a\leq 1\) and \(0\leq t<1\) the matrix of \(R\) with respect to basis \(\{2\otimes 2,2\otimes 1,1\otimes 2,1\otimes 1\}\) of \(V^{\otimes 2}\) is stochastic. This \(R\)-matrix gives rise to a stochastic six-vertex model with weights as depicted on Fig. 2.
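As a quick numerical illustration (a minimal sketch with arbitrary parameter values), one can assemble the \(4\times 4\) matrix of \(R(a,t)\) from (1.7) and check that the outgoing weights of each input configuration are non-negative and sum to \(1\):

```python
import numpy as np

def R_matrix(a, t):
    """Matrix of R(a,t) in the ordered basis {2⊗2, 2⊗1, 1⊗2, 1⊗1} of V⊗V."""
    b1 = (1 - a) / (1 - t * a)        # 2⊗1 -> 2⊗1
    c1 = (1 - t) * a / (1 - t * a)    # 2⊗1 -> 1⊗2
    b2 = (1 - t) / (1 - t * a)        # 1⊗2 -> 2⊗1
    c2 = t * (1 - a) / (1 - t * a)    # 1⊗2 -> 1⊗2
    return np.array([
        [1.0, 0.0, 0.0, 0.0],
        [0.0, b1,  c1,  0.0],
        [0.0, b2,  c2,  0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

R = R_matrix(0.3, 0.5)
print(R.sum(axis=1))         # each row sums to 1 for 0 <= a <= 1, 0 <= t < 1
print(bool((R >= 0).all()))  # all weights are non-negative
```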
For parameters \(\alpha,t,a_{1},a_{2},\ldots,a_{n}\) we use \(R\) to define a transfer operator of a six-vertex model
\[T(\alpha)=T(\alpha,t;a_{1},a_{2},\ldots,a_{n}):V^{\otimes n}\to V^{\otimes n}\]
as specified on Fig. 3. Note that there is a fixed input \(\mathbf{1}\) on the left, while the boundary condition on the right is free (it can be either \(\mathbf{1}\) or \(\mathbf{2}\)).
Figure 2. Stochastic six-vertex model with weights coming from \(R(a,t)\). Fat lines correspond to \(2\)’s, normal lines correspond to \(1\)’s.
For \(0\leq\alpha a_{1},\alpha a_{2},\ldots,\alpha a_{n}\leq 1\) and \(0\leq t<1\) the matrix of \(T(\alpha)\) with respect to basis \(\{1,2\}^{\otimes n}\) of \(V^{\otimes n}\) is stochastic. \(V^{\otimes n}=\bigoplus_{k=0}^{n}V_{n,k}\), where each subspace \(V_{n,k}\) is defined as the span of vectors \(e_{1}\otimes e_{2}\otimes\cdots\otimes e_{n}\) with exactly \(k\)\(1\)'s and \(n-k\)\(2\)'s among the \(e_{i}\)'s. Then clearly \(T(\alpha)(V_{n,k})\subset V_{n,k}\oplus V_{n,k+1}\) for \(0\leq k<n\). Operators \(T(\alpha)\) and \(T(\beta)\) commute via a standard argument of pulling an extra vertex through due to the Yang-Baxter equation for \(R\) specified on Fig. 4.
Let
\[\widetilde{T}(\alpha):=\left(\prod_{i=1}^{n}\frac{1-t\alpha a_{i}}{1-\alpha a _{i}}\right)T(\alpha). \tag{1.8}\]
Then \(\widetilde{T}(\alpha)\) is the transfer operator for the inhomogeneous six-vertex model with weights specified on Fig. 5.
Consider the power expansion \(\widetilde{T}(\alpha)=Id+\sum_{k=1}^{\infty}T_{k}\alpha^{k}\). Operators \(\widetilde{T}(\alpha)\) and \(\widetilde{T}(\beta)\) commute with each other, hence operators \(T_{k}\) and \(T_{\ell}\) commute for all positive integers \(k,\ell\).
Figure 4. Yang-Baxter equation for matrix \(R\).
Figure 5. Six-vertex model weights for \(\widetilde{T}(\alpha)\).
**Definition 1.5**.: _Define representation \(\Theta:\Lambda_{n}\to End(V^{\otimes n})\) by \(\Theta(g_{k})=T_{k}\) for \(1\leq k\leq n\). We will later check that this equality also holds for \(k>n\)._
### Statement of the main result
Our main result is the following
**Theorem 1.6**.: _For \(0\leq t<1\) and any partition \(\lambda\) all matrix elements of \(\Theta(P_{\lambda})\) with respect to basis \(\{1,2\}^{\otimes n}\) are polynomials in \(a_{1},a_{2},\ldots,a_{n}\) with non-negative coefficients._
**Corollary 1.7**.: _For \(a_{1},a_{2},\ldots,a_{n}>0\) and any partition \(\lambda\) with \(\lambda_{1}^{\prime}\leq n\) the matrix of \(\frac{\Theta(P_{\lambda})}{P_{\lambda}(a_{1},a_{2},\ldots,a_{n})}\) with respect to basis \(\{1,2\}^{\otimes n}\) is stochastic._
Proof of Corollary 1.7.: Take \(v\in\{1,2\}^{\otimes n}\). Then the sum of basis coefficients of \(T(\alpha)(v)\) is \(1\), since the weights of \(T(\alpha)\) are stochastic. So the sum of basis coefficients of \(\widetilde{T}(\alpha)(v)\) is \(\prod_{i=1}^{n}\frac{1-t\alpha a_{i}}{1-\alpha a_{i}}=1+\sum_{k=1}^{\infty}g_{k}(a_{1},a_{2},\ldots,a_{n})\alpha^{k}\) due to (1.4). So the sum of basis coefficients of \(T_{k}(v)\) is \(g_{k}(a_{1},a_{2},\ldots,a_{n})\), and hence the sum of basis coefficients of \(\Theta(P_{\lambda})(v)\) is \(P_{\lambda}(a_{1},a_{2},\ldots,a_{n})\). Conditions \(\lambda_{1}^{\prime}\leq n\) and \(0\leq t<1\) guarantee that \(P_{\lambda}(a_{1},a_{2},\ldots,a_{n})>0\) due to formula (1.3), so we can divide by it. Together with the non-negativity of the matrix elements provided by theorem 1.6, this gives stochasticity.
To relate theorem 1.6 to proposition 1.4 associate \(v_{1}\otimes v_{2}\otimes\cdots\otimes v_{n}\) to \(S\subseteq\{1,2,\ldots,n\}\) by including \(1\leq i\leq n\) in \(S\) if and only if \(v_{i}=1\). Then one can check that for \(t=0\) operator \(\widetilde{T}(\alpha)\) turns into the action by \(Id+\sum_{k=1}^{\infty}\alpha^{k}H_{k}\). Hence \(T_{k}^{t=0}=H_{k}\) and proposition 1.4 becomes a special case of theorem 1.6. See also [1].
### Paper outline
In section 2 we explore connection with a problem of Kerov on classifying homomorphisms from the algebra of symmetric functions to \(\mathbb{R}\) with non-negative values on Macdonald functions. In section 3 we recall the background on plactic algebra, Schensted's insertions and prove proposition 1.4. In section 4 we explore \(t\)-deformations of Schensted insertions and introduce extended vertex models into the picture. In section 5 we prove theorem 1.6.
## 2. Positive homomorphisms
Denote by \(\Lambda\) the algebra of symmetric power series of bounded degree (called _symmetric functions_) in countably many variables \(x_{1},x_{2},x_{3},\ldots\) over \(\mathbb{R}\). For fixed parameters \(-1<q,t<1\) algebra \(\Lambda\) admits two special linear bases of _Macdonald functions_: \(\left\{P_{\lambda}(x_{1},x_{2},x_{3},\ldots;q,t)\right\}_{\lambda\in\mathcal{P}}\) and \(\left\{Q_{\lambda}(x_{1},x_{2},x_{3},\ldots;q,t)\right\}_{\lambda\in\mathcal{P}}\). Here \(\mathcal{P}\) denotes the set of _partitions_, and \(Q_{\lambda}(q,t)=b_{\lambda}(q,t)P_{\lambda}(q,t)\) for some constant \(b_{\lambda}(q,t)>0\). See [10] for more background on Macdonald functions. In particular, the one-row Macdonald functions are
\[g_{r}:=Q_{(r)}=\sum_{r_{1},r_{2},r_{3},\ldots\geq 0:\ r_{1}+r_{2}+r_{3}+\cdots=r}\prod_{i\geq 1}\frac{(t;q)_{r_{i}}}{(q;q)_{r_{i}}}x_{i}^{r_{i}},\quad\text{where}\quad(a;q)_{k}:=\prod_{m=1}^{k}\left(1-aq^{m-1}\right) \tag{2.1}\]
is the \(q\)-_Pochhammer symbol_. Also \(e_{r}=\sum_{1\leq i_{1}<i_{2}<\cdots<i_{r}}x_{i_{1}}x_{i_{2}}\cdots x_{i_{r}}\), the \(r\)-th _elementary symmetric function_, is the same as \(P_{1^{r}}\), the one-column Macdonald function. Note that for \(q=0\) Macdonald functions \(P_{\lambda}\) become the Hall-Littlewood functions, which turn into Hall-Littlewood polynomials \(P_{\lambda}\) of subsection 1.2 after setting \(x_{i}\to a_{i}\) for \(1\leq i\leq n\) and \(x_{i}\to 0\) for \(i>n\). Then \(g_{r}\) of equality (2.1) becomes \(g_{r}\) of equality (1.4). Any element of \(\Lambda\) can be uniquely represented as a polynomial in the \(g_{r}\)'s. The following result was conjectured by S.V. Kerov in [11, Sec. 7.3] (see also [11, p. 106]) and proved by the author in [12].
**Theorem 2.1** ([12]).: _For fixed \(-1<q,t<1\) a homomorphism \(\theta:\Lambda\to\mathbb{R}\) has the property that \(\theta(P_{\lambda})\geq 0\) for any partition \(\lambda\in\mathcal{P}\) (is Macdonald-positive) if and only if it is defined by the
generating function_
\[1+\sum_{r=1}^{\infty}\theta(g_{r})z^{r}=e^{\gamma z}\cdot\prod_{i=1}^{\infty} \frac{(t\alpha_{i}z;q)_{\infty}}{(\alpha_{i}z;q)_{\infty}}\cdot\prod_{j=1}^{ \infty}\left(1+\beta_{j}z\right) \tag{2.2}\]
_for some \(\alpha_{i},\beta_{j},\gamma\geq 0\), such that \(\sum_{i=1}^{\infty}\alpha_{i}+\sum_{j=1}^{\infty}\beta_{j}<\infty\)._
For \(q=t\) both functions \(P_{\lambda}\) and \(Q_{\lambda}\) become the Schur function \(S_{\lambda}\). The corresponding special case of Theorem 2.1 is known as the _Edrei-Thoma theorem_. It was first conjectured in [10] and then proved in a series of papers [1], [12], [13] in the language of classifying infinite totally non-negative upper triangular Toeplitz matrices. It was independently discovered and proved in [14] in the context of classifying characters of the _infinite symmetric group_\(S_{\infty}\). See [1] for more details on the representation theory of the infinite symmetric group.
**Corollary 2.2**.: _Suppose \(0\leq t<1\). Homomorphism \(\theta:\Lambda_{n}\to\mathbb{R}\) has non-negative values on all Hall-Littlewood symmetric polynomials if and only if it is given by setting \(\theta(a_{i}):=\alpha_{i}\geq 0\) and restricting to \(\Lambda_{n}\)._
Proof of corollary 2.2.: \(\theta\) defined by setting \(\theta(a_{i}):=\alpha_{i}\geq 0\) is non-negative on all Hall-Littlewood polynomials due to equality (1.3). For the reverse direction first define \(\pi:\Lambda\to\Lambda_{n}\) by setting \(\pi(x_{i})=a_{i}\) for \(1\leq i\leq n\), \(\pi(x_{i})=0\) for \(i>n\) and restricting to symmetric functions. \(\pi(e_{r})=e_{r}(a_{1},a_{2},\ldots,a_{n})\) for \(1\leq r\leq n\), \(\pi(e_{r})=0\) for \(r>n\). There is a homomorphism \(w_{t,0}:\Lambda\to\Lambda\) sending the \(t\)-Whittaker function \(P_{\lambda}(t,0)\) to the Hall-Littlewood function \(Q_{\lambda^{\prime}}(0,t)=b_{\lambda^{\prime}}(0,t)P_{\lambda^{\prime}}(0,t)\) for the conjugate partition \(\lambda^{\prime}\), as well as sending \(Q_{\lambda}(t,0)\) to \(P_{\lambda^{\prime}}(0,t)\), see [15] for details. Suppose a homomorphism \(\theta:\Lambda_{n}\to\mathbb{R}\) has non-negative values on all Hall-Littlewood symmetric polynomials. Define \(\tilde{\theta}:\Lambda\to\mathbb{R}\) by \(\tilde{\theta}=\theta\circ\pi\circ w_{t,0}\). Then \(\tilde{\theta}\) takes non-negative values on all \(P_{\lambda}(t,0)\), so by theorem 2.1 it can be defined by the generating function
\[1+\sum_{r=1}^{\infty}\tilde{\theta}(g_{r}(t,0))z^{r}=e^{\gamma z}\cdot\prod_{i =1}^{\infty}\frac{1}{(\alpha_{i}z;t)_{\infty}}\cdot\prod_{j=1}^{\infty}\left( 1+\beta_{j}z\right)\]
The left hand side of this equality is \(1+\sum_{r=1}^{\infty}\tilde{\theta}(g_{r}(t,0))z^{r}=1+\sum_{r=1}^{n}\theta(e _{r}(a_{1},a_{2},\ldots,a_{n}))z^{r}\). In particular, the coefficient of \(z^{m}\) in it is \(0\) for \(m>n\). Note that we have
\[\frac{1}{(\alpha_{i}z;t)_{\infty}}=\sum_{k=0}^{\infty}\frac{\alpha_{i}^{k}z^ {k}}{(t;t)_{k}}\]
by the \(t\)-Gauss summation formula. So if either \(\gamma>0\), or at least one \(\alpha_{i}>0\), or there were more than \(n\) non-zero \(\beta_{j}\)'s, then \(1+\sum_{r=1}^{\infty}\tilde{\theta}(g_{r}(t,0))z^{r}\) would contain powers of \(z\) higher than \(n\) with strictly positive coefficients. That would be a contradiction. Hence
\[1+\sum_{r=1}^{n}\theta(e_{r}(a_{1},a_{2},\ldots,a_{n}))z^{r}=\prod_{j=1}^{n} \left(1+\beta_{j}z\right).\]
Relabel \(\beta_{j}\) as \(\alpha_{j}\). Then \(\theta(e_{r}(a_{1},a_{2},\ldots,a_{n}))=e_{r}(\alpha_{1},\alpha_{2},\ldots, \alpha_{n})\). So \(\theta\) is defined by setting \(\theta(a_{i}):=\alpha_{i}\geq 0\) and restricting to \(\Lambda_{n}\).
So theorem 1.6 can be viewed as a multidimensional generalization of the easier direction of corollary 2.2. This raises a question: what would be the right multidimensional generalization of the harder direction?
## 3. Plactic algebra and Schensted's insertions
### Signatures, tableaux and particle arrays
A _signature_1 of length \(n\geq 1\) is a non-increasing collection of integers \(\lambda=(\lambda_{1}\geq\ldots\geq\lambda_{n})\in\mathbb{Z}^{n}\). We will work with signatures which have only nonnegative parts, i.e., \(\lambda_{n}\geq 0\) (in which case they are also called _partitions_). Denote the set of all such objects by \(\mathbb{GT}_{n}^{+}\). Let also \(\mathbb{GT}^{+}:=\bigcup_{n=1}^{\infty}\mathbb{GT}_{n}^{+}\), with the understanding that we identify \(\lambda\cup 0=(\lambda_{1},\ldots,\lambda_{n},0,0,\ldots,0)\in\mathbb{GT}_{n+m}^ {+}\) (\(m\) zeros) with \(\lambda\in\mathbb{GT}_{n}^{+}\) for any \(m\geq 1\).
Footnote 1: These objects are also sometimes called _highest weights_ as they are the highest weights of irreducible representations of the unitary group \(U(n)\).
We will use two ways to depict signatures (see Fig. 6):
1. Any signature \(\lambda\in\mathbb{GT}_{n}^{+}\) can be identified with a _Young diagram_ (having at most \(n\) rows) as in [10, I.1].
2. A signature \(\lambda\in\mathbb{GT}_{n}^{+}\) can also be represented as a configuration of \(n\) particles on \(\mathbb{Z}_{\geq 0}\) (with the understanding that there can be more than one particle at a given location).
We denote by \(|\lambda|:=\sum_{i=1}^{n}\lambda_{i}\) the number of boxes in the corresponding Young diagram, and by \(\ell(\lambda)\) the number of nonzero parts in \(\lambda\) (which is finite for all \(\lambda\in\mathbb{GT}^{+}\)). For \(\mu,\lambda\in\mathbb{GT}^{+}\) we will write \(\mu\subseteq\lambda\) if \(\mu_{i}\leq\lambda_{i}\) for all \(i\in\mathbb{Z}_{\geq 0}\). In this case, the set difference of Young diagrams \(\lambda\) and \(\mu\) is denoted by \(\lambda/\mu\) and is called a _skew Young diagram_.
Two signatures \(\mu,\lambda\in\mathbb{GT}^{+}\) are said to _interlace_ if one can pad them with zeros such that \(\mu\in\mathbb{GT}_{n-1}^{+}\) and \(\lambda\in\mathbb{GT}_{n}^{+}\) for some \(n\), and
\[\lambda_{1}\geq\mu_{1}\geq\lambda_{2}\geq\mu_{2}\geq\ldots\geq\lambda_{n-1} \geq\mu_{n-1}\geq\lambda_{n}. \tag{3.1}\]
In terms of Young diagrams, this means that \(\lambda\) is obtained from \(\mu\) by adding a _horizontal strip_ (or, equivalently, that _the skew diagram \(\lambda/\mu\) is a horizontal strip_ which is, by definition, a skew Young diagram having at most one box in each vertical column), and we denote this by \(\mu\prec_{\mathsf{h}}\lambda\).
Let \(\lambda^{\prime}\) denote the transposition of the Young diagram \(\lambda\). For the diagram on Fig. 6, we have \(\lambda^{\prime}=(4,4,3,1,1)\). If \(\lambda/\mu\) is a horizontal strip, then \(\lambda^{\prime}/\mu^{\prime}\) is called a _vertical strip_. We will denote the corresponding relation by \(\mu^{\prime}\prec_{\mathsf{v}}\lambda^{\prime}\).
**Definition 3.1**.: _A Gelfand-Tsetlin array (sometimes also referred to as scheme, or pattern) of depth \(n\) is a sequence of interlacing signatures \(\boldsymbol{\lambda}=(\lambda^{(1)}\prec_{\mathsf{h}}\lambda^{(2)}\prec_{ \mathsf{h}}\ldots\prec_{\mathsf{h}}\lambda^{(n)})\), where \(\lambda^{(j)}\in\mathbb{GT}_{j}^{+}\)._
Such sequences first appeared in connection with representation theory of unitary groups [12].2 We will depict sequences \(\boldsymbol{\lambda}\) as interlacing integer arrays, and also associate to them configurations of particles \(\{(\lambda_{j}^{(k)},k)\colon k=1,\ldots,n,\ j=1,\ldots,k\}\) on \(n\) horizontal copies of \(\mathbb{Z}_{\geq 0}\). See Fig. 7.
Footnote 2: This justifies the notation “\(\mathbb{GT}\)” we are using. Tsetlin and Cetlin are different English spellings of the same last name.
Let us denote the set of all interlacing arrays \(\boldsymbol{\lambda}\) of depth \(n\) with top level \(\lambda\) by \(\mathbb{GT}^{(n)}(\lambda)\). Let \(\mathbb{GT}^{(n)}:=\bigcup_{\lambda\in\mathbb{GT}_{n}^{+}}\mathbb{GT}^{(n)}(\lambda)\).
Figure 6. Young diagram \(\lambda=(5,3,3,2)\in\mathbb{GT}_{4}^{+}\), and the corresponding particle configuration. Note that there are two particles at location \(3\).
**Definition 3.2**.: _A semistandard Young tableau of shape \(\lambda\) is a filling in the boxes of the Young diagram \(\lambda\) with positive integers, which increase weakly along rows, and strictly down columns._
There is a natural correspondence between the Gelfand-Tsetlin arrays of depth \(n\) and the semistandard Young tableaux filled with numbers from \(1\) to \(n\). Indeed, given \(\boldsymbol{\lambda}\in\mathbb{GT}^{(n)}\) we can produce a semistandard Young tableau of shape \(\lambda^{(n)}\) by filling \(\lambda^{(j)}/\lambda^{(j-1)}\) with numbers equal to \(j\), see Fig. 8. Thus, by a slight abuse of notation we will also use \(\mathbb{GT}^{(n)}(\lambda)\) to denote the set of semistandard Young tableaux of shape \(\lambda\) filled with numbers from \(1\) to \(n\).
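This correspondence is straightforward to implement (a short illustrative sketch; the example array is arbitrary):

```python
def gt_array_to_tableau(levels):
    """Convert interlacing partitions (lambda^(1), ..., lambda^(n)) to an SSYT.

    The boxes of lambda^(j) / lambda^(j-1) are filled with the entry j.
    """
    shape = list(levels[-1])
    T = [[None] * shape[r] for r in range(len(shape))]
    prev = [0] * len(shape)
    for j, lam in enumerate(levels, start=1):
        lam = list(lam) + [0] * (len(shape) - len(lam))
        for r in range(len(shape)):
            for c in range(prev[r], lam[r]):
                T[r][c] = j
        prev = lam
    return T

# example: lambda^(1) = (2), lambda^(2) = (3,1), lambda^(3) = (3,2,1)
print(gt_array_to_tableau([(2,), (3, 1), (3, 2, 1)]))   # [[1, 1, 2], [2, 3], [3]]
```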
### Schensted's insertions and interacting particle systems
Schensted's row and column insertions ([1]) are combinatorial constructions serving as the building blocks of the RSK algorithms, see [10], [11]. Each insertion can be described in the language of semistandard Young tableaux as a sequence of row and column bumpings. In the language of interlacing arrays these bumpings correspond to elementary operations of deterministic long-range pulling and pushing, which involve only two consecutive levels of an array.
**Definition 3.3**.: _(Deterministic long-range pulling, Fig. 9)_
_Let \(j=2,\ldots,n\), and signatures \(\bar{\lambda},\bar{\nu}\in\mathbb{GT}^{+}_{j-1}\), \(\lambda\in\mathbb{GT}^{+}_{j}\) satisfy \(\bar{\lambda}\prec_{\mathsf{h}}\lambda\), \(\bar{\nu}=\bar{\lambda}+\bar{\mathrm{e}}_{i}\), where \(\bar{\mathrm{e}}_{i}=(0,0,\ldots,0,1,0,\ldots,0)\) (for some \(i=1,\ldots,j-1\)) is the \(i\)th basis vector of length \(j-1\). Define \(\nu\in\mathbb{GT}^{+}_{j}\) to be_
\[\nu=\mathsf{pull}(\lambda\mid\bar{\lambda}\to\bar{\nu}):=\begin{cases} \lambda+\mathrm{e}_{i},&\text{if }\bar{\lambda}_{i}=\lambda_{i};\\ \lambda+\mathrm{e}_{i+1},&\text{otherwise}.\end{cases}\]
_Here \(\mathrm{e}_{i}\) and \(\mathrm{e}_{i+1}\) are basis vectors of length \(j\)._
_In words, the particle \(\bar{\lambda}_{i}\) at level \(j-1\) which moved to the right by one generically pulls its upper left neighbor \(\lambda_{i+1}\), or pushes its upper right neighbor \(\lambda_{i}\) if the latter operation is needed to preserve the interlacing. Note that the long-range pulling mechanism does not encounter any blocking issues._
Figure 8. A semistandard Young tableau corresponding to the array on Fig. 7, right.
**Definition 3.4**.: _(Deterministic long-range pushing, Fig. 10) As in the previous definition, let \(j=2,\ldots,n\), \(\bar{\lambda},\bar{\nu}\in\mathbb{GT}_{j-1}^{+}\), \(\lambda\in\mathbb{GT}_{j}^{+}\) be such that \(\bar{\lambda}\prec_{\mathsf{h}}\lambda\) and \(\bar{\nu}=\bar{\lambda}+\bar{\mathrm{e}}_{i}\). Define \(\nu\in\mathbb{GT}_{j}^{+}\) to be_
\[\nu=\mathsf{push}(\lambda\mid\bar{\lambda}\to\bar{\nu}):=\lambda+\mathrm{e}_{ m},\qquad\text{where }m=\max\{p\colon 1\leq p\leq i\text{ and }\lambda_{p}<\bar{\lambda}_{p-1}\}.\]
_In words, the particle \(\bar{\lambda}_{i}\) at level \(j-1\) which moved to the right by one, pushes its first upper right neighbor \(\lambda_{m}\) which is not blocked (and therefore is free to move without violating the interlacing). Generically (when all particles are sufficiently far apart) \(\lambda_{m}=\lambda_{i}\), so the immediate upper right neighbor is pushed._
_Remark 3.5_ (Move donation).: It is useful to equivalently interpret the mechanism of Definition 3.4 in a slightly different way. Namely, let us say that when the particle \(\bar{\lambda}_{i}\) at level \(j-1\) moves, it gives the particle \(\lambda_{i}\) at level \(j\) a _moving impulse_. If \(\lambda_{i}\) is blocked (i.e., if \(\lambda_{i}=\bar{\lambda}_{i-1}\)), this moving impulse is _donated_ to the next particle \(\lambda_{i-1}\) to the right of \(\lambda_{i}\). If \(\lambda_{i-1}\) is blocked, too, then the impulse is donated further, and so on. Note that the particle \(\lambda_{1}\) cannot be blocked, so this moving impulse will always result in an actual move.
**Definition 3.6**.: _The Schensted's row insertion is an algorithm that takes a semistandard tableau \(\boldsymbol{\lambda}\in\mathbb{GT}^{(n)}\), and an integer \(1\leq x\leq n\), and constructs a new tableau \(\boldsymbol{\lambda}\gets x\) according to the following procedure:_
\(\bullet\) _If \(x\) is at least as large as all the entries in the first row of \(\boldsymbol{\lambda}\), add \(x\) in a new box to the end of the first row. In this case the algorithm terminates._
\(\bullet\) _Otherwise find the leftmost entry \(y\) in the first row that is strictly larger than \(x\) and replace it by \(x\)._
\(\bullet\) _Repeat the same steps with \(y\) and the second row, then with the replaced entry \(z\) and the third row,..., and so on until the replaced entry can be put at the end of the next row, possibly by forming a new row of one entry. Then the algorithm terminates._
Figure 9. An example of pulling mechanism for \(i=2\) at levels \(2\) and \(3\) (i.e., \(j=3\)). Left: \(\bar{\lambda}_{2}=\lambda_{2}\), which forces the pushing of the upper right neighbor. Right: in the generic situation \(\bar{\lambda}_{2}<\lambda_{2}\) the upper left neighbor is pulled.
Figure 10. An example of pushing mechanism for \(i=3\) at levels \(4\) and \(5\) (i.e., \(j=5\)). Since the particles \(\lambda_{3}=\bar{\lambda}_{2}\) and \(\lambda_{2}=\bar{\lambda}_{1}\) are blocked, the first particle which can be pushed is \(\lambda_{1}\).
In terms of arrays and long-range pulling we can describe this insertion in the following way:
\(\bullet\) Levels \(\lambda^{(1)},\ldots,\lambda^{(x-1)}\) remain unchanged.
\(\bullet\) The rightmost particle on level \(x\) moves by \(1\) to the right, i.e. \(\lambda^{(x)}\to\lambda^{(x)}+\bar{\mathrm{e}}_{1}\).
\(\bullet\) Then pull operations are consecutively performed for \(j=x+1,\ldots,n\).
In words, this push-pull chain of movements starts on the right edge of the array and progresses upwards until it reaches the top level. This is the same as saying that the shape of a tableau is augmented by one cell after row insertion of a single entry. One can also row-insert a word \(X=x_{1}x_{2}\ldots x_{\ell}\) into a tableau \(\boldsymbol{\lambda}\) by consecutively inserting its entries one by one:
\[\boldsymbol{\lambda}\gets X:=\boldsymbol{\lambda}\gets x_{1} \gets x_{2}\leftarrow\cdots\gets x_{\ell}\]
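The bumping procedure of Definition 3.6 is short enough to state as code. The following is a minimal Python sketch (ours, not part of the text), storing a tableau as a list of weakly increasing rows:

```
from bisect import bisect_right

def row_insert(rows, x):
    """Schensted row insertion of x into a tableau given as a list of
    weakly increasing rows; returns a new tableau."""
    rows = [list(r) for r in rows]
    for r in rows:
        k = bisect_right(r, x)   # index of the leftmost entry strictly larger than x
        if k == len(r):          # x is at least as large as every entry in this row
            r.append(x)
            return rows
        r[k], x = x, r[k]        # replace that entry and bump it to the next row
    rows.append([x])             # the bumped entry starts a new row
    return rows

# Example: row_insert([[1, 2, 2], [3]], 1) returns [[1, 1, 2], [2], [3]].
```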
**Definition 3.7**.: _The Schensted's column insertion is an algorithm that takes a semistandard tableau \(\boldsymbol{\lambda}\in\mathbb{GT}^{(n)}\), and an integer \(1\leq x\leq n\), and constructs a new tableau \(x\to\boldsymbol{\lambda}\) according to the following procedure:_
\(\bullet\) _If \(x\) is strictly larger than all the entries in the first column of \(\boldsymbol{\lambda}\), add \(x\) in a new box at the bottom of the first column. In this case the algorithm terminates._
\(\bullet\) _Otherwise find the topmost entry \(y\) in the first column that is at least as large as \(x\) and replace it by \(x\)._
\(\bullet\) _Repeat the same steps with \(y\) and the second column, then with the replaced entry \(z\) and the third column,..., and so on until the replaced entry can be put at the bottom of the next column, possibly by forming a column of one entry. Then the algorithm terminates._
In terms of arrays and long-range pushing we can describe this insertion in the following way:
\(\bullet\) Levels \(\lambda^{(1)},\ldots,\lambda^{(x-1)}\) remain unchanged.
\(\bullet\) The leftmost particle on level \(x\) moves by \(1\) to the right, i.e. \(\lambda^{(x)}\to\lambda^{(x)}+\bar{\mathrm{e}}_{x}\).
\(\bullet\) Then push operations are consecutively performed for \(j=x+1,\ldots,n\).
As in the case of the row insertion, on each of the levels from \(x\)-th to \(n\)-th precisely one particle moves to the right by \(1\). The sequence of moves progresses upwards and to the right until it reaches the top level. One can also column-insert a word \(X=x_{1}x_{2}\ldots x_{\ell}\) into a tableau \(\boldsymbol{\lambda}\) by consecutively inserting its entries one by one in reverse order:
\[X\to\boldsymbol{\lambda}:=x_{1}\to x_{2}\to\cdots\to x_{\ell}\to \boldsymbol{\lambda}\]
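Column insertion admits an equally short sketch (again ours, for illustration), with the tableau stored as a list of strictly increasing columns; note that the weak and strict inequalities are swapped relative to row insertion:

```
from bisect import bisect_left

def column_insert(cols, x):
    """Schensted column insertion of x into a tableau given as a list of
    strictly increasing columns (read top to bottom); returns a new tableau."""
    cols = [list(c) for c in cols]
    for c in cols:
        k = bisect_left(c, x)    # topmost entry that is at least as large as x
        if k == len(c):          # x is strictly larger than every entry: new box at the bottom
            c.append(x)
            return cols
        c[k], x = x, c[k]        # replace that entry and bump it to the next column
    cols.append([x])             # the bumped entry starts a new column
    return cols

# Example: column_insert([[1, 3], [2]], 2) returns [[1, 2], [2, 3]].
```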
Figure 11. An example of Schensted’s row insertion in terms of semistandard tableaux and particle arrays for \(n=5\).
### Plactic Algebra
To a semistandard tableau \(\boldsymbol{\lambda}\in\mathbb{GT}^{(n)}\) one can associate an element \(w(\boldsymbol{\lambda})\in Pl_{n}\) represented by a word obtained by reading entry letters of \(\boldsymbol{\lambda}\) first in the bottom row from left to right, then in the second row from the bottom from left to right, and so on. For instance, for a tableau on Fig. 8 the corresponding word will be \(554433322255551111344\). The following proposition explains basic connections between the plactic monoid and Schensted insertions.
**Proposition 3.8**.: _(see [10])._
1. _For every_ \(a\in Pl_{n}\) _there exists a unique tableau_ \(\boldsymbol{\lambda}\)_, such that_ \(a=w(\boldsymbol{\lambda})\)_._
2. \(w(\boldsymbol{\lambda}\gets x)=w(\boldsymbol{\lambda})x\)_._
3. \(w(x\to\boldsymbol{\lambda})=xw(\boldsymbol{\lambda})\)_._
Hence consecutively inserting \(x_{1},x_{2},\ldots,x_{r}\) via the Schensted's row insertion into a tableau \(\boldsymbol{\lambda}\) leads to the same result as multiplication of \(w(\boldsymbol{\lambda})\) by \(X=x_{1}x_{2}\cdots x_{r}\) on the right, while multiplication of \(w(\boldsymbol{\lambda})\) by \(X\) on the left amounts to the same result as consecutively inserting \(x_{r},\ldots,x_{2},x_{1}\) in \(\boldsymbol{\lambda}\) via the Schensted's column insertion.
The plactic algebra is noncommutative for \(n\geq 2\), but it contains nice families of commuting elements. More precisely, for \(\lambda\in\mathbb{GT}_{n}^{+}\) and variables \(a_{1},a_{2},\ldots,a_{n}\) define the _plactic Schur polynomial_
\[S_{\lambda}^{Pl}(a_{1}\boldsymbol{1},a_{2}\boldsymbol{2},\ldots,a_{n}\boldsymbol{n}):=\sum_{\boldsymbol{\lambda}\in\mathbb{GT}^{(n)}(\lambda)}w(\boldsymbol{\lambda})\cdot a_{1}^{|\lambda^{(1)}|}a_{2}^{|\lambda^{(2)}|-|\lambda^{(1)}|}\cdots a_{n}^{|\lambda^{(n)}|-|\lambda^{(n-1)}|}. \tag{3.2}\]
**Proposition 3.9**.: _(see [10]). \(S^{Pl}_{\lambda}(a_{1}\boldsymbol{1},\ldots,a_{n}\boldsymbol{n})\) and \(S^{Pl}_{\mu}(a_{1}\boldsymbol{1},\ldots,a_{n}\boldsymbol{n})\) commute for arbitrary \(\lambda\) and \(\mu\), and their product can be expressed as_
\[S_{\lambda}^{Pl}(a_{1}\boldsymbol{1},\ldots,a_{n}\boldsymbol{n})S_{\mu}^{Pl} (a_{1}\boldsymbol{1},\ldots,a_{n}\boldsymbol{n})=\sum_{\nu}c_{\lambda,\mu}^{ \nu}S_{\nu}^{Pl}(a_{1}\boldsymbol{1},\ldots,a_{n}\boldsymbol{n}), \tag{3.3}\]
_where \(c_{\lambda,\mu}^{\nu}\) is the Littlewood-Richardson coefficient, i.e the coefficient of \(S_{\nu}\) in the expansion of \(S_{\lambda}S_{\mu}\) in the basis of Schur functions._
_Remark 3.10_.: A homomorphism from the plactic algebra to \(\Lambda_{n}\) defined by sending every generator \(\mathsf{k}\) to \(1\) sends (3.3) to
\[S_{\lambda}(a_{1},\ldots,a_{n})S_{\mu}(a_{1},\ldots,a_{n})=\sum_{\nu}c_{\lambda,\mu}^{\nu}S_{\nu}(a_{1},\ldots,a_{n}),\]
which is the defining relation for the Littlewood-Richardson coefficients (for \(\ell(\lambda),\ell(\mu),\ell(\nu)\leq n\)).
Figure 12. An example of Schensted’s column insertion in terms of semistandard tableaux and particle arrays for \(n=5\). Only steps that change the tableau are shown.
We will now simplify notation by identifying \(\mathbf{\lambda}\) with \(w(\mathbf{\lambda})\).
### Plactic algebra action continued
We are now ready to continue subsection 1.3 and prove proposition 1.4.
Proof of proposition 1.4.: For a tableau \(\mathbf{\lambda}\) denote by \(\mathcal{S}(\mathbf{\lambda})\) the set of entries of the first column of \(\mathbf{\lambda}\). Extend this function linearly to the whole plactic algebra. It follows from the description of Schensted's insertion that \(\mathcal{S}(\tilde{\mathbf{\lambda}}\mathbf{\lambda})\) depends on \(\tilde{\mathbf{\lambda}}\) and \(\mathcal{S}(\mathbf{\lambda})\), but not on the whole \(\mathbf{\lambda}\). Identify \(S\subseteq\{1,2,\ldots,n\}\) with a one-column tableau with \(S\) as the set of entries. Then it follows from the description of Schensted's insertion that \(H_{r}\cdot S=\mathcal{S}(S^{Pl}_{(r)}\cdot S)\). Here \(S^{Pl}_{(r)}\) is the corresponding one-row plactic Schur polynomial. Since the \(S^{Pl}_{(r)}\)'s commute with each other, we can apply the Jacobi-Trudi formula to get
\[\det\left[H_{\lambda_{i}-i+j}\right]^{\ell}_{i,j=1}\cdot S=S^{Pl}_{\lambda} \cdot S=\sum_{\mathbf{\lambda}}a^{|\lambda^{(1)}|}_{1}a^{|\lambda^{(2)}|-|\lambda^ {(1)}|}_{2}\cdots a^{|\lambda^{(n)}|-|\lambda^{(n-1)}|}_{n}\mathcal{S}(\mathbf{ \lambda}S) \tag{3.4}\]
Note that here \(\cdot\) denotes plactic algebra action as defined in subsection 1.3, while multiplication in the plactic algebra itself is written without a dot. Equality (3.4) implies positivity in proposition 1.4.
Note that for the special case \(S=\{1,2,\ldots,n\}\) equality (3.4) becomes equality (1.2).
## 4. Searching for a \(t\)-deformation of plactic action
### Towards proving theorem 1.6
Proof of proposition 1.4 together with equality (1.3) suggest the following plan to prove theorem 1.6. For a semistandard tableau \(\mathbf{\lambda}\in\mathbb{GT}^{(n)}\) find a linear operator \(T_{\mathbf{\lambda}}:V^{\otimes n}\to V^{\otimes n}\) such that
1. \(T_{\mathbf{\lambda}}^{t=0}(S)=\mathcal{S}(\mathbf{\lambda}S)\)
2. Matrix elements of \(T_{\mathbf{\lambda}}\) with respect to basis \(\{1,2\}^{\otimes n}\) are positive for \(0\leq t<1\).
3. \[\Theta(P_{\lambda})=\sum_{\mathbf{\lambda}\text{ of shape }\lambda}\psi_{\mathbf{ \lambda}}(t)a^{\mathbf{\lambda}}T_{\mathbf{\lambda}}\] (4.1)
If we were to find such operators, the positivity in theorem 1.6 would follow, just as proposition 1.2 follows from equality (1.3). For a one-row \(\mathbf{\lambda}\) with entries \(i_{1}\leq i_{2}\leq\cdots\leq i_{r}\) we must take the coefficient of \(a_{i_{1}}a_{i_{2}}\cdots a_{i_{r}}\) in \(\frac{1}{1-t}T_{r}\) in order to satisfy condition (4.1). Similarly, for a one-column \(\mathbf{\lambda}\) with entries \(i_{1}<i_{2}<\cdots<i_{r}\) we must take the coefficient of \(a_{i_{1}}a_{i_{2}}\cdots a_{i_{r}}\) in \(\Theta(e_{r})\). However, for \(\mathbf{\lambda}\) with more rows and columns the choice and existence of \(T_{\mathbf{\lambda}}\) are not clear. The idea is to think in line with some of the previous works on deformations of RSK algorithms, e.g. [10], [2], [11]. A feature of these algorithms is that at intermediate stages we need to store additional information. This and experimentation suggest that \(T_{\mathbf{\lambda}}\) should act by \(t\)-inserting columns of \(\mathbf{\lambda}\) in the reverse order and preserving information about previous insertions. This idea together with guess and check leads to the following
### Extended vertex models enter the picture
To prove theorem 1.6 we will derive representation of \(\Theta(P_{\lambda})\) from which the desired positivity is evident. This is accomplished by introducing the following ("extended") 3-colored vertex model. Let \(W\) be a three-dimensional real vector space spanned by elements \(\mathbf{0},\mathbf{1},\mathbf{2}\). Let \(i:V\hookrightarrow W\) be the natural inclusion and \(\pi:W\to V\) be a projection defined via \(\pi(\mathbf{0})=\mathbf{1}\). These maps, respectively, induce inclusion \(i^{\otimes n}:V^{\otimes n}\hookrightarrow W^{\otimes n}\) and projection \(\pi^{\otimes n}:W^{\otimes n}\to V^{\otimes n}\). \(W^{\otimes n}=\bigoplus_{k=0}^{n}W_{n,k}\), where each subspace \(W_{n,k}\) is defined as a span of vectors \(e_{1}\otimes e_{2}\otimes\cdots\otimes e_{n}\) with each \(e_{i}\in\{\mathbf{0},\mathbf{1},\mathbf{2}\}\) and exactly \(k\)\(\mathbf{0}\)'s
among the \(e_{i}\)'s. Let \(U_{2}\) be an (infinite-dimensional) real vector space of finite formal linear combinations of pairs \(\big{\{}\{x,y\}\in\mathbb{Z}_{\geq 0}^{2}\big{\}}\). Given two parameters \(a,t\) we define an operator \(R_{ext}=R_{ext}(a,t):W\otimes U_{2}\to W\otimes U_{2}\) by
\[R_{ext}(\mathbb{0}\otimes\{x,y\}) =a\cdot\mathbb{0}\otimes\{x,y\}+(t^{y}-t^{x+y})\cdot 1\otimes \{x-1,y\}+(1-t^{y})\cdot 2\otimes\{x,y-1\},\] \[R_{ext}(1\otimes\{x,y\}) =a\cdot\mathbb{0}\otimes\{x+1,y\}+t^{y}\cdot 1\otimes\{x,y\}+(1 -t^{y})\cdot 2\otimes\{x+1,y-1\},\] \[R_{ext}(2\otimes\{x,y\}) =a\cdot\mathbb{0}\otimes\{x,y+1\}+2\otimes\{x,y\}. \tag{4.2}\]
\(R_{ext}\) gives rise to a vertex model with weights as specified on Fig. 13.
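For concreteness, the weights (4.2) can also be tabulated programmatically. The following Python snippet (an illustration of ours, not part of the paper) lists, for a given incoming horizontal state and pair \((x,y)\), the outgoing configurations together with their weights; note that moves which would make a coordinate negative automatically receive weight zero, since \(t^{y}-t^{x+y}=0\) when \(x=0\) and \(1-t^{y}=0\) when \(y=0\).

```
def r_ext(state, x, y, a, t):
    """Vertex weights (4.2): 'state' is the incoming horizontal state (0, 1 or 2)
    and (x, y) the incoming pair; returns triples (new_state, (new_x, new_y), weight)."""
    if state == 0:
        return [(0, (x, y), a),
                (1, (x - 1, y), t**y - t**(x + y)),
                (2, (x, y - 1), 1 - t**y)]
    if state == 1:
        return [(0, (x + 1, y), a),
                (1, (x, y), t**y),
                (2, (x + 1, y - 1), 1 - t**y)]
    # state == 2
    return [(0, (x, y + 1), a),
            (2, (x, y), 1)]
```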
Define an operator \(H:W^{\otimes n}\to W^{\otimes n}\) as the inhomogeneous transfer operator of \(R_{ext}\) as specified on Fig. 14. Note that there is fixed input \(\{0,0\}\) on the left, while boundary condition on the right is free. \(H\) depends on parameters \(t,a_{1},a_{2},\ldots,a_{n}\). If a pair \(\{x,y\}\in\mathbb{Z}_{\geq 0}^{2}\) appears on the right boundary of the non-zero weight configuration with bottom row \(e_{1}\otimes e_{2}\otimes\cdots\otimes e_{n}\) and top row \(e_{1}^{\prime}\otimes e_{2}^{\prime}\otimes\cdots\otimes e_{n}^{\prime}\), then it is clear from the model that
\[x=\text{number of $1$'s among the $e_{i}$'s - number of $1$'s among the $e_{i}^{\prime}$'s;}\] \[y=\text{number of $2$'s among the $e_{i}$'s - number of $2$'s among the $e_{i}^{\prime}$'s.}\]
As a corollary,
number of \(\mathbb{0}\)'s among the \(e_{i}^{\prime}\)'s - number of \(\mathbb{0}\)'s among the \(e_{i}\)'s = \(x+y\geq 0\).
Figure 14. \(H\) is a transfer operator for the inhomogeneous vertex model defined via \(R_{ext}\). Boundary condition is fixed to be \(\{0,0\}\) on the left and is free on the right.
Figure 13. Weights of the vertex model defined via \(R_{ext}(a,t)\). All configurations not appearing on this picture are assumed to have weight \(0\).
For \(0\leq k_{1},k_{2}\leq n\) define an operator \(H_{k_{1},k_{2}}:W_{n,k_{1}}\to W_{n,k_{2}}\) via restriction of \(H\). Then \(H_{k_{1},k_{2}}=0\) unless \(k_{1}\leq k_{2}\). Theorem 1.6 would then follow from the following
**Theorem 4.1**.: _Let \(\lambda\) be a partition with \(\lambda_{1}=m\). Then_
\[\Theta(P_{\lambda})=\pi^{\otimes n}\circ H_{\lambda^{\prime}_{2},\lambda^{ \prime}_{1}}\circ H_{\lambda^{\prime}_{3},\lambda^{\prime}_{2}}\circ\cdots \circ H_{\lambda^{\prime}_{m},\lambda^{\prime}_{m-1}}\circ H_{0,\lambda^{ \prime}_{m}}\circ i^{\otimes n} \tag{4.3}\]
_Remark 4.2_.: Note that although \(\Theta(P_{\lambda})\) is itself an operator \(V^{\otimes n}\to V^{\otimes n}\), the right hand side of representation (4.3) utilizes a bigger space \(W^{\otimes n}\) in its intermediate steps.
## 5. Proof of the main result
Proof.: Hall-Littlewood polynomials (as well as more general Macdonald polynomials) satisfy the Pieri formulas (see [10], pp.340-341):
\[P_{\lambda}g_{r}=\sum_{\lambda\prec_{\mathsf{h}}\mu,\ |\mu|-|\lambda|=r}\phi_{\mu/\lambda}(0,t)P_{\mu},\qquad P_{\lambda}e_{r}=\sum_{\mu/\lambda\ \text{a vertical strip},\ |\mu|-|\lambda|=r}\psi^{\prime}_{\mu/\lambda}(0,t)P_{\mu}. \tag{5.1}\]
We will adopt the conventions that \(\lambda^{\prime}_{m}=0\) for \(m>\lambda_{1}\), \(\mu^{\prime}_{m}=0\) for \(m>\mu_{1}\), and \(\lambda^{\prime}_{0}=\mu^{\prime}_{0}\). The multiplicities \(\phi_{\mu/\lambda}=\phi_{\mu/\lambda}(0,t)\) and \(\psi^{\prime}_{\mu/\lambda}=\psi^{\prime}_{\mu/\lambda}(0,t)\) in (5.1) can be expressed as
\[\phi_{\mu/\lambda}=\prod_{1\leq i\leq\mu_{1}:\ \mu^{\prime}_{i}=\lambda^{\prime}_{i}+1,\ \mu^{\prime}_{i+1}=\lambda^{\prime}_{i+1}}\left(1-t^{\mu^{\prime}_{i}-\lambda^{\prime}_{i+1}}\right)=\prod_{1\leq i\leq\mu_{1}:\ \mu^{\prime}_{i}=\lambda^{\prime}_{i}+1}\left(\mathbf{1}_{\mu^{\prime}_{i-1}=\lambda^{\prime}_{i-1}}\cdot\left(1-t^{\mu^{\prime}_{i}-\lambda^{\prime}_{i+1}}\right)+\mathbf{1}_{\mu^{\prime}_{i-1}=\lambda^{\prime}_{i-1}+1}\cdot\frac{1-t^{\mu^{\prime}_{i}-\lambda^{\prime}_{i+1}}}{1-t^{\mu^{\prime}_{i-1}-\lambda^{\prime}_{i}}}\right)\\ =(1-t\cdot\mathbf{1}_{\mu_{1}>\lambda_{1}})\cdot\prod_{1\leq i\leq\lambda_{1}:\ \mu^{\prime}_{i}=\lambda^{\prime}_{i}+1}\left(\mathbf{1}_{\mu^{\prime}_{i-1}=\lambda^{\prime}_{i-1}}\cdot\left(1-t^{\mu^{\prime}_{i}-\lambda^{\prime}_{i+1}}\right)+\mathbf{1}_{\mu^{\prime}_{i-1}=\lambda^{\prime}_{i-1}+1}\cdot\frac{1-t^{\mu^{\prime}_{i}-\lambda^{\prime}_{i+1}}}{1-t^{\mu^{\prime}_{i-1}-\lambda^{\prime}_{i}}}\right) \tag{5.2}\]
\[\psi^{\prime}_{\mu/\lambda}=\prod_{i=1}^{\lambda_{1}}\frac{(t;t)_{\mu^{\prime}_{i}-\mu^{\prime}_{i+1}}}{(t;t)_{\mu^{\prime}_{i}-\lambda^{\prime}_{i}}(t;t)_{\lambda^{\prime}_{i}-\mu^{\prime}_{i+1}}}=\prod_{i=1}^{\mu_{1}}\left(\frac{(t;t)_{\mu^{\prime}_{i}-\lambda^{\prime}_{i+1}}}{(t;t)_{\mu^{\prime}_{i}-\lambda^{\prime}_{i}}(t;t)_{\lambda^{\prime}_{i}-\lambda^{\prime}_{i+1}}}\cdot\frac{(t;t)_{\mu^{\prime}_{i-1}-\mu^{\prime}_{i}}(t;t)_{\lambda^{\prime}_{i-1}-\lambda^{\prime}_{i}}}{(t;t)_{\mu^{\prime}_{i-1}-\lambda^{\prime}_{i}}(t;t)_{\lambda^{\prime}_{i-1}-\mu^{\prime}_{i}}}\right) \tag{5.3}\]
Denote by \(\Pi_{\lambda}\) the right hand side of (4.3). To prove theorem 4.1 we need to show that \(\Theta(P_{\lambda})=\Pi_{\lambda}\) for any partition \(\lambda\). It is enough to prove that for any partition \(\lambda\) we have
\[T(\alpha)\Pi_{\lambda}=\left(\prod_{i=1}^{n}\frac{1-\alpha a_{i}}{1-t\alpha a_{i}}\right)\sum_{\lambda\prec_{\mathsf{h}}\mu}\alpha^{|\mu|-|\lambda|}\phi_{\mu/\lambda}\Pi_{\mu}. \tag{5.4}\]
Indeed, suppose we know that relation (5.4) holds. Multiply both sides of this relation by \(\prod_{i=1}^{n}\frac{1-t\alpha a_{i}}{1-\alpha a_{i}}\), then take the coefficient of \(\alpha^{r}\) to get
\[T_{r}\Pi_{\lambda}=\sum_{\lambda\prec_{\mathsf{h}}\mu,\ |\mu|-|\lambda|=r}\phi_{\mu/\lambda}\Pi_{\mu}\qquad\text{for any }r\in\mathbb{Z}_{\geq 0}. \tag{5.5}\]
On the other hand, it follows from the definition of \(\Theta(P_{\lambda})\) that
\[T_{r}\Theta(P_{\lambda})=\sum_{\lambda\prec_{\mathsf{h}}\mu,\ |\mu|-|\lambda|=r}\phi_{\mu/\lambda}\Theta(P_{\mu})\qquad\text{for any }r\in\mathbb{Z}_{\geq 0}. \tag{5.6}\]
We can now show that \(\Theta(P_{\lambda})=\Pi_{\lambda}\) by induction on \((\lambda^{\prime}_{1},\lambda_{-1})\). The base case \((0,0)\) is clear: \(H_{k,k}(v)=v\) for any \(v\in\langle 1,2\rangle^{\otimes n}\), hence \(\Pi_{\varnothing}=Id=\Theta(P_{\varnothing})\). Suppose the equality \(\Theta(P_{\lambda})=\Pi_{\lambda}\) has been established for all \(\lambda\) with either \(\lambda^{\prime}_{1}<\nu^{\prime}_{1}\) or both \(\lambda^{\prime}_{1}=\nu^{\prime}_{1}\) and \(\lambda_{-1}<\nu_{-1}\). We would like to also establish it for \(\nu\). Let \(\chi\) be the partition obtained from \(\nu\) by deleting its last nonzero row. Then by (5.5) and the inductive assumption we get
\[\Theta(P_{\nu})=\frac{1}{\phi_{\nu/\chi}}\left(T_{\nu_{-1}}\Theta(P_{\chi})-\sum_{\chi\prec_{\mathsf{h}}\lambda,\ |\lambda|-|\chi|=\nu_{-1},\ \lambda\neq\nu}\phi_{\lambda/\chi}\Theta(P_{\lambda})\right)\\ =\frac{1}{\phi_{\nu/\chi}}\left(T_{\nu_{-1}}\Pi_{\chi}-\sum_{\chi\prec_{\mathsf{h}}\lambda,\ |\lambda|-|\chi|=\nu_{-1},\ \lambda\neq\nu}\phi_{\lambda/\chi}\Pi_{\lambda}\right)=\Pi_{\nu}.\]
So it remains to establish (5.4). Consider a stochastic vertex model \(R_{3}(a,t)\) with weights as depicted on Fig. 15.
Denote by \(\mathcal{A}_{k}\) the inhomogeneous transfer operator \(W_{n,k}\to W_{n,k}\) of \(R_{3}\) with both left and right boundary conditions fixed to be \(\mathfrak{0}\) (as specified on top of Fig. 16). Denote by \(\mathcal{B}_{k}\) the inhomogeneous transfer operator \(W_{n,k}\to W_{n,k+1}\) of \(R_{3}\) with the left boundary condition fixed to be \(\mathfrak{0}\) and the right boundary condition being \(\mathfrak{1}\) or \(\mathfrak{2}\) (as specified on bottom Fig. 16).
**Lemma 5.1**.: _For \(0\leq k_{1}\leq k_{2}\leq n\) we have_
\[\mathcal{A}_{k_{2}}H_{k_{1},k_{2}}=H_{k_{1},k_{2}}\mathcal{A}_{k_{1}}+H_{k_{1}+1,k_{2}}\mathcal{B}_{k_{1}}; \tag{5.7}\] \[\mathcal{B}_{k_{2}}H_{k_{1},k_{2}}=\alpha\left(1-t^{k_{2}-k_{1}+1}\right)H_{k_{1},k_{2}+1}\mathcal{A}_{k_{1}}+\alpha H_{k_{1}+1,k_{2}+1}\mathcal{B}_{k_{1}}. \tag{5.8}\]
Proof of lemma 5.1.: To prove relations (5.7) and (5.8) we will utilize a Yang-Baxter type equation relating \(R_{3}\), \(R_{ext}\) and another (auxiliary) vertex model \(R_{aux}(t):U_{2}\otimes\langle\mathfrak{0},1,2\rangle\to U_{2}\otimes \langle\mathfrak{0},1,2\rangle\) with weights as specified on Fig. 17. The Yang-Baxter type equation we need is an equality of operators \(\langle\mathfrak{0},1,2\rangle\otimes U_{2}\otimes\langle\mathfrak{0},1,2\rangle \rightarrow\langle\mathfrak{0},1,2\rangle\otimes U_{2}\otimes\langle\mathfrak{0 },1,2\rangle\) as specified on Fig. 18.
Note that \(R_{aux}((0,0)\otimes\mathfrak{0})=(0,0)\otimes\mathfrak{0}\) and for any \((x,y)\in\mathbb{Z}_{\geq 0}^{2}\) we have
\[R_{aux}((x,y),\mathfrak{0},(x,y),\mathfrak{0})+R_{aux}((x,y), \mathfrak{0},(x-1,y),\mathfrak{1})+R_{aux}((x,y),\mathfrak{0},(x,y-1),2)=1,\] \[R_{aux}((x-1,y),\mathfrak{1},(x,y),\mathfrak{0})+R_{aux}((x-1,y ),\mathfrak{1},(x-1,y),\mathfrak{1})+R_{aux}((x-1,y),\mathfrak{1},(x,y-1),2)=0,\] \[R_{aux}((x,y-1),2,(x,y),\mathfrak{0})+R_{aux}((x,y-1),2,(x-1,y),\mathfrak{1})+R_{aux}((x,y-1),2,(x,y-1),2)=0. \tag{5.9}\]
Figure 15. Stochastic vertex model with weights given by \(R_{3}(a,t)\). Fat lines correspond to \(\mathfrak{2}\)’s, normal lines correspond to \(\mathfrak{1}\)’s, dotted lines correspond to \(\mathfrak{0}\)’s.
Multiply both sides of (5.7) by \(\alpha^{k_{2}}\). Then this equality becomes a corollary of a repeated application of the Yang-Baxter type equation from Fig. 18 together with (5.9). More precisely, see the chain of equalities (5.10)
Figure 16. Top: Transfer operator \(\mathcal{A}_{k}\). Bottom: Transfer operator \(\mathcal{B}_{k}\).
Figure 17. Weights of the vertex model defined via \(R_{aux}(t)\). All configurations not appearing on this picture are assumed to have weight \(0\).
(5.10): chain of diagrammatic equalities (vertex-model pictures) obtained by repeated application of the Yang-Baxter type equation of Fig. 18 together with (5.9).
Note that for any \((x,y)\in\mathbb{Z}_{\geq 0}^{2}\) we have
\[\begin{split} R_{aux}((x,y),\mathsf{0},(x,y),\mathsf{0})\cdot(1-t^ {x+y})+R_{aux}((x,y),\mathsf{0},(x-1,y),\mathsf{1})+R_{aux}((x,y),\mathsf{0},( x,y-1),\mathsf{2})=0,\\ R_{aux}((x-1,y),\mathsf{1},(x,y),\mathsf{0})\cdot(1-t^{x+y})+R_{ aux}((x-1,y),\mathsf{1},(x-1,y),\mathsf{1})+\\ +R_{aux}((x-1,y),\mathsf{1},(x,y-1),\mathsf{2})=1,\\ R_{aux}((x,y-1),\mathsf{2},(x,y),\mathsf{0})\cdot(1-t^{x+y})+R_{ aux}((x,y-1),\mathsf{2},(x-1,y),\mathsf{1})+\\ +R_{aux}((x,y-1),\mathsf{2},(x,y-1),\mathsf{2})=1.\end{split} \tag{5.11}\]
Multiply both sides of (5.8) by \(\alpha^{k_{2}}\). Then this equality becomes a corollary of a repeated application of the Yang-Baxter type equation from Fig. 18 together with (5.11). More precisely, see the chain of equalities (5.12)
It remains to show that (5.4) can be proved for any given \(\lambda\) using Lemma 5.1. Consider the operator \(\left(\mathcal{A}_{\lambda^{\prime}_{1}}+\mathcal{B}_{\lambda^{\prime}_{1}} \right)\circ H_{\lambda^{\prime}_{2},\lambda^{\prime}_{1}}\circ\cdots\circ H_{ \lambda^{\prime}_{-1},\lambda^{\prime}_{-2}}\circ H_{0,\lambda^{\prime}_{-1}}\) and repeatedly use the commutation relations (5.7) and (5.8) to move \(\mathcal{A}\)'s and \(\mathcal{B}\)'s through all the \(H\)'s. Note that (5.7) for \(k_{1}=k_{2}=k\) becomes \(\mathcal{A}_{k}H_{k,k}=H_{k,k}\mathcal{A}_{k}\). Then the resulting sum is
\[\sum_{\mu:\ \lambda\sim_{\mathfrak{h}}\mu,\ \mu_{1}=\lambda_{1}} \left(\left[\alpha^{|\mu|-|\lambda|}\prod_{1\leq i\leq\lambda_{1} :\ \mu^{\prime}_{i}=\lambda^{\prime}_{i}+1,\ \mu^{\prime}_{i+1}=\lambda^{\prime}_{i+1}}\left(1-t^{\mu^{\prime}_{i}- \lambda^{\prime}_{i+1}}\right)\right]H_{\mu^{\prime}_{2},\mu^{\prime}_{1}} \circ\cdots\circ H_{\mu^{\prime}_{-1},\mu^{\prime}_{-2}}\circ H_{0,\mu^{ \prime}_{-1}}\circ\mathcal{A}_{0}\right.\] \[\left.+\left[\alpha^{|\mu|-|\lambda|}\prod_{1\leq i\leq\lambda_{1} -1:\ \mu^{\prime}_{i}=\lambda^{\prime}_{i}+1,\ \mu^{\prime}_{i+1}=\lambda^{\prime}_{i+1}}\left(1-t^{\mu^{\prime}_{i}- \lambda^{\prime}_{i+1}}\right)\right]H_{\mu^{\prime}_{2},\mu^{\prime}_{1}} \circ\cdots\circ H_{\mu^{\prime}_{-1},\mu^{\prime}_{-2}}\circ H_{1,\mu^{ \prime}_{-1}}\circ\mathcal{B}_{0}\right). \tag{5.13}\]
Note that \(\mathcal{A}_{0}=\left(\prod_{i=1}^{n}\frac{1-\alpha a_{i}}{1-t \alpha a_{i}}\right)Id\). Substitute \(k_{1}=k_{2}=0\) in (5.8) to get
\[\mathcal{B}_{0}H_{0,0}=\alpha(1-t)H_{0,1}\mathcal{A}_{0}+\alpha H_{1,1} \mathcal{B}_{0}.\]
\(H_{0,0}=Id\), so
\[\mathcal{B}_{0}=\alpha(1-t)\left(\prod_{i=1}^{n}\frac{1-\alpha a_{i}}{1-t\alpha a _{i}}\right)\left(Id-\alpha H_{1,1}\right)^{-1}H_{0,1}=\left(1-t\right)\left( \prod_{i=1}^{n}\frac{1-\alpha a_{i}}{1-t\alpha a_{i}}\right)\sum_{k=1}^{ \infty}\alpha^{k}H_{1,1}^{k-1}H_{0,1}.\]
Substituting this equality in (5.13) gives us
\[\left(\mathcal{A}_{\lambda_{1}^{\prime}}+\mathcal{B}_{\lambda_{1}^{\prime}}\right)H_{\lambda_{2}^{\prime},\lambda_{1}^{\prime}}\circ\cdots\circ H_{\lambda_{-1}^{\prime},\lambda_{-2}^{\prime}}\circ H_{0,\lambda_{-1}^{\prime}}\\ =\left(\prod_{i=1}^{n}\frac{1-\alpha a_{i}}{1-t\alpha a_{i}}\right)\sum_{\lambda\prec_{\mathsf{h}}\mu}\alpha^{|\mu|-|\lambda|}\phi_{\mu/\lambda}H_{\mu_{2}^{\prime},\mu_{1}^{\prime}}\circ\cdots\circ H_{\mu_{-1}^{\prime},\mu_{-2}^{\prime}}\circ H_{0,\mu_{-1}^{\prime}}. \tag{5.14}\]
Finally, note that \(T(\alpha)\pi^{\otimes n}=\pi^{\otimes n}\left(\mathcal{A}_{\lambda_{1}^{ \prime}}+\mathcal{B}_{\lambda_{1}^{\prime}}\right)\) as operators \(W_{n,\lambda_{1}^{\prime}}\to W_{n,0}\). Together with (5.14) this implies (5.4).
|
2303.07434 | Discovering Multiple Algorithm Configurations | Many practitioners in robotics regularly depend on classic, hand-designed
algorithms. Often the performance of these algorithms is tuned across a dataset
of annotated examples which represent typical deployment conditions. Automatic
tuning of these settings is traditionally known as algorithm configuration. In
this work, we extend algorithm configuration to automatically discover multiple
modes in the tuning dataset. Unlike prior work, these configuration modes
represent multiple dataset instances and are detected automatically during the
course of optimization. We propose three methods for mode discovery: a post hoc
method, a multi-stage method, and an online algorithm using a multi-armed
bandit. Our results characterize these methods on synthetic test functions and
in multiple robotics application domains: stereoscopic depth estimation,
differentiable rendering, motion planning, and visual odometry. We show the
clear benefits of detecting multiple modes in algorithm configuration space. | Leonid Keselman, Martial Hebert | 2023-03-13T19:21:59Z | http://arxiv.org/abs/2303.07434v1 | # Discovering Multiple Algorithm Configurations
###### Abstract
Many practitioners in robotics regularly depend on classic, hand-designed algorithms. Often the performance of these algorithms is tuned across a dataset of annotated examples which represent typical deployment conditions. Automatic tuning of these settings is traditionally known as algorithm configuration. In this work, we extend algorithm configuration to automatically discover multiple modes in the tuning dataset. Unlike prior work, these configuration modes represent multiple dataset instances and are detected automatically during the course of optimization. We propose three methods for mode discovery: a post hoc method, a multi-stage method, and an online algorithm using a multi-armed bandit. Our results characterize these methods on synthetic test functions and in multiple robotics application domains: stereoscopic depth estimation, differentiable rendering, motion planning, and visual odometry. We show the clear benefits of detecting multiple modes in algorithm configuration space.
## I Introduction
Autonomous integrated systems often depend on a multitude of algorithms interacting with each other and their external environment. Despite the recent popularity of deep, end-to-end trained models [1], robotic systems often depend on hand-designed algorithms in several parts of the processing stack. They include motion planning [2], algorithms involved in sensing [3] and simultaneous localization and mapping [4]. As systems become more sophisticated, they often accumulate more methods and with them, more parameters that need to be set and configured by the system designers.
Often the developers of these methods can discover viable configurations by hand but leave many settings open to configuration by eventual users. Intuitively, these can include settings that control smoothing, performance, and run-time. Ideal settings in noise-free environments can vary dramatically from those required in noisy settings. Likewise, different configurations may exist for optimal online and offline performance. Tuning such settings to work well in a deployment environment remains a challenge for many autonomous systems. Without proper tuning, components that are expected to be reliable may unexpectedly fail.
While it is possible to tune settings by hand, it is also possible to use automated methods to find potential configurations. In classic computer science literature, this was done to optimize runtime performance [5] and was known as the algorithm selection (or configuration) problem. Researchers have shown how automated tuning can quickly and robustly improve the performance of even common tools such as compilers [6]. In an era with many benchmarks [7, 8, 9] and challenge competitions [10], algorithm tuning is performed on a validation set made to approximate the performance of the final test set, and ensure the best possible performance for proposed techniques. To demonstrate consistency and generalizability, researchers often report performance under a single configuration.
Considering multiple configurations can greatly expand the applicability of a particular method [11]. In the case of multi-objective optimization, a Pareto set exists, where progress on any one objective regresses performance on another objective. As such, providing multiple configurations is often done by hardware vendors [3], and selecting between multiple configurations in robotics is also well studied [12, 13, 14], along with selecting from ensembles of solvers [15] and motion-planners [16].
In this work, we propose to discover multiple viable configurations while an algorithm configuration is being automatically tuned over a dataset by some black-box optimizer [17, 18]. By noticing correlations between data in response to new configurations, we detect multiple algorithm modes. Working in algorithm configuration space enables generalization across several problem domains, as they do not depend on domain specific features. This allows our method, with fixed settings, to show benefits in multiple application areas (see Section IV). We show how multiple modes can be found online (Section III-E) and how they can be used to guard against outliers (Section IV-B).
Fig. 1: **Partitioned Configurations.** Instead of finding a single algorithm configuration for an entire dataset, we partition the dataset during the process of optimization and find a different configuration for each partition.
## II Related Work
There is a long history of work in algorithm configuration and related areas. Originally studied for performance optimization [5], algorithm configuration has been well studied in the SAT solver community [19]. These extensions include large evaluations [20], taxonomies of methods [21] and methods which adapt the algorithm configuration over a series of time-steps [22, 23, 24], assuming the algorithms used have different temporal properties.
Since robotics applications and datasets are reasonably expensive operations (compared to purely synthetic tasks), our work is related to work on extremely few function evaluations, typically on the order of 100 [25]. Typically this is studied in the Machine Learning community as hyperparameter search, finding configurations for ideal neural network configuration and training [26, 18].
Our work is most closely related to portfolio-based algorithm configuration literature [27, 28, 29]. These methods often design a different configuration for each instance of the problem in their dataset (e.g. each data example) [30, 31, 32]. Similar to our work, they cluster methods into groups. However, their clusters are derived from domain-specific feature spaces, while we use the response of the instances to new configurations. Our use of domain-specific features is limited to test time deployment, using supervised training to target our obtained algorithm configuration clusters.
In the Machine Learning community, the problem of coreset discovery [33] is related to our approach. Coreset discovery finds representative examples from a dataset to focus training time on a subset of the data. Most related, methods exist for online discovery of such sets [34]. Of note, our contribution is orthogonal to coreset research, as our method could benefit from coreset methods, which would serve to give us a smaller subset of data to evaluate at each iteration. Specifically, coreset methods for clustering [35] could enable more function evaluation steps in a fixed budget of time and give better resulting minima.
In computer vision, some recent work has explored estimating algorithm configurations for classic algorithms on a local level, operating on patches within an image [36].
## III Method
Our approach consists of evaluating algorithm configurations from a black box optimizer (Section III-B) across a dataset of examples for the given algorithm (Algorithm 1). Building upon this baseline, we propose three methods of partitioning the data during optimization: Post hoc (Algorithm 2), Staged (Algorithm 3), and Online (Algorithm 4).
For our experiments, we always use two partitions, even when there are more known modes (Section IV-A). This avoids exploring the area of instance-specific algorithm configuration and minimizes our risk of overfitting to the dataset and overstating our performance. We typically report results between the initialization (the known defaults for an algorithm) and the oracle. Our oracle is defined as awarding the best known configuration for each individual datum across all optimization runs.
### _Partitioning_
The optimal partition for a given number of partitions \(K\), with \(M\) algorithm configurations over \(N\) datapoints can be formulated via 0-1 Integer Linear Programming where \(c_{i,j}\) corresponds to the quality of datum \(i\) with configuration \(j\).
\[\min\sum_{i=1}^{N}\sum_{j=1}^{M}c_{i,j}\,x_{i,j}\] subject to \[x_{i,j} \in\{0,1\}\] \[\sum_{j=1}^{M}x_{i,j} =1\quad\text{for }i=1,\ldots,N\] \[\sum_{j=1}^{M}\mathbb{1}\bigg[\Big(\sum_{i=1}^{N}x_{i,j}\Big)>0\bigg] \leq K\]
One can also exhaustively evaluate all \(\binom{M}{K}\) partitions. Our experiments do exhaustive evaluation for \(K=2\) (as with all results in this paper) and use the optimization formulation with larger numbers of partitions. We solve the optimization problem with a recent solver [37] and implement the indicator variables using the BigM modeling trick.
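For the \(K=2\) setting used throughout the paper, the exhaustive search over configuration pairs is only a few lines. The following Python sketch is illustrative (the variable names are ours, not the released code); `cost[i, j]` is the cost of datum \(j\) under configuration \(i\):

```
import itertools
import numpy as np

def best_pair(cost):
    """Exhaustive K=2 partition search over an (M configurations x N data) cost matrix."""
    best_score, best_idx = np.inf, None
    for a, b in itertools.combinations(range(cost.shape[0]), 2):
        # each datum is served by whichever of the two configurations is cheaper
        score = np.minimum(cost[a], cost[b]).sum()
        if score < best_score:
            best_score, best_idx = score, (a, b)
    a, b = best_idx
    assignment = np.where(cost[a] <= cost[b], a, b)   # per-datum configuration choice
    return best_score, (a, b), assignment
```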
As an alternative, we evaluate using a clustering method such as k-Means [38] on a normalized matrix \(\tilde{X}\), where each row has zero mean and unit variance. Cluster centers are in the space of the history of evaluated configurations and each row is a datum's response to the history of evaluated configurations. Clustering approaches treat the algorithm configuration history as a feature and group results which behave similarly, but may not be optimally partitioned.
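A sketch of this clustering alternative with scikit-learn (again illustrative; the variable names are ours):

```
import numpy as np
from sklearn.cluster import KMeans

def cluster_data(cost, k=2):
    """Group data by their response to the evaluated configurations.
    cost[i, j] is the cost of datum j under configuration i; each row of the
    normalized matrix is one datum's response history (zero mean, unit variance)."""
    responses = cost.T                                  # one row per datum
    mu = responses.mean(axis=1, keepdims=True)
    sd = responses.std(axis=1, keepdims=True) + 1e-12   # guard against flat rows
    x_tilde = (responses - mu) / sd
    return KMeans(n_clusters=k, n_init=10).fit_predict(x_tilde)
```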
### _Black Box Optimizer_
Our method is generic to the choice of black box optimizer, also known as gradient-free optimization. For example, one could use random search, which is known to be a strong baseline in higher dimensional optimization [39]. On the other extreme, if one has expensive function evaluations, one could fit a surrogate function to the data and optimize its expected minima, as is done in Bayesian Optimization [40]. Effective black box optimizers in practice often combine a plethora of optimizers [6, 41].
Fig. 2: **Partitioning vs K-Means Clustering on the stereoscopic depth experiments described in Section IV-B.**
We use CMA-ES, an evolutionary method with a multivariate Gaussian model [17]. CMA-ES is known for algorithm configuration search in robotics [42] and is widely used in other algorithm configuration comparisons [22]. CMA-ES is convenient in only requiring an initial configuration and a \(\sigma\) in parameter space. When tuning existing algorithms, reasonable prior configurations are often available. Approaches which focus on modeling a bounded volume [40, 41] can be wasteful. All of our parameter search is for non-negative parameters, so we transform our search space with \(\log(x)\) to perform unconstrained optimization, making our search operate on order-of-magnitude scale for all parameters.
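A minimal ask-tell loop of this kind, using the pycma package and searching in \(\log\)-parameter space, looks roughly as follows (a sketch under our own naming assumptions, not the exact tuning harness):

```
import numpy as np
import cma  # pycma

def tune(mean_cost, x0, sigma0=1.0, iterations=20):
    """Single-mode baseline: CMA-ES run in log-parameter space.
    mean_cost(params) returns the average cost over the dataset for a
    configuration of non-negative parameters; x0 is the default configuration."""
    es = cma.CMAEvolutionStrategy(np.log(np.asarray(x0, dtype=float)), sigma0)
    for _ in range(iterations):
        candidates = es.ask()                                   # log-space candidates
        es.tell(candidates, [mean_cost(np.exp(c)) for c in candidates])
        if es.stop():
            break
    return np.exp(es.result.xbest)                              # best configuration found
```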
### _Post hoc Partitioning_
The post hoc method is simple and straightforward: perform black box optimization on the dataset as a whole, noting each datum's response to each configuration. Afterwards, partition the data following Section III-A. This approach allows the clearest evaluation against the non-partitioned CMA-ES baseline, which it outperforms in all of our experiments. Algorithm 2 outlines the _Post hoc_ method in detail. As a method with no change in exploration, it can be run on existing single mode optimization or simply using coherently evaluated random configurations [39], as in Section IV-F.
### _Staged Partitioning_
In staged partitioning, we spend half of the function evaluations exploring the space to find adequate minima, and we spend half of the function evaluations exploiting the discovered partitions in isolation. While the scale of a particular problem may suggest a different balance of exploration and exploitation stages, we use a ratio of \(\frac{1}{2}\) for our experiments. Algorithm 3 performs partitioning in the middle of optimization and tunes the results for each partition. This enables more explicit exploitation of the partitions, at the expense of less exploration time to find good partitions.
```
1:\(x_{0}\) Initial Configuration
2:\(f_{1\ldots N}(x)\) dataset queries for algorithm
3:\(M\) Maximum number of function evaluations
4:procedureOptimize(\(x_{0},f_{1\ldots N}(x),M\))
5:for\(i\gets 1\) to \(M\)do
6:\(x_{i}\leftarrow\)Optimizer Candidate()
7:for\(j\gets 1\) to \(N\)do\(\triangleright\) Evaluate all data
8:\(X_{i,j}\gets f_{j}(x_{i})\)
9:endfor
10:\(g_{i}\leftarrow\)Mean(\(X_{i}\))
11:Optimizer Tell(\(g_{i}\))\(\triangleright\) Report average
12:endfor
13:\(Y_{i}\leftarrow\)Mean(\(X_{i,j}\))\(\triangleright\) Per configuration scores
14:\(x\gets x_{\mathrm{argmin}}(Y_{i})\)\(\triangleright\) Best configuration
15:endprocedure
```
**Algorithm 2** Finding modes with post hoc partitioning
```
1:procedureStaged(\(x_{0},f_{1\ldots N},M,K\))
2:Post hoc(\(x_{0},f_{1\ldots N}(x),\frac{M}{2},K\))
3:for\(k\gets 1\) to \(K\)do\(\triangleright\) Separate optimization
4:Optimize(\(x_{0},f_{c_{j}=k}(x),\frac{M}{2}\))
5:endfor
6:endprocedure
```
**Algorithm 3** Finding modes with staged partitioning
```
1:procedureOnline(\(x_{0},f_{1\ldots N},M,K\))
2:\(B_{k}\leftarrow\)Bandit(\(N\))\(\triangleright\)\(K\) arms for each datum
3:\(OPT_{k}\leftarrow\)Optimizer()\(\triangleright\)\(K\) optimizers
4:for\(i\gets 1\) to \(M\)do
5:for\(j\gets 1\) to \(N\)do
6:\(b_{j}\leftarrow\)Bandit Pull(\(B_{j}\))\(\triangleright\) Sample bandit
7:endfor
8:for\(m\gets 1\) to \(K\)do\(\triangleright\) Separate Evaluation
9:\(x_{i,k}\gets OPT_{k}\)Candidate()
10:\(y_{i,k}\leftarrow\)Mean(\(f_{b_{j}=k}(x_{i,k})\))
11:\(OPT_{k}\)Tell(\(y_{i,k}\))
12:endfor
13:endfor
14:\(c_{j}\leftarrow\)Best Arm(\(B_{j}\))
15:\(y_{k}\leftarrow\)Best Config(\(OPT_{k}\))
16:endprocedure
```
**Algorithm 4** Finding modes with online partitioning
### _Online Partitioning_
To balance exploration and exploitation in an online fashion, one could use a multi-armed bandit. Algorithm 4 dynamically assigns data points to partitions during the course of optimization. Since CMA-ES only evaluates relative order, we can readily switch data assigned to each partition during the course of evaluation. For the online method, we setup a multi-armed bandit (MAB) for each datum. Since our function evaluations are unbounded, we use classic Thompson Sampling [43, 44] with a Gaussian distribution. We perform one CMA-ES step to sample the space, and use that to initialize the distributions for each arm of the bandits identically. Each iteration, we sample a partition assignment for each bandit. That datum then evaluates the configuration given by that optimizer and records its result for that arm. The optimizers record the mean cost of the data assigned to them that iteration.
This approach allows us to simultaneously perform multiple optimizations and partition assignments on the fly.
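A sketch of the per-datum Gaussian Thompson sampling behind this procedure is given below; the prior and posterior update are simple illustrative choices (unit observation noise), since they are implementation details:

```
import numpy as np

class GaussianArm:
    """Posterior over one arm's mean cost, assuming unit observation noise."""
    def __init__(self, mu0=0.0, var0=1.0):
        self.mean, self.prec = mu0, 1.0 / var0
    def sample(self, rng):
        return rng.normal(self.mean, 1.0 / np.sqrt(self.prec))
    def update(self, value):
        # conjugate update for a Gaussian mean with known (unit) noise variance
        self.mean = (self.prec * self.mean + value) / (self.prec + 1.0)
        self.prec += 1.0

def sample_assignments(bandits, rng):
    """bandits[j] is a list of K arms for datum j; assign each datum to the arm
    whose sampled mean cost is lowest (costs are minimized)."""
    return [int(np.argmin([arm.sample(rng) for arm in arms])) for arms in bandits]

# After evaluating partition k's candidate on datum j with cost c,
# call bandits[j][k].update(c).
```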
## IV Experimental Results
We evaluate our approach on several application domains. We start with a synthetic function whose structure and modes are known and is quick to evaluate. This enables us to characterize our different methods of finding partitions. We then proceed to show successful benefits to robotics methods like stereoscopic depth generation [3], differentiable rendering [45], motion planning [46], and visual odometry [4].
### _Synthetic Function_
A synthetic function allows us to characterize our methods across arbitrarily many dimensions and modes. Our synthetic function has \(K\) modes, each the sum of \(N\) hard-to-optimize functions, leading to a simulation of \(KN\) data points. We use four hard-to-optimize functions: Ackley, Griewank, Rastrigin, Zakharov (for details of these functions and their visualizations see [47]), and rescale them to have a minimum of value zero, apply a random rotation, and have a maximum value of around one near the minima. This paradigm and these functions can be generated in arbitrarily many dimensions, allowing us to understand how these partitioning methods scale as algorithm hyper-parameters scale from two to forty dimensions.
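A sketch of how one such mode can be constructed (the rotation, shift, and rescaling constants below are illustrative; the exact choices in our experiments may differ):

```
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def make_mode(dim, scale=5.12, seed=0):
    """One synthetic datum: a randomly rotated copy of a hard test function,
    shifted so its minimum sits at a random point and roughly rescaled."""
    rng = np.random.default_rng(seed)
    rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))   # random rotation
    x_star = rng.uniform(-1.0, 1.0, size=dim)                 # location of the minimum
    top = rastrigin(np.full(dim, scale))                      # rough normalizer
    def f(x):
        z = rotation @ (np.asarray(x, dtype=float) - x_star) * scale
        return rastrigin(z) / top
    return f
```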
For the synthetic function optimization, the staged method works best across most dimensions and number of function evaluations. Close behind, especially with fewer evaluations, is the post hoc method. In contrast, our online bandit method is typically only slightly better than the single mode baseline. Of note is that all methods begin to perform better with hundreds of function evaluations, suggesting that the improved performance of the partitioning may come from improved efficiency in low numbers of evaluations, and not the multi-modal nature of the synthetic function.
### _Dense Stereo Matching_
Robotics applications often use stereoscopic depth sensors. Here we optimize the performance of a classic Dense Stereo Matching method, namely Semi-Global Block Matching (SGBM) [48] as implemented by OpenCV [49]. We obtain 47 image pairs by combining the Middlebury 2014 and 2021 Stereo datasets [9]. We split the data into 23 training examples and 24 test examples, shown in Section IV-B. The algorithm settings control the regularization of the SGBM algorithm, the post-processing filters used to clean up the data, and the block size used for initial matching. Results of the four methods on the training set are shown in Fig. 4.
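For reference, a single SGBM evaluation with OpenCV looks roughly like the sketch below; the parameter names and the simple error metric are illustrative placeholders rather than the exact settings and metric used in our tuning:

```
import cv2
import numpy as np

def sgbm_cost(params, left, right, gt_disp, valid_mask):
    """Evaluate one SGBM configuration on a rectified image pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=int(params.get("num_disparities", 128)),   # multiple of 16
        blockSize=int(params.get("block_size", 5)),
        P1=int(params.get("p1", 8 * 3 * 5 ** 2)),                 # smoothness / regularization
        P2=int(params.get("p2", 32 * 3 * 5 ** 2)),
        uniquenessRatio=int(params.get("uniqueness", 10)),        # match filtering
        speckleWindowSize=int(params.get("speckle_window", 100)),
        speckleRange=int(params.get("speckle_range", 2)),
    )
    disp = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output
    return float(np.mean(np.abs(disp - gt_disp)[valid_mask]))
```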
In deploying the discovered configurations to new data, we show the efficacy of a simple supervised classifier. The classifier used is \(k\)-nearest neighbors with \(k=1\), returning the partition index to be used. We use a pre-trained neural network's top level feature space as the feature space. Specifically, we use SqueezeNet 1.1 [50], pre-trained on ImageNet in PyTorch [51], and its 512-dimensional feature space for images.
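A sketch of this test-time selection step, assuming torchvision's pretrained SqueezeNet 1.1 and scikit-learn (the preprocessing constants are the standard ImageNet ones; the variable names are illustrative):

```
import torch
from torchvision import models, transforms
from sklearn.neighbors import KNeighborsClassifier

# ImageNet-pretrained SqueezeNet 1.1 backbone; pooling its top-level feature
# map yields the 512-dimensional descriptor used for nearest-neighbor lookup.
backbone = models.squeezenet1_1(weights="DEFAULT").features.eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_image):
    feats = backbone(preprocess(pil_image).unsqueeze(0))   # (1, 512, h, w)
    return feats.mean(dim=(2, 3)).squeeze(0).numpy()       # global average pool

selector = KNeighborsClassifier(n_neighbors=1)
# selector.fit([embed(im) for im in train_images], train_partition_labels)
# partition = selector.predict([embed(test_image)])[0]
```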
The test set performance is improved with partitioning, as shown in Fig. 4(c) with quantitative estimates and Fig. 4(b) with two qualitative examples from the test set.
We find that the optimal hyperparameter configuration typically focuses on regularization and filtering. The first configuration usually has less regularization, but a more aggressive filter to discard bad matches, while the second configuration has more regularization and less aggressive filters to discard bad data.
### _Differentiable Rendering_
We optimize the parameters of a recent differentiable renderer [45]. Our dataset includes 20 sequences from the KITTI odometry dataset [7] and 20 synthetic shapes. The KITTI sequences use k-Means to build a quick model of 10 consecutive LIDAR frames, from the center of the scene looking out. In contrast, the synthetic sequences all have the object densely in front of the camera. The differentiable renderer has four hyperparameters: two controlling the sharpness of the silhouettes, one controlling surface smoothness, and one controlling how opaque objects are. We optimize all four for depth and silhouette accuracy, similar to the original paper.
Fig. 4: **Dense Stereo Matching Partitioning quality on the training set. The posthoc and staged methods perform well while the online method is indistinguishable from CMA-ES.**
Fig. 3: **Synthetic Function Partitioning (Section IV-A). Graphs shows the quality of the best found minima for all methods, between the initial configuration and an oracle. Shaded regions indicate standard error of the mean.**
In rendering, the optimizer is unable to find a better single mode configuration than the initialization. However, all proposed methods show a statistically significant improvement over the baseline, with the staged method performing the best. In addition, we report the ability of the methods to properly partition the disparate datasets in Fig. 5(a).
### _Motion Planning_
We evaluate a popular motion planning method, Informed RRT* [52] in the Sampling-Based Motion Planner Testing Environment [53]. We set up three start-goal pairs for three testing environments. We use the geometric mean of runtime (as estimated by the number of expanded nodes) and performance (as estimated by the quality of the first found solution). This balance of runtime and quality is essential to obtaining a configuration of interest. Results are shown in Fig. 7, with only the post hoc method outperforming the baseline. Other methods perform poorly, and we suspect the problem is insufficient samples and lack of exploration in the online and staged methods; especially as RRT*-based planners are stochastic, making evaluations noisy.
We find that the optimal partition finds different parameters focused on the goal sampling frequency (0.2 and 0.3; single mode 0.26) and the rewiring radius (1500 vs 7500; single mode 6000). Of our three start/goal pairs in each of three different environments, optimal partitioning typically grouped one environment together.
We also performed some experiments with RRdT* [54], which we briefly report here. Often, one partition would focus on a configuration that frequently spawned new trees, while the other focused on expanding existing trees.
As it is unclear how to parameterize motion planning goals and environments for supervised classification, we were unable to do experiments on a hold-out test set.
Fig. 5: **Dense Stereo Matching Test Set Performance**.
Fig. 6: **Differentiable Renderer Experiments**
Fig. 7: **Motion Planner Partitioning on a set of environments and planar planning tasks. Section IV-D for details.**
### _Visual Odometry_
We perform experiments on a subset of the TUM VI Visual-Inertial Dataset [55] using DM-VIO [56]. DM-VIO has many parameters but we focus on five (points, immature points, min frames, max frames, max optimization steps). TUM VI has 5 environments, and we select the third sequence from each environment as our dataset.
We prioritize a geometric mean of runtime (frame time) and quality (best-aligned absolute pose error [57]), while penalizing trajectories which do not complete successfully. Results are shown in Fig. 8. Reliably, the algorithm partitions a separate configuration for all but the _slide3_ sequence, which includes fast motion through a closed pipe. The slides partition is the best single mode, while the alternative partition uses fewer points (100 vs 350) and fewer frames (2-4 vs 3-5) as it does not need to handle the difficult high-velocity, highly occluded sequence.
As our VO partitions depend on properties of the sequence, we were unable to construct a reasonable test set based on the first frame. Instead, our multiple configurations may be used in on-the-fly configuration selection [13].
### _Commercial Depth Sensor_
Lastly, we demonstrate our partitioning method on an Intel RealSense D435 [3] and its 35 parameters for estimating depth. We generate a set of 500 randomly generated configurations and evaluate all of them on 10 scenes, for which we collected pseudo ground truths using a moving laser pattern [3]. We partition the configurations using the post hoc method. Results for four scenes are shown in Fig. 9.
The optimal \(K=2\) partition included the best single mode configuration as well, allowing us to show it and the alternative configuration. The single mode configuration produced small holes but dense results outdoors, while the alternative configuration produced smoother, denser walls in indoor environments in exchange for more artifacts outdoors.
## V Discussion
Many algorithms in robotics operate in environments with multiple modes. These natural partitions are easy to understand, and can be discovered naturally by analyzing how different data points respond to different algorithm configurations. The modes were found because they affected algorithm response, not because they happened to be grouped together in some domain-specific feature space.
All the proposed methods for partitioning show some efficacy. Across the board, the post hoc method works well. This is likely due to our extremely small number of evaluations for algorithm configuration [25], which increases the benefit of exploration. The online method typically performs poorly in this setting, and it is possible that more sophisticated bandit algorithms [58] could perform better.
Our experiments focused on two partitions for all methods. Even when problems had more modes by construction, two partitions were able to clearly improve performance.
## VI Conclusion
Automatically finding modes during the course of algorithm configuration is a viable way to improve algorithm performance in several different areas of robotics. More study is needed to understand what typical algorithm modes exist and how such modal configurations might be used in long-term deployed autonomous systems.
Fig. 8: **Visual-Inertial Odometry Experiments**
Fig. 9: **Intel RealSense D435 Partitioning with 500 randomly generated configurations on 10 scenes. Two are shown, with both configurations (and its error). Section IV-F.** |
2304.12920 | Sharp bounds of logarithmic coefficients for a class of univalent
functions | Let $\mathcal{U(\alpha, \lambda)}$, $0<\alpha <1$, $0 < \lambda <1$ be the
class of functions $f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots$ satisfying
$$\left|\left(\frac{z}{f(z)}\right)^{1+\alpha}f'(z)-1\right|<\lambda$$ in the
unit disc ${\mathbb D}$. For $f\in \mathcal{U(\alpha, \lambda)}$ we give sharp
bounds of its initial logarithmic coefficients
$\gamma_{1},\,\gamma_{2},\,\gamma_{3}.$ | Milutin Obradović, Nikola Tuneski | 2023-04-24T06:32:33Z | http://arxiv.org/abs/2304.12920v1 | # Sharp bounds of logarithmic coefficients for a class of univalent functions
###### Abstract.
Let \(\mathcal{U}(\alpha,\lambda)\), \(0<\alpha<1\), \(0<\lambda<1\) be the class of functions \(f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots\) satisfying
\[\left|\left(\frac{z}{f(z)}\right)^{1+\alpha}f^{\prime}(z)-1\right|<\lambda\]
in the unit disc \(\mathbb{D}\). For \(f\in\mathcal{U}(\alpha,\lambda)\) we give sharp bounds of its initial logarithmic coefficients \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\).
Key words and phrases: univalent functions, logarithmic coefficients, sharp bounds. 2010 Mathematics Subject Classification: 30C45, 30C50.
## 1. Introduction and definitions
Let \(\mathcal{A}\) be the class of functions \(f\) which are analytic in the open unit disc \(\mathbb{D}=\{z:|z|<1\}\) of the form
\[f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots, \tag{1}\]
and let \(\mathcal{S}\) be the subclass of \(\mathcal{A}\) consisting of functions that are univalent in \(\mathbb{D}\).
For a function \(f\in\mathcal{S}\) we define its logarithmic coefficients, \(\gamma_{n}\), \(n=1,2,\ldots\), by
\[\log\frac{f(z)}{z}=2\sum_{n=1}^{\infty}\gamma_{n}z^{n}. \tag{2}\]
Relatively little exact information is known about those coefficients. The natural conjecture \(|\gamma_{n}|\leq 1/n\), inspired by the Koebe function (whose logarithmic coefficients are \(1/n\)) is false even in order of magnitude (see Duren [2]). For the class \(\mathcal{S}\) the sharp estimates of single logarithmic coefficients are known only for \(\gamma_{1}\) and \(\gamma_{2}\), namely,
\[|\gamma_{1}|\leq 1\quad\text{and}\quad|\gamma_{2}|\leq\frac{1}{2}+\frac{1}{e}=0.6 35\ldots,\]
and are unknown for \(n\geq 3\). The best known estimate \(|\gamma_{3}|\leq 0.55661\ldots\) was given by the authors (see [7]). For the subclasses of univalent functions the situation is not a great deal better. Only the estimates of the initial logarithmic coefficients are available. For details see [1].
In the paper [3] the class \(\mathcal{U}(\alpha,\lambda)\) (\(0<\alpha<1\), \(0<\lambda<1\)) of functions \(f\in\mathcal{A}\) was introduced by the condition
\[\left|\left(\frac{z}{f(z)}\right)^{1+\alpha}f^{\prime}(z)-1\right|<\lambda, \quad z\in\mathbb{D}. \tag{3}\]
It is shown there that functions from \(\mathcal{U}(\alpha,\lambda)\) are starlike, i.e., belong to the class \(\mathcal{S}^{\star}\) of functions that map the unit disk onto a starlike domain, if
\[0<\lambda\leq\frac{1-\alpha}{\sqrt{(1-\alpha)^{2}+\alpha^{2}}}\equiv\lambda_{ \star}. \tag{4}\]
In the limiting cases when \(\lambda=1\), and either \(\alpha=0\) or \(\alpha=1\), functions in the classes \(\mathcal{U}(0,1)\) and \(\mathcal{U}(1,1)\) satisfy
\[\left|\frac{zf^{\prime}(z)}{f(z)}-1\right|<1,\quad\text{and}\quad\left|\left( \frac{z}{f(z)}\right)^{2}f^{\prime}(z)-1\right|<1,\]
respectively. The former is a subclass of \(\mathcal{S}^{\star}\) since the analytical characterisation of starlike functions is \(\mathrm{Re}\,\frac{zf^{\prime}(z)}{f(z)}>0\) (\(z\in\mathbb{D}\)), while functions in the latter class are univalent (see [5, 6]).
In this paper we consider estimates of three initial logarithmic coefficients for the class \(\mathcal{U}(\alpha,\lambda)\), where \(0<\alpha<1\), \(0<\lambda\leq\lambda_{\star}\) and \(\lambda_{\star}\) is defined by (4).
For our consideration we need the next lemma.
**Lemma 1**.: _[_4_]_ _Let \(f\in\mathcal{U}(\alpha,\lambda),\,0<\alpha<1,\,0<\lambda<1.\) Then there exists a function \(\omega\), analytic in \(\mathbb{D}\), such that \(\omega(0)=0\), \(|\omega(z)|<1\) for all \(z\in\mathbb{D}\), and_
\[\left[\frac{z}{f(z)}\right]^{\alpha}=1-\alpha\lambda z^{\alpha}\int_{0}^{z} \frac{\omega(t)}{t^{\alpha+1}}dt. \tag{5}\]
By \(\Omega\) we denote the class of analytic functions in \(\mathbb{D}\):
\[\omega(z)=c_{1}z+c_{2}z^{2}+c_{3}z^{3}+\cdots, \tag{6}\]
with \(\omega(0)=0\), and \(|\omega(z)|<1\) for all \(z\in\mathbb{D}\).
In their paper [8] Prokhorov and Szynal obtained sharp estimates on the functional
\[\Psi(\omega)=|c_{3}+\mu c_{1}c_{2}+\nu c_{1}^{3}|\]
within the class of all \(\omega\in\Omega.\) For our application we need only a part of those results.
**Lemma 2**.: _[_8_]_ _Let \(\omega(z)=c_{1}z+c_{2}z^{2}+c_{3}z^{3}+\cdots\in\Omega.\) For \(\mu\) and \(\nu\) real numbers, let_
\[\Psi(\omega)=\left|c_{3}+\mu c_{1}c_{2}+\nu c_{1}^{3}\right|,\]
_and_
\[D_{1} = \left\{(\mu,\nu):|\mu|\leq\frac{1}{2},|\nu|\leq 1\right\},\] \[D_{2} = \left\{(\mu,\nu):\frac{1}{2}\leq|\mu|\leq 2,\frac{4}{27}(|\mu|+1)^{ 3}-(|\mu|+1)\leq\nu\leq 1\right\},\] \[D_{3} = \left\{(\mu,\nu):|\mu|\leq 2,|\nu|\geq 1\right\}.\]
_Then, the sharp estimate \(\Psi(\omega)\leq\Phi(\mu,\nu)\) holds, where_
\[\Phi(\mu,\nu)=\left\{\begin{array}{cc}1,&(\mu,\nu)\in D_{1}\cup D_{2}\cup \{(2,1)\};\\ |\nu|,&(\mu,\nu)\in D_{3}.\end{array}\right.\]
## 2. Main results
**Theorem 1**.: _Let \(f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots\) belong to the class \(\mathcal{U}(\alpha,\lambda)\), where \(\lambda_{*}\) is defined by (4). Then the following results are best possible._
1. \(|\gamma_{1}|\leq\frac{\lambda}{2(1-\alpha)}\) _when_ \(0<\lambda\leq\lambda_{*}\) _and_ \(0<\alpha<1\)_._
2. _Let_ \(\lambda_{1}=\frac{2(1-\alpha)^{2}}{\alpha(2-\alpha)}\) _and let_ \(\alpha_{1}=0.4825\ldots\) _be the unique real root of the equation_ \[7\alpha^{4}-20\alpha^{3}+24\alpha^{2}-16\alpha+4=0\] _on the interval_ \((0,1)\)_. Then_ \[|\gamma_{2}|\leq\frac{\lambda}{2(2-\alpha)}\quad\text{if}\quad 0<\lambda\leq \begin{cases}\lambda_{1},\,\alpha\in[\alpha_{1},1),\\ \lambda_{*},\,\alpha\in(0,\alpha_{1}],\end{cases}\] _and_ \[|\gamma_{2}|\leq\frac{\alpha\lambda^{2}}{4(1-\alpha)^{2}}\quad\text{if}\quad \lambda_{1}\leq\lambda\leq\lambda_{*},\,\alpha\in[\alpha_{1},1).\]
3. _Let_ \(\lambda_{1/2}=\frac{(1-\alpha)(2-\alpha)}{2\alpha(3-\alpha)}\)_,_ \(\lambda_{\nu}=\sqrt{\frac{3(1-\alpha)^{3}}{\alpha^{2}(3-\alpha)}}\) _and_ \(\alpha_{1/2}=0.2512\ldots\) _and_ \(\alpha_{\nu}=0.5337\ldots\) _are the unique roots of equations_ \[4-12\alpha-19\alpha^{2}+14\alpha^{3}-2\alpha^{4}=0\] _and_ \[3-9\alpha+9\alpha^{2}-5\alpha^{3}=0,\] _on the interval_ \((0,1)\)_, respectively. Then_ \[|\gamma_{3}|\leq\frac{\lambda}{2(3-\alpha)}\quad\text{if}\quad 0<\lambda\leq \begin{cases}\begin{array}{cc}\lambda_{*},&\alpha\in(0,\alpha_{1/2}],\\ \lambda_{1/2},&\alpha\in[\alpha_{1/2},\alpha_{2}],\\ \lambda_{\nu},&\alpha\in[\alpha_{2},1),\end{array}\end{cases}\] _where_ \(\alpha_{2}=0.9555\ldots\) _is the unique real root of equation_ \(11\alpha^{2}-44\alpha+32=0\) _on_ \((0,1)\)_. Also,_ \[|\gamma_{3}|\leq\frac{\alpha^{2}\lambda^{3}}{6(1-\alpha)^{3}}\quad\text{if} \quad\lambda_{\nu}\leq\lambda\leq\lambda_{*},\,\alpha\in[\alpha_{\nu},1).\]
Proof.: Let \(f\in\mathcal{U}(\alpha,\lambda)\) and \(\omega\in\Omega\) are given by (1) and (6), respectively. Then, from (5), upon integration, we have
\[\left[\frac{z}{f(z)}\right]^{\alpha}=1-\alpha\lambda\sum_{n=1}^{\infty}\frac{ c_{n}}{n-\alpha}z^{n},\]
that is,
\[\frac{f(z)}{z}=\left(1-\alpha\lambda\sum_{n=1}^{\infty}\frac{c_{n}}{n-\alpha}z ^{n}\right)^{-\frac{1}{\alpha}} \tag{7}\]
(the principal value is used here). Further, from (7), having in mind that
\[(1-\alpha z)^{-1/\alpha}=1+z+\frac{1+\alpha}{2}z^{2}+\frac{(1+\alpha)(1+2 \alpha)}{6}z^{3}+\cdots,\]
after some calculations, we obtained
\[\sum_{n=1}^{\infty}a_{n+1}z^{n} =\sum_{n=1}^{\infty}\frac{\lambda c_{n}}{n-\alpha}z^{n}+\frac{1+ \alpha}{2}\left(\sum_{n=1}^{\infty}\frac{\lambda c_{n}}{n-\alpha}z^{n}\right) ^{2}\] \[+\frac{(1+\alpha)(1+2\alpha)}{6}\left(\sum_{n=1}^{\infty}\frac{ \lambda c_{n}}{n-\alpha}z^{n}\right)^{3}+\cdots.\]
By comparing the coefficients, we obtain
\[a_{2} =\frac{\lambda}{1-\alpha}c_{1},\] \[a_{3} =\frac{\lambda}{2-\alpha}c_{2}+\frac{(1+\alpha)\lambda^{2}}{2(1- \alpha)^{2}}c_{1}^{2},\] \[a_{4} =\frac{\lambda}{3-\alpha}c_{3}+\frac{(1+\alpha)\lambda^{2}}{(1- \alpha)(2-\alpha)}c_{1}c_{2}+\frac{(1+\alpha)(1+2\alpha)\lambda^{3}}{6(1- \alpha)^{3}}c_{1}^{3}. \tag{8}\]
On the other hand, by comparing the coefficients in the relation (2), for the logarithmic coefficients we obtain
\[\gamma_{1}=\frac{1}{2}a_{2},\quad\gamma_{2}=\frac{1}{4}(2a_{3}-a_{2}^{2}), \quad\gamma_{3}=\frac{1}{2}(a_{4}-a_{2}a_{3}+\frac{1}{3}a_{2}^{3}). \tag{9}\]
Using the relations (8) and (9), after some calculations, we have
\[\gamma_{1} =\frac{\lambda}{2(1-\alpha)}c_{1},\] \[\gamma_{2} =\frac{1}{4}\left[\frac{2\lambda}{2-\alpha}c_{2}+\frac{\alpha \lambda^{2}}{(1-\alpha)^{2}}c_{1}^{2}\right],\] \[\gamma_{3} =\frac{\lambda}{2(3-\alpha)}\left(c_{3}+\mu c_{1}c_{2}+\nu c_{1 }^{3}\right), \tag{10}\]
where
\[\mu=\frac{\alpha(3-\alpha)\lambda}{(1-\alpha)(2-\alpha)}\quad\text{and}\quad \nu=\frac{\alpha^{2}(3-\alpha)\lambda^{2}}{3(1-\alpha)^{3}}. \tag{11}\]
Since logarithmic coefficients are defined for univalent functions, in order to guarantee univalence of \(f\) in all cases we need \(0<\lambda\leq\lambda_{\star}\), where \(\lambda_{\star}\) is defined in (4).
* From (10) we have \(|\gamma_{1}|\leq\frac{\lambda}{2(1-\alpha)}\), where \(0<\lambda\leq\lambda_{\star}\) and \(0<\alpha<1\). The result is the best possible as the function \(f_{1}\) defined by \[f_{1}(z)=z\left(1-\frac{\alpha\lambda}{1-\alpha}z\right)^{-1/\alpha}=z+\frac{ \lambda}{1-\alpha}z^{2}+\ldots\] shows.
* Using the inequalities \(|c_{1}|\leq 1\), \(|c_{2}|\leq 1-|c_{1}|^{2}\) for \(\omega\in\Omega\) and (10), we have \[|\gamma_{2}| \leq\frac{1}{4}\left[\frac{2\lambda}{2-\alpha}|c_{2}|+\frac{ \alpha\lambda^{2}}{(1-\alpha)^{2}}|c_{1}|^{2}\right]\] \[\leq\frac{1}{4}\left[\frac{2\lambda}{2-\alpha}(1-|c_{1}|^{2})+\frac{ \alpha\lambda^{2}}{(1-\alpha)^{2}}|c_{1}|^{2}\right]\] \[\leq\frac{1}{4}\left[\frac{2\lambda}{2-\alpha}+\left(\frac{\alpha \lambda^{2}}{(1-\alpha)^{2}}-\frac{2\lambda}{2-\alpha}\right)|c_{1}|^{2} \right]\equiv H_{1}(|c_{1}|).\] If \(\frac{\alpha\lambda^{2}}{(1-\alpha)^{2}}-\frac{2\lambda}{2-\alpha}\leq 0\), or equivalently, \[\lambda\leq\frac{2(1-\alpha)^{2}}{\alpha(2-\alpha)}\equiv\lambda_{1},\] then \(|\gamma_{2}|\leq H_{1}(0)=\frac{\lambda}{2(2-\alpha)}\). It is also necessary that \[\lambda\leq\lambda_{\star}=\frac{1-\alpha}{\sqrt{(1-\alpha)^{2}+\alpha^{2}}}.\]
The last inequality will hold if \(\lambda_{1}\leq\lambda_{\star}\), or equivalently, if \[7\alpha^{4}-20\alpha^{3}+24\alpha^{2}-16\alpha+4\leq 0,\] i.e., if \(\alpha\in[\alpha_{1},1)\), where \(\alpha_{1}=0.4825\ldots\) is the unique real root of equation \[7\alpha^{4}-20\alpha^{3}+24\alpha^{2}-16\alpha+4=0\] on the interval \((0,1)\). If \(\alpha\in(0,\alpha_{1}]\), then \(\lambda_{1}\geq\lambda_{\star}\) and we have that \(0<\lambda\leq\lambda_{\star}\) will imply the same result. Finally, if \(\alpha\in[\alpha_{1},1)\), i.e., \(\lambda_{1}\leq\lambda_{\star}\), and \(\lambda_{1}\leq\lambda\leq\lambda_{\star}\), then, from the previous consideration we obtain \[|\gamma_{2}|\leq H_{1}(1)=\frac{\alpha\lambda^{2}}{4(1-\alpha)^{2}}.\] Those results are the best possible as the functions given by (7) for \(c_{2}=1\) (\(c_{1}=c_{3}=\cdots=0\)) or for \(c_{1}=1\) (\(c_{2}=c_{3}=\cdots=0\)), show.
3. From (10) we have (12) \[|\gamma_{3}|\leq\frac{\lambda}{2(3-\alpha)}\left|c_{3}+\mu c_{1}c_{2}+\nu c_{ 1}^{3}\right|=\frac{\lambda}{2(3-\alpha)}\Psi(\omega),\] where \(\mu\) and \(\nu\) are given by (11). Next, we want to apply the results of Lemma 2, and for that we need to distinguish the cases in the definitions of the sets \(D_{1}\), \(D_{2}\), and \(D_{3}\). First, we note that \(\mu\) and \(\nu\) are both positive. Further, \(\mu=\frac{\alpha(3-\alpha)\lambda}{(1-\alpha)(2-\alpha)}\leq\frac{1}{2}\) is equivalent to \[0<\lambda\leq\frac{(1-\alpha)(2-\alpha)}{2\alpha(3-\alpha)}\equiv\lambda_{1/2}.\] It is necessary that \(\lambda\leq\lambda_{\star}\), where \(\lambda_{\star}\) is defined by (4). After some calculations, \(\lambda_{1/2}\leq\lambda_{\star}\) is equivalent to \[4-12\alpha-19\alpha^{2}+14\alpha^{3}-2\alpha^{4}\leq 0,\] i.e., to \(\alpha\in[\alpha_{1/2},1)\), where \(\alpha_{1/2}=0.2512\ldots\) is the unique real root of the equation \[4-12\alpha-19\alpha^{2}+14\alpha^{3}-2\alpha^{4}=0\] on the interval \((0,1)\). In that sense we have (13) \[0<\mu\leq\frac{1}{2}\quad\Leftrightarrow\quad\lambda\leq\left\{\begin{array}[ ]{cc}\lambda_{1/2},&\alpha\in[\alpha_{1/2},1),\\ \lambda_{\star},&\alpha\in(0,\alpha_{1/2}].\end{array}\right.\] On the other hand, by (11), \(\nu=\frac{\alpha^{2}(3-\alpha)\lambda^{2}}{3(1-\alpha)^{3}}\leq 1\) is equivalent to \[0<\lambda\leq\sqrt{\frac{3(1-\alpha)^{3}}{\alpha^{2}(3-\alpha)}}\equiv\lambda _{\nu}.\] It is again necessary that \(\lambda\leq\lambda_{\star}\). Next, \(\lambda_{\nu}\leq\lambda_{\star}\) after some calculations is equivalent to \[3-9\alpha+9\alpha^{2}-5\alpha^{3}\leq 0,\] which is true when \(\alpha\in[\alpha_{\nu},1)\), where \(\alpha_{\nu}=0.5337\ldots\) is the unique real root of equation \[3-9\alpha+9\alpha^{2}-5\alpha^{3}=0.\]
It means that
(14) \[0<\nu\leq 1\quad\Leftrightarrow\quad\lambda\leq\left\{\begin{array}{ll}\lambda_{\nu},&\alpha\in[\alpha_{\nu},1),\\ \lambda_{\star},&\alpha\in(0,\alpha_{\nu}].\end{array}\right.\] Also, \(\lambda_{1/2}\leq\lambda_{\nu}\) is equivalent to \(11\alpha^{2}-44\alpha+32\geq 0\), i.e., to \(\alpha\in(0,\alpha_{2}]\), where \(\alpha_{2}=0.9555\ldots\) is the unique real root of equation \[11\alpha^{2}-44\alpha+32=0\] on the interval \((0,1)\).
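As a quick check of the quoted numerical value, the quadratic formula gives
\[\alpha=\frac{44\pm\sqrt{44^{2}-4\cdot 11\cdot 32}}{2\cdot 11}=\frac{44\pm\sqrt{528}}{22}=2\pm\frac{2\sqrt{33}}{11},\]
so the only root of \(11\alpha^{2}-44\alpha+32=0\) lying in \((0,1)\) is \(\alpha_{2}=2-\frac{2\sqrt{33}}{11}=0.9555\ldots\), the other root \(2+\frac{2\sqrt{33}}{11}=3.04\ldots\) being outside the interval.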
Using all those previous facts, we can conclude that if
\[0<\lambda\leq\left\{\begin{array}{ll}\lambda_{\star},&\alpha\in(0,\alpha_{1 /2}],\\ \lambda_{1/2},&\alpha\in[\alpha_{1/2},\alpha_{2}],\\ \lambda_{\nu},&\alpha\in[\alpha_{2},1),\end{array}\right.\] then \(0<\mu\leq\frac{1}{2}\) and \(0<\nu\leq 1\). By Lemma 2 (case \(D_{1}\)) it means that \(\Psi(\omega)\leq 1\) and so, by (12): \[|\gamma_{3}|\leq\frac{\lambda}{2(3-\alpha)}.\] The result is best possible as the function obtained for \(c_{3}=1\) (\(c_{1}=c_{2}=c_{4}=\cdots=0\)) in (7) shows.
If \(\lambda_{1/2}\leq\lambda\leq\lambda_{\nu}\) and \(\alpha_{\nu}\leq\alpha\leq\alpha_{2}\), then \(0<\nu\leq 1\) and
\[\frac{1}{2}\leq\mu =\frac{\alpha(3-\alpha)\lambda}{(1-\alpha)(2-\alpha)}\leq\frac{ \alpha(3-\alpha)\lambda_{\nu}}{(1-\alpha)(2-\alpha)}\] \[=\frac{\sqrt{3(1-\alpha)(3-\alpha)}}{2-\alpha}\leq 1.2667\ldots.\] The last is obtained for \(\alpha=\alpha_{\nu}=0.5337\ldots\) since \(\frac{\sqrt{3(1-\alpha)(3-\alpha)}}{2-\alpha}\) is a decreasing function on \((\alpha_{\nu},\alpha_{2})\).
For the study of the set \(D_{2}\), we note that the function
\[\phi(\mu)\equiv\frac{4}{27}(1+\mu)^{3}-(1+\mu)\]
is an increasing function for \(\frac{1}{2}\leq\mu\leq 2\), and
\[\phi(\mu)\leq\phi(1.2667\ldots)=-0.541\ldots<0<\nu\leq 1.\] This implies \(\Psi(\omega)\leq 1\) (by Lemma 2, case \(D_{2}\)), and the same sharp estimate \(|\gamma_{3}|\leq\frac{\lambda}{2(3-\alpha)}\) as in the previous case follows.
Finally, for all \(0<\lambda\leq\lambda_{\star}\) we have \(0<\mu\leq 2\) (easy to check), so if \(\lambda_{\nu}\leq\lambda\leq\lambda_{\star}\) and \(\alpha\in[\alpha_{\nu},1)\), then by Lemma 2 (case \(D_{3}\)): \(\Psi(\omega)\leq\nu\), which by (12) implies
\[|\gamma_{3}|\leq\frac{\lambda}{2(3-\alpha)}\frac{\alpha^{2}(3-\alpha)\lambda^ {2}}{3(1-\alpha)^{3}}=\frac{\alpha^{2}\lambda^{3}}{6(1-\alpha)^{3}}.\]
The result is the best possible as the function given by (7) and \(c_{1}=1\) (\(c_{2}=c_{3}=\cdots=0\)) shows. |
2303.10148 | Generative Machine Learning for Detector Response Modeling with a
Conditional Normalizing Flow | In this paper, we explore the potential of generative machine learning models
as an alternative to the computationally expensive Monte Carlo (MC) simulations
commonly used by the Large Hadron Collider (LHC) experiments. Our objective is
to develop a generative model capable of efficiently simulating detector
responses for specific particle observables, focusing on the correlations
between detector responses of different particles in the same event and
accommodating asymmetric detector responses. We present a conditional
normalizing flow model (CNF) based on a chain of Masked Autoregressive Flows,
which effectively incorporates conditional variables and models
high-dimensional density distributions. We assess the performance of the CNF
model using a simulated sample of Higgs boson decaying to diphoton events at
the LHC. We create reconstruction-level observables using a smearing technique.
We show that conditional normalizing flows can accurately model complex
detector responses and their correlation. This method can potentially reduce
the computational burden associated with generating large numbers of simulated
events while ensuring that the generated events meet the requirements for data
analyses. | Allison Xu, Shuo Han, Xiangyang Ju, Haichen Wang | 2023-03-17T17:35:32Z | http://arxiv.org/abs/2303.10148v3 | # Generative Machine Learning for Detector Response Modeling with a Conditional Normalizing Flow
###### Abstract
In this paper, we explore the potential of generative machine learning models as an alternative to the computationally expensive Monte Carlo (MC) simulations commonly used by the Large Hadron Collider (LHC) experiments. Our objective is to develop a generative model capable of efficiently simulating detector responses for specific particle observables, focusing on the correlations between detector responses of different particles in the same event and accommodating asymmetric detector responses. We present a conditional normalizing flow model (\(\mathcal{CNF}\)) based on a chain of Masked Autoregressive Flows, which effectively incorporates conditional variables and models high-dimensional density distributions. We assess the performance of the \(\mathcal{CNF}\) model using a simulated sample of Higgs boson decaying to diphoton events at the LHC. We create reconstruction-level observables using a smearing technique. We show that conditional normalizing flows can accurately model complex detector responses and their correlation. This method can potentially reduce the computational burden associated with generating large numbers of simulated events while ensuring that the generated events meet the requirements for data analyses. We make our code available at [https://github.com/allixu/normalizing_flow_for_detector_response](https://github.com/allixu/normalizing_flow_for_detector_response)
Keywords: Conditional Normalizing Flow, Generative Model, Detector simulation, LHC
## 1 Introduction
The Monte Carlo (MC) simulation frameworks utilized by the Large Hadron Collider (LHC) experiments [1; 2; 3] play a crucial role in the success of its physics program, which probes physics beyond the Standard Model through precision measurements and direct searches. These MC simulation frameworks have been extensively tuned to model particle collisions and detector effects. In general, a simulation framework used by an LHC experiment is a chain of multiple components, including event generation, detector simulation, and event reconstruction. Each of these components may be further factorized into more focused tasks, which are primarily first-principle based, simulating the physics process or detector response according to our best theoretical and phenomenological knowledge of the collision process and detector material. However, the simulation of MC samples, especially the modeling of detector response, is computationally expensive. As the LHC continues to operate successfully, particularly with its upcoming high luminosity program, existing simulation schemes face difficulties in meeting the computational demands that come with the significant increase in integrated luminosity.
The application of generative machine learning as a surrogate for certain aspects or the entirety of the Monte Carlo (MC) simulation utilized at the LHC is a promising solution actively being investigated by the high energy physics community. A significant area of development is the use of generative machine learning to model particle shower development in detectors [4; 5; 6; 7; 8; 9; 10; 11]. Recently, the ATLAS experiment at the LHC has incorporated a Generative Adversarial Networks (GAN) based fast calorimeter shower simulation into its fast detector simulation framework [12]. Another
active area of investigation is the use of generative machine learning to model the collision, parton showering, hadronization, and jet formation processes [13; 14; 15; 16; 17; 18; 19]. In terms of choice of machine learning architecture, Ref.s [4; 5; 12; 13; 16; 17; 19; 20] utilized Generative Adversarial Networks (GAN), Ref.s [7; 8] adopted autoencoders, and Ref.s [9; 10; 18; 21] exploited normalizing flows. More detailed reviews of the state of the art of generative machine learning for particle physics can be found in Ref.s [22; 23].
In this paper, we target a different use case of generative machine learning. Many data analyses, targeting specific signatures, often do not need the detailed information of the collision final state produced from the full simulation framework. For example, in ATLAS \(H\rightarrow\gamma\gamma\) and \(H\rightarrow\mu\mu\) measurements, high-statistics background samples are generated for background modeling, and the equivalent integrated luminosity of these samples can be as large as 30 ab\({}^{-1}\)[24; 25]. In addition, as the Higgs boson measurements enter a precision phase, many analyses would require the simulation of a large number of signal samples with alternative physics parameters such as those defined in the Standard Model effective field theory, which is used in interpreting the observed results. Deploying the full simulation chain that uses GEANT4 package [26] to simulate detailed interactions between particles and detector materials is often unnecessarily inefficient and in some cases unrealistic, for such tasks.
A generative machine learning model that inputs generator-level particle variables and generates the detector responses for specified particle observables is all we need for this kind of analysis use case. We identified the following design objectives: the model should learn the detector response to a given observable as a function of conditional variables; the model should learn the correlation between detector responses of different particles in the same event; and the model should learn asymmetric detector response, which is commonplace in particle detection. Some recent works [27] explored similar objectives using generative models incorporating novel attention mechanisms. In our work, we designed a conditional normalizing flow model (\(\mathcal{CNF}\)) to achieve these objectives. The \(\mathcal{CNF}\) model is based on a chain of Masked Autoregressive Flows [28], which combines the advantages from the normalizing flow [29] and the autoregressive density estimation [30]. The \(\mathcal{CNF}\) model can naturally include conditional variables and model high dimensional density distributions.
We characterized the performance of the \(\mathcal{CNF}\) model using a simulated sample of Higgs boson decaying to diphoton (\(H\rightarrow\gamma\gamma\)) events at the LHC. For this sample, we engineered various physics-motivated detector response scenarios and created reconstruction-level observables using a smearing technique similar to that adopted by the fast detector simulation package DELPHES [31].
This paper is organized as follows: Section 3 describes the event generation and the smearing technique used to introduce experimental effects; Sections 2 and 4 present the architecture of our conditional normalizing flows model, and these sections also include training configurations; Section 5 shows the performance of the \(\mathcal{CNF}\) tool in various scenarios; Section 6 summarizes the findings and discusses potential applications and extensions of this tool.
## 2 Conditional Normalizing Flows
A normalizing flow is a technique that transforms a simple base density distribution \(\pi(\vec{z})\) to a more complex target density distribution \(p(\vec{x})\) using a bijective, differentiable function known as
a bijection \(\vec{x}=f(\vec{z})\). A normalizing flow often uses a chain of bijections to construct the final bijection, which allows the modeling of complex target distributions. To make the normalizing flow learnable and computationally efficient, each bijection is often chosen to be a simple function whose coefficients are parameterized by neural networks, typically MultiLayer Perceptrons (MLPs). Using the change-of-variables technique, a normalizing flow can estimate the target density distribution for an input vector \(\vec{x}\). The learnable weights of the neural network, \(\vec{w}\), are then optimized with the Adam [32] optimizer by minimizing the negative log-likelihood function \(\mathcal{L}(\vec{w}|\vec{x})\). A normalizing flow can be extended to a conditional normalizing flow by concatenating the conditional vector \(\vec{c}\) with the input vector \(\vec{x}\) and using the combined vector to estimate the target density distribution.
Our \(\mathcal{CNF}\) implementation was based on a type of normalizing flow known as the Masked Autoregressive Flow (MAF). In MAF, the bijection transforms the base density distribution by sequentially transforming each dimension based on the previously transformed dimensions. Because of this autoregressive structure, the transform depends on the ordering of the input vector and is slow for sampling. To minimize the ordering effect, we added a permutation bijection to each MAF.
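To make the mechanism concrete, the following is a minimal sketch, not the implementation used in this work, of a single conditional affine autoregressive step; the toy shift/log-scale function stands in for the masked MLP described in Section 4.

```python
# Minimal sketch of one conditional affine autoregressive (MAF-style) step.
# `shift_log_scale_fn` is a stand-in for the masked MLP; the sequential loop
# below is what makes drawing samples from a MAF slow, as noted above.
import numpy as np

def maf_forward(z, cond, shift_log_scale_fn):
    """Map a base sample z to x dimension by dimension, conditioned on `cond`."""
    x = np.zeros_like(z)
    log_det = 0.0
    for i in range(len(z)):
        # dimension i sees only already-transformed dimensions and the condition
        shift, log_scale = shift_log_scale_fn(np.concatenate([x[:i], cond]), i)
        x[i] = z[i] * np.exp(log_scale) + shift
        log_det += log_scale  # change-of-variables term used in the log-likelihood
    return x, log_det

# toy stand-in so the example runs: small data-dependent shift, mild contraction
toy_fn = lambda h, i: (0.1 * float(h.sum()), -0.05)
x, log_det = maf_forward(np.random.randn(6), np.ones(5), toy_fn)
```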
In this work, we achieve the generation of detector responses that vary as functions of particle kinematics and event conditions with a conditional normalizing flow model. The target density distribution is a multidimensional distribution that describes the detector responses of particle kinematic observables and their correlation. The conditional vector comprises particle kinematics and event variables on which target detector responses depend.
## 3 Data Samples
### Event Generation
This study simulated the Higgs boson production in \(pp\) collisions at \(\sqrt{s}=13\) TeV. The Higgs boson subsequently decays into a pair of photons. The events were generated by the Madgraph@NLO (v2.3.7) [33] at next-to-leading order (NLO) accuracy in QCD. The Higgs boson decay, and the parton showering and hadronization processes, were implemented by Pythia 8.235 [34] with the CTEQ6L1 parton distribution function set [35]. A total of seven million events were generated. For the study, events were required to have at least two photons, each of which should have a transverse energy (\(E_{\mathrm{T}}\)) greater than 20 GeV and an absolute value of pseudorapidity (\(\eta\)) of less than 2.5.
### Detector Response
For a collider observable \(X\), we express its reconstructed value, \(X_{\mathrm{reco}}\), as the sum of its true value, \(X_{\mathrm{true}}\), and a term, \(\Delta_{X}\), resulting from the experimental effects in the particle detection and reconstruction: \(X_{\mathrm{reco}}=X_{\mathrm{true}}+\Delta_{X}\). In this study, we define \(\Delta_{X}\) as the _detector response_ of observable \(X\). For an ensemble of \(X\) measurements, the distribution of its detector response \(\Delta_{X}\) can be modeled by a location-scale family probability density function. The two most significant characteristics of the detector response are its scale and resolution, corresponding to the location and width of the function. We refer to this function as the _detector response_ function of observable \(X\), \(f_{X}(\theta)\), where \(\theta\) denotes a set of variables affecting the measurement.
In lieu of a detector simulation, we can create a proxy of \(X_{\mathrm{reco}}\) for a given \(X_{\mathrm{true}}\) by randomly sampling the detector response function \(f_{X}(\theta)\) and deriving \(X_{\mathrm{reco}}\) from \(X_{\mathrm{true}}+\Delta_{X}\). We used this
technique to create detector response and reconstruction-level observables that are considered as targets for the \(\mathcal{CNF}\) model.
### Experimental Effects in Photon Detection and Reconstruction
Collider experiments measure photons with an electromagnetic calorimeter (ECAL). For example, the ECAL at the ATLAS experiment is a LAr sampling calorimeter that uses lead/stainless steel as absorbing material and liquid Argon as sampling material [36]; the CMS experiment has a total-absorption ECAL constructed with Lead-Tungstate crystals [37]. The two experiments, adopting complementary calorimeter technologies, achieve similar photon detection and reconstruction performances. Both used the Crystal Ball or Double-sided Crystal Ball functions to model the detector response of photon energy measurements [38; 39]. Such functions include a Gaussian function to model the core part of the detector response distribution, and power-law functions to model the tails. Various instrumentation effects, such as photon conversions in materials upstream of the calorimeter, the presence of inactive materials in the calorimeter, energy leakage, etc., can introduce a low energy tail in the \(\Delta\) distribution.
At the LHC experiments, multiple proton collisions occur during the same bunch crossing, and this phenomenon is known as _pile up_. The extent of pile up is quantified by the average number of proton interactions per bunch crossing, \(\mu\), which has a mean value greater than 30 for the 2018 data-taking of ATLAS and CMS experiments [40]. Contributions from pile-up collisions deteriorate the measurements of particles arising from the primary collision. As a result, the detector response also depends on \(\mu\).
The correlation between measurements of various particles within the same collision event also needs to be considered. For instance, when determining the photon pseudo-rapidity, the collision event primary vertex is used as the photon origin, leading to correlations in the pseudo-rapidity measurements of photons in the same event. The use of pile-up suppression techniques in collider experiments also results in correlations between measurements of different photons, because both measurements receive corrections related to the global energy density of the same collision event.
### Parameterization
In this study, we consider the following photon observables: the transverse energy (\(E_{\mathrm{T}}\)), the pseudorapidity (\(\eta\)), and the azimuthal angle (\(\phi\)). Given these observables for each of the two photons in an \(H\to\gamma\gamma\) event, we can reconstruct the four momentum of the diphoton system, which is a proxy for the Higgs boson.
Resolutions of these photon observables vary as a function of its transverse energy and pseudorapidity and the event pile-up \(\mu\). Specifically, for each photon observable, the photon resolution dependencies are parameterized as follows:
\[R_{E_{\mathrm{T}}}(E_{\mathrm{T}},\eta,\mu)=1.5\times R_{E_{ \mathrm{T}}}(E_{\mathrm{T}})\cdot R_{E_{\mathrm{T}}}(\eta)\cdot R_{E_{\mathrm{ T}}}(\mu) \tag{1}\] \[R_{\eta}(E_{\mathrm{T}},\eta,\mu)=0.0005\times R_{\eta}(E_{ \mathrm{T}})\cdot R_{\eta}(\eta)\cdot R_{\eta}(\mu)\] (2) \[R_{\phi}(E_{\mathrm{T}},\eta,\mu)=0.0003\times R_{\phi}(E_{ \mathrm{T}})\cdot R_{\phi}(\eta)\cdot R_{\phi}(\mu) \tag{3}\]
where the resolution's dependencies are modeled separately by fourth-order polynomials \(R_{x}(\theta)\), where \(x\) represents a photon observable and \(\theta\) is a variable on which the photon resolution depends. The constants in the resolution functions roughly correspond to the best resolution values in the parameterization and are chosen to be compatible with numbers published by the ATLAS experiment [38]. The polynomial parameterization is given in the Appendix. Figure 2 shows the resolution of measurements of photon kinematic observables \(E_{\mathrm{T}}\), \(\eta\), and \(\phi\), as functions of true values of photon \(E_{\mathrm{T}}\) and \(\eta\), as well as pile-up \(\mu\).
### Scenarios
We consider three detector response scenarios: a _baseline_ scenario, a _correlation_ scenario, and an _asymmetric response_ scenario. In the _correlation_ scenario, detector responses for two photons were generated from the same set of detector response functions as in the _baseline_ scenario, but in a correlated manner. In the _asymmetric response_ scenario, detector responses for two photons were sampled independently from asymmetric resolution functions with a most probable value of 0.
**Baseline.** The photon detector response function \(f_{X}(\theta)\) is a normal distribution with a mean of zero and a width of \(R_{X}(E_{\mathrm{T}},\eta,\mu)\). For each of the two photons in the event, its detector response \(\Delta_{X}\) was sampled from this normal distribution with true values of photon \(E_{\mathrm{T}}\), \(\eta\), and the event \(\mu\) as input. The event \(\mu\) was randomly sampled from a uniform distribution between 0 and 40. In the baseline scenario, the detector responses are independent between photons and are normally distributed.
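A minimal sketch of this smearing step is given below; the toy resolution function is an assumption standing in for the polynomial parameterization of Section 3.4 and does not reproduce the appendix coefficients.

```python
# Sketch of the baseline smearing: Delta_X is drawn from a zero-mean Gaussian
# whose width is the resolution R_X(ET, eta, mu), and X_reco = X_true + Delta_X.
import numpy as np
rng = np.random.default_rng(seed=1)

def smear(x_true, et_true, eta_true, mu, resolution_fn):
    sigma = resolution_fn(et_true, eta_true, mu)
    return x_true + rng.normal(loc=0.0, scale=sigma)

mu = rng.uniform(0.0, 40.0)                     # pile-up drawn uniformly in [0, 40]
toy_res = lambda et, eta, m: 1.5 * 0.01 * et    # placeholder resolution, not the paper's
et_reco = smear(60.0, 60.0, 0.4, mu, toy_res)   # reconstruction-level proxy for ET
```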
**Correlation.** We generated detector responses for the two photons in a correlated manner. For a given observable \(X\), we used the procedure described in the baseline scenario to create independent detector responses for the two photons (denoted as \(R_{1}\) and \(R_{2}\), respectively). The ordering of the photons is not critical, and we choose to order photons by their transverse energies. To introduce a correlation between the detector responses of two photons, we redefined the detector response for the second photon as follows:
\[R_{2}^{\mathrm{redefined}}=\rho^{2}R_{1}+\sqrt{1-\rho^{2}}R_{2} \tag{1}\]
The parameter \(\rho\) controls the correlation. To validate whether our generative model accurately captured the correlation, we created two target samples of events with \(\rho\) set to 1.0 or 0.5.
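As a minimal illustration (not code from this study), the redefinition above, applied as written in Eq. (1) to the independently sampled responses, reads:

```python
# Introduce a correlation between the two photon responses following Eq. (1):
# R1 is kept, and the second photon's response is redefined from R1 and R2.
import numpy as np

def correlate(r1: float, r2: float, rho: float) -> float:
    return rho**2 * r1 + np.sqrt(1.0 - rho**2) * r2

r2_redefined = correlate(0.8, -0.3, rho=0.5)
```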
**Asymmetric detector response.** We define the detector response function as a linear combination of two normal distributions. The core part of this detector response function is the same as the normal distribution defined in the baseline scenario, and the tail part is a normal distribution with a broad width and a mean shifted to a lower value. These asymmetric detector response functions are shown in Figure 6. The detector responses are drawn independently between two photons.
## 4 Model Architecture
In this work, the input vector \(\vec{x}\) corresponds to detector responses of six photon kinematic variables, namely, the \(E_{\mathrm{T}}\), \(\eta\), and \(\phi\) for each of the two photons. The base density distribution is a six-dimensional normal distribution. The conditional input vector \(\vec{c}\) supplied to the \(\mathcal{CNF}\) model
comprises the pileup condition \(\mu\) and the particle-level kinematic variables, \(X_{\text{true}}\), where \(X\in E_{\text{T}}^{\gamma 1},E_{\text{T}}^{\gamma 2},\eta^{\gamma 1},\eta^{ \gamma 2}\). Superscripts \(\gamma 1\) and \(\gamma 2\) indicate two distinct photons. The azimuthal angle \(\phi\) is excluded from the conditional input, as collider detectors like ATLAS and CMS exhibit symmetry in \(\phi\) and consequently provide uniform performance in that dimension. The \(\mathcal{CNF}\) model converts the six-dimensional base density distribution into the output six-dimensional target distribution. The conditional input features, and the input and output features are summarized in Table 1.
The input detector responses of the photon \(E_{\text{T}}\), \(\eta\), and \(\phi\) are scaled to be within \([-1,1]\). Accordingly, a \(\tanh\) bijection is added as the last bijection in the \(\mathcal{CNF}\) to ensure the output detector responses are also within \([-1,1]\). Events that render an absolute value of the scaled detector response above one were discarded in the study. The fraction of rejected events is negligible for the _baseline_ scenario and the _correlation_ scenario, and it is about \(8\%\) in the _asymmetric detector effect_ scenario. The data sample was split into \(80\%\) for training, \(10\%\) for validation, and \(10\%\) for testing. The validation sample was used to tune the hyperparameters of the model, and the testing sample was used to study the performance of the \(\mathcal{CNF}\) model.
The hyperparameters of the \(\mathcal{CNF}\) are described as follows. First, the base density distribution, \(\pi(\overline{z})\), is chosen to be a multivariate normal distribution, motivated by the overall similarity between detector response distributions and normal distributions. Second, the MLPs inside each MAF module consist of two layers of dense networks with a layer size of \(128\) and a ReLU activation function [41]. When we increased the layer size or the number of layers by a factor of two, no significant improvement was observed. Third, we used ten bijection blocks as a result of a trade-off between computational expense and model complexity. Increasing the number of bijection blocks to \(20\) did not result in any performance improvement compared to the nominal setup of ten bijection blocks. The model might have gained additional improvement if additional training epochs were pursued. Fourth, instead of using a constant learning rate, we employed a learning rate scheduler that decays the learning rate from \(10^{-3}\) to \(10^{-5}\) following a power-law distribution; doing so smoothed the training loss distribution and boosted the performance.
All models were trained for \(500\) epochs and the best model was chosen for testing. We chose to use the "Wasserstein Distance" (\(WD\)) [42; 43; 44], a measure of the dissimilarity of two probability distributions, to monitor the model performance during the course of training. The \(WD\) for a given variable is evaluated between its target and generated distributions. We define the mean Wasserstein Distance, \(\overline{WD}\), as the arithmetic mean of the \(WD\) values for the six photon detector response variables. After each epoch, the \(\overline{WD}\) is evaluated using the validation sample. Figure 1 shows the \(\overline{WD}\) for each epoch and the minimum \(\overline{WD}\) up to that epoch, as evaluated on the validation sample. The model that yields the minimum \(\overline{WD}\) is selected for our study.
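A sketch of this selection metric, assuming the responses are stored as (n_events, 6) arrays, is shown below; scipy's one-dimensional Wasserstein distance is used for each variable.

```python
# Mean Wasserstein distance over the six detector-response variables,
# evaluated between target (smeared) and generated responses on validation data.
import numpy as np
from scipy.stats import wasserstein_distance

def mean_wd(target, generated):
    return float(np.mean([wasserstein_distance(target[:, j], generated[:, j])
                          for j in range(target.shape[1])]))

# after each epoch: keep the checkpoint with the smallest mean WD seen so far
```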
\begin{table}
\begin{tabular}{c c} \hline \hline Conditional Features & Input and Output Features \\ \hline \(E_{\text{T}}^{\gamma 1},E_{\text{T}}^{\gamma 2}\) & \(\Delta_{E_{\text{T}}^{\gamma 1}},\Delta_{E_{\text{T}}^{\gamma 2}}\) \\ \(\eta^{\gamma 1},\eta^{\gamma 2}\) & \(\Delta_{\eta^{\gamma 1}},\Delta_{\eta^{\gamma 2}}\) \\ pile-up \(\mu\) & \(\Delta_{\phi^{\gamma 1}},\Delta_{\phi^{\gamma 2}}\) \\ \hline \end{tabular}
\end{table}
Table 1: A summary of the conditional features, and input and output features of the model.
## 5 Results
For each scenario outlined in Sec. 3.5, we trained a separate \(\mathcal{CNF}\) model. We then applied the trained model to the test samples and computed reconstruction-level photon kinematic variables using generated detector responses. The sample where the reconstruction-level variables are created from the smearing technique is referred to as the target sample, and the sample where the reconstruction-level variables are calculated from the \(\mathcal{CNF}\) generated detector responses is referred to as the \(\mathcal{CNF}\) sample.
**Baseline scenario.** To quantify the extent to which the \(\mathcal{CNF}\) learns the detector response accurately, we calculated the detector resolutions of photons as a function of photon four momenta at the particle level. The detector resolution is defined as the width of the core of the detector response distribution, \(\Delta_{X}\). Figure 2 shows a good agreement between the target detector resolutions and the \(\mathcal{CNF}\) learned ones as functions of photon \(E_{\mathrm{T}}\) and \(\eta\) and event \(\mu\). The largest discrepancy is less than 5%. Figure 3 shows the comparison of the target and learned distributions for photon \(E_{\mathrm{T}}\), \(\eta\), and \(\phi\) at the detector level. A good agreement is observed in all distributions. In regions where the statistics of simulated events are low, such as the high \(E_{\mathrm{T}}\) region, the performance of the \(\mathcal{CNF}\) would benefit from more simulation events in future studies.
We also calculated the invariant mass and transverse momentum of the diphoton system using the target sample and the \(\mathcal{CNF}\) sample. Figure 4 shows the comparison of their distributions. The mean and standard deviation values in the diphoton invariant mass distribution of the \(\mathcal{CNF}\)
Figure 1: The mean Wasserstein Distance (orange) and the minimum Wasserstein distance (blue) as a function of the training epochs for the baseline scenario. These quantities were evaluated on the validation sample.
sample are in agreement with those from the target sample within the statistical precision. For the diphoton transverse momentum distribution, an agreement between the target and the \(\mathcal{CNF}\) samples is seen across the full range.
Figure 2: Target and generated photon resolutions \(\sigma\) for photon kinematic variables \(E_{\mathrm{T}}\), \(\eta\), and \(\phi\). The resolutions are shown as functions of the true values of photon \(E_{\mathrm{T}}\) and \(\eta\), and the event pile-up \(\mu\). The blue (orange) entries represent the target (generated) quantities. The target resolutions corresponding to the parameterization presented in Section 3.4.
**Correlation scenario.** Two sets of target samples were generated, with the correlation parameter \(\rho\) set to 0.5 and 1.0. The \(\mathcal{CNF}\) model was trained separately for these two samples. Detector responses were generated for the six measurements in the event. Their correlation matrix is shown in Figure 5 using the \(\rho=0.5\) sample. The built-in correlation of \(\rho=0.5\) was accurately reproduced. The same performance was also achieved in the case of \(\rho=1.0\). These tests indicate that the \(\mathcal{CNF}\) model can accurately reproduce the correlations between the two photons that were built into the measurements.
Asymmetric detector responses scenarioFigure 6 shows the target and generated detector response distributions. The asymmetric tails in various detector response distributions are reproduced by the \(\mathcal{CNF}\) model.
## 6 Conclusions
The \(\mathcal{CNF}\) model presented in this work demonstrates its ability to describe the detector response to two photons produced in a single collision event. Our analysis shows that the model is capable of capturing the dependencies of detector responses on both particle and event observables, the correlations between particles within the same event, and the asymmetric behavior in the detector response. Generally, collision events comprise various particle types, each with distinct detector responses, and their multiplicities can also differ between events. Our model is versatile and can be adapted to create detector responses for events containing a larger number and diversity of particles. For instance, one can expand the model's output to produce detector responses for more than two particles simultaneously or apply this model to generate detector responses for a single particle, then sequentially apply it to particle types within the same event.
Figure 5: Correlation map of the kinematic resolution of the leading and sub-leading photons for a correlation coefficient of \(\rho=0.5\). Similar performance is observed for the \(\rho=1.0\). In the target correlation map, all off-diagonal entries except those designed to be 50% correlated are zeros.
Figure 6: Distributions of detector responses for photon kinematic variables \(E_{\mathrm{T}}\), \(\eta\), and \(\phi\) are shown for the target sample (blue) and the generated sample (orange), in the asymmetric detector effect scenario.
## Appendix
The variation of the resolution is parameterized as \(R_{X}(x)=\frac{\sum_{i}p_{i}x^{i}}{\mathcal{C}}\), where \(X\) is the measured quantity, \(x\) is the variable on which the resolution depends, \(p_{i}\) is the coefficient of the polynomial, and \(\mathcal{C}\) is a normalization constant. These parameter values are given in Table 2.
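For illustration, one such factor could be evaluated as below; the coefficients shown are placeholders, since the actual values are those listed in Table 2.

```python
# Evaluate one resolution factor R_X(x) = (sum_i p_i x^i) / C from the appendix
# parameterization; numpy expects the highest-order coefficient first.
import numpy as np

p = [4.1e-7, -6.3e-5, 3.2e-3, -4.0e-2, 1.0]   # hypothetical p4..p0, not Table 2 values
C = 1.0                                        # normalization constant
R_factor = np.polyval(p, 45.0) / C             # e.g. the ET-dependence at ET = 45 GeV
```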
## Acknowledgments
This work is supported by the U.S. Department of Energy, Office of Science under contract DE-AC02-05CH11231. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.
|
2309.02517 | Towards User Guided Actionable Recourse | Machine Learning's proliferation in critical fields such as healthcare,
banking, and criminal justice has motivated the creation of tools which ensure
trust and transparency in ML models. One such tool is Actionable Recourse (AR)
for negatively impacted users. AR describes recommendations of cost-efficient
changes to a user's actionable features to help them obtain favorable outcomes.
Existing approaches for providing recourse optimize for properties such as
proximity, sparsity, validity, and distance-based costs. However, an
often-overlooked but crucial requirement for actionability is a consideration
of User Preference to guide the recourse generation process. In this work, we
attempt to capture user preferences via soft constraints in three simple forms:
i) scoring continuous features, ii) bounding feature values and iii) ranking
categorical features. Finally, we propose a gradient-based approach to identify
User Preferred Actionable Recourse (UP-AR). We carried out extensive
experiments to verify the effectiveness of our approach. | Jayanth Yetukuri, Ian Hardy, Yang Liu | 2023-09-05T18:06:09Z | http://arxiv.org/abs/2309.02517v1 | # Towards User Guided Actionable Recourse
###### Abstract.
Machine Learning's proliferation in critical fields such as healthcare, banking, and criminal justice has motivated the creation of tools which ensure trust and transparency in ML models. One such tool is _Actionable Recourse_ (AR) for negatively impacted users. AR describes recommendations of cost-efficient changes to a user's _actionable_ features to help them obtain favorable outcomes. Existing approaches for providing recourse optimize for properties such as proximity, sparsity, validity, and distance-based costs. However, an often-overlooked but crucial requirement for actionability is a consideration of _User Preference_ to guide the recourse generation process. In this work, we attempt to capture user preferences via soft constraints in three simple forms: _i) scoring continuous features_, _ii) bounding feature values_ and _iii) ranking categorical features_. Finally, we propose a gradient-based approach to identify _User Preferred Actionable Recourse_ (_UP-AR_). We carried out extensive experiments to verify the effectiveness of our approach.
Actionable recourse, User preference
Footnote †: journal: Accepted in 2023.
* We start by enabling Alice to provide three types of user preferences: i) _Scoring_, ii) _Ranking_, and iii) _Bounding_. We embed them into an optimization function to guide the recourse generation mechanism.
* We then present _User Preferred Actionable Recourse (UP-AR)_ to identify a recourse tailored to her liking. Our approach highlights a cost correction step to address the _redundancy_ induced by our method.
* We consolidate performance metrics with empirical results of UP-AR across multiple datasets and compare them with state-of-art techniques.
### Related Works
Several methods exist to identify counterfactual explanations, such as FACE (Zhou et al., 2017), which uses the shortest path to identify counterfactual explanations from high-density regions, and Growing Spheres (GS) (Shi et al., 2017) which employs random sampling within increasing hyperspheres for finding counterfactuals. CLUE (Brock et al., 2015) identifies counterfactuals with low uncertainty in terms of the classifier's entropy within the data distribution. Similarly, manifold-based CCHVAE (Zhou et al., 2017) generates high-density counterfactuals through the use of a latent space model. However, there is often no guarantee that the _what-if_ scenarios identified by these methods are attainable.
Existing research focuses on providing feasible recourses, yet comprehensive literature on understanding and incorporating user preferences within the recourse generation mechanism is lacking. It is worth mentioning that instead of understanding user preferences, Mothilal et al. (Mothilal et al., 2018) provides a user with diverse recourse options and hopes that the user will benefit from at least one. The importance of diverse recourse recommendations has also been explored in recent works (Mothilal et al., 2018; Krizim et al., 2018; Krizim et al., 2018), which can be summarized as increasing the chances of actionability as intuitively observed in the domain of unknown user preferences (Krizim et al., 2018). Karimi et al. (Krizim et al., 2018) and Cheng et al. (Cheng et al., 2019) also resolve uncertainty in a user's cost function by inducing _diversity_ in the suggested recourses. Interestingly, only 16 out of the 60 recourse methods explored in the survey by Karimi et al. (Krizim et al., 2018) include diversity as a constraint where diversity is measured in terms of distance metrics. Alternatively, studies like Cui et al. (Cui et al., 2019), Rawal and Lakkaraju (Krizim et al., 2018), Ustun et al. (Ustun et al., 2019) optimize on a universal cost function. This does not capture individual idiosyncrasies and preferences crucial for actionability.
Efforts to elicit user preferences include recent work by De Toni et al. (Doni et al., 2019). The authors provide an interactive human-in-the-loop approach, where a user continuously interacts with the system. However, learning user preferences by asking users to select one of the provided _partial interventions_ is a derivative of providing a diverse set of recourse candidates. In this work, we consider fractional cost as a means to communicate with Alice, where the fractional cost of a feature refers to the _fraction of cost incurred from a feature \(i\) out of the total cost of the required intervention_.
The notion of user preference or user-level constraints was previously studied as _local feasibility_(Toni et al., 2019). Since users can not precisely quantify the cost function (Krizim et al., 2018), Yadav et al. (Yadav et al., 2019) diverged from the assumption of a universal cost function and optimizes over the distribution of cost functions. We argue that the inherent problem of feasibility can be solved more accurately by capturing and understanding Alice's recourse preference and adhering to her constraints which can vary between _Hard Rules_ such as unable to bring a co-applicant and _Soft Rules_ such as hesitation to reduce the amount, which should not be interpreted as unwillingness. This is the first study to capture individual idiosyncrasies in the recourse generation optimization to improve feasibility.
## 2. Problem Formulation
Consider a binary classification problem where each instance represents an individual's feature vector \(\mathbf{x}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{D}]\) and associated binary label \(\mathbf{y}\in\{-1,+1\}\). We are given a model \(f(\mathbf{x})\) to classify \(\mathbf{x}\) into either \(-1\) or \(+1\). Let \(f(\mathbf{x})=+1\) be the desirable output of \(f(\mathbf{x})\) for Alice. However, Alice was assigned an undesirable label of \(-1\) by \(f\). We consider the problem of suggesting action \(\mathbf{r}=[\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{D}]\) such that \(f(\mathbf{x}+\mathbf{r})=+1\). Since suggested recourse only requires actions to be taken on _actionable features_ denoted by \(F_{A}\), we have \(\mathbf{r}_{i}\equiv 0:\forall i\notin F_{A}\). We further split \(F_{A}\) into _continuous actionable features_ \(F_{con}\) and _categorical actionable features_ \(F_{cat}\) based on feature domain. Action \(\mathbf{r}\) is obtained by solving the following optimization, where _userCost_ \((\mathbf{r},\mathbf{x})\) is any predefined cost function of taking an
\begin{table}
\begin{tabular}{l c c c} \hline \hline Actionable & Curr. & \multicolumn{2}{c}{UP-AR values} \\ Features & val. & & \\ \cline{3-4} & & Alice & Bob \\ \hline LoanDuration & 18 & 8 & 17 \\ LoanAmount & $1940 & $1840 & $1200 \\ HasGuarantor & 0 & 0 & 1 \\ HasCoapplicant & 0 & 1 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1. A hypothetical actionable feature set of adversely affected individuals sharing similar features and corresponding suggested actions by AR and UP-AR. UP-AR provides personalized recourses based on individual user preferences.
Figure 1. Illustration of UP-AR. Similar individuals Alice and Bob with contrasting preferences can have different regions of desired feature space for a recourse.
action \(\mathbf{r}\) such that:
\[\min_{\mathbf{r}}\ userCost(\mathbf{r},\mathbf{x}) \tag{2}\] \[s.t.\ \ userCost(\mathbf{r},\mathbf{x})=\sum_{i\in F_{A}}userCost(\mathbf{r}_{i},\mathbf{x}_{i}) \tag{3}\] \[\text{and }f(\mathbf{x}+\mathbf{r})=+1. \tag{1}\]
### Capturing individual idiosyncrasies
A crucial step for generating recourse is identifying _local feasibility_ constraints captured in terms of individual user preferences. In this study, we assume that every user provides their preferences in three forms. Every continuous actionable feature \(i\in F_{con}\) is associated with a _preference score_\(\Gamma_{i}\) obtained from the affected individual. Additional preferences in the form of feature value bounds and ranking for preferential treatment of categorical features are also requested from the user.
User Preference Type I (Scoring continuous features): User preferences for continuous features are captured in \(\Gamma_{i}\in[0,1]:\forall i\in F_{con}\), subject to \(\sum_{i\in F_{con}}\Gamma_{i}=1\). Such _soft constraints_ capture the user's preference without omitting the feature from the actionable feature set. \(\Gamma_{i}\) refers to the fractional cost of action Alice prefers to incur from a continuous feature \(i\). For example, consider \(F_{con}=\{LoanDuration,LoanAmount\}\) with corresponding user-provided scores \(\Gamma=\{0.8,0.2\}\), implying that Alice prefers to incur 80% of the fractional feature cost from taking action on _LoanDuration_, and only 20% from taking action on _LoanAmount_. Here, Alice prefers reducing _LoanDuration_ to reducing _LoanAmount_, and providing recourse accordingly improves actionability.
User Preference Type II (Bounding feature values):Users can also provide constraints on values for individual features in \(F_{A}\). These constraints are in the form of lower and upper bounds for individual feature values represented by \(\delta_{i}\) and \(\overline{\delta_{i}}\) for any feature \(i\) respectively. These constraints are used to discretize the steps. For a continuous feature \(i\), action steps can be discretized into pre-specified step sizes of \(\Delta_{i}=\{s:s\in[\delta_{i},\overline{\delta_{i}}]\}\). For categorical features, steps are defined as the feasible values a feature can take. For all categorical features we define, \(\Delta_{i}=\{\delta_{i},\ldots,\overline{\delta_{i}}\}:\forall i\in F_{cat}\) representing the possible values for categorical feature \(i\).
User Preference Type III (Ranking categorical features):Users are also asked to provide a ranking function \(\mathcal{R}:F_{cat}\rightarrow\mathbb{Z}^{+}\) on \(F_{cat}\). Let \(\mathcal{R}_{i}\) refer to the corresponding rank for a categorical feature \(i\). Our framework identifies recourse by updating the candidate action based on the ranking provided. For example, consider \(F_{cat}=\{HasCoapplicant,HasGuarantor,CriticalAccountOrLoansElsewhere\}\), which Alice ranks as \(\{3,2,1\}\). The recourse generation system then considers suggesting an action on _HasGuarantor_ before _HasCoapplicant_. Ranking preferences can be easily guaranteed by a simple override in case of discrepancies while finding a recourse.
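To make the three preference types concrete, the following is a minimal Python sketch of how such user input could be represented; the class name, field names, and the illustrative bound values are our own assumptions and not part of UP-AR's actual implementation.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class UserPreferences:
    """Hypothetical container for the three elicited preference types."""
    scores: Dict[str, float]                  # Type I: Gamma_i per continuous feature, sums to 1
    bounds: Dict[str, Tuple[float, float]]    # Type II: (lower, upper) value bounds per feature
    ranking: Dict[str, int]                   # Type III: rank per categorical feature

    def validate(self) -> None:
        assert abs(sum(self.scores.values()) - 1.0) < 1e-6, "Type I scores must sum to 1"

# Alice from the running example (bounds are illustrative assumptions)
alice = UserPreferences(
    scores={"LoanDuration": 0.8, "LoanAmount": 0.2},
    bounds={"LoanDuration": (6, 36), "LoanAmount": (500, 10000)},
    ranking={"CriticalAccountOrLoansElsewhere": 1, "HasGuarantor": 2, "HasCoapplicant": 3},
)
alice.validate()
```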
#### 2.1.1. Cognitive simplicity of preference scores
The user preferences proposed are highly beneficial for guiding the recourse generation process. Please note that in the absence of these preferences, the recourse procedure falls back to the default values set by a domain expert. Additionally, the users can be first presented with the default preferences, and asked to adjust as per their individual preferences. A simple user interface can help them interact with the system intuitively. For example, adjusting a feature score automatically adjusts the corresponding preference type scores.
### Proposed optimization
We depart from capturing a user's cost of feature action and instead obtain their preferences for each feature. We elicit the three forms of preferences detailed in the previous section and iteratively take steps in the action space. We propose the following optimization over the basic predefined steps based on the _user preferences_. Let us denote the inherent hardness of feature action \(\mathbf{r}_{i}\) for feature value \(\mathbf{x}_{i}\) using \(cost(\mathbf{r}_{i},\mathbf{x}_{i})\), which can be any cost function easily communicable to Alice. Here, \(cost(\mathbf{r}_{i}^{(t)},\mathbf{x}_{i})\) refers to a "universal" cost of taking an action \(\mathbf{r}_{i}^{(t)}\) for feature value \(\mathbf{x}_{i}\) at step \(t\). Note that this cost function differs from the \(userCost(\cdot,\cdot)\) function specified earlier: it captures the inherent difficulty of taking an action.
\[\max_{\mathbf{r}}\sum_{i\in F_{A}}\frac{\Gamma_{i}}{cost(\mathbf{r}_{i}, \mathbf{x}_{i})}\] (Type I) \[s.t.\ f(\mathbf{x}+\mathbf{r}) =+1\] \[\Gamma_{i} =0:\ \forall i\notin F_{A}\] (actionability) \[\Gamma_{j} =1:\ \forall j\in F_{cat}\] \[\mathbf{r}_{i} \in\Delta_{i}:\ i\in F_{A}\] (Type II) \[\mathbf{1}\{\mathbf{r}_{i}>0\} \geq\mathbf{1}\{\mathbf{r}_{j}>0\}:\mathcal{R}_{i}\geq\mathcal{R} _{j}\ \forall i,j\in F_{cat}\] (Type III)
The proposed method minimizes the cost of a recourse weighted by \(\Gamma_{i}\) for all actionable features. We discuss the details of our considerations of cost function in Section 3.1. The order preference of categorical feature actions can be constrained by restrictions while finding a recourse. The next section introduces UP-AR as a stochastic solution to the proposed optimization.
## 3. User Preferred Actionable Recourse (UP-AR)
Our proposed solution, User Preferred Actionable Recourse (UP-AR), consists of two stages. The first stage generates a candidate recourse by following a connected gradient-based iterative approach. The second stage then improves upon the _redundancy_ metric of the generated recourse for better actionability. The details of UP-AR are consolidated in Algorithm 1 and visualized in Figure 2.
### Stage 1: Stochastic gradient-based approach
Poyiadzi et al. (2019) identify a counterfactual by following a high-density connected path from the feature vector \(\mathbf{x}\). With a similar idea, we follow a connected path guided by the user's preferences to identify a feasible recourse. We propose incrementally updating the candidate action with a predefined step size to solve the optimization. At each step \(t\), a candidate intervention is generated, where any feature \(i\) is updated based on a Bernoulli trial \(I_{i}^{(t)}\) whose success probability is derived from the user preference scores and the cost of taking a
predefined step \(\delta_{i}^{(t)}\) using the following procedure:
\[I_{i}^{(t)}\sim Bernoulli\left(\sigma\left(z_{i}^{(t)}\right)\right) \tag{4}\] \[\text{where }\sigma\left(z_{i}^{(t)}\right)=\frac{\mathrm{e}^{z_{i}^{(t)}/\tau}}{\sum_{j\in F_{A}}\mathrm{e}^{z_{j}^{(t)}/\tau}},\quad z_{i}^{(t)}=\frac{\Gamma_{i}}{cost\left(\mathbf{r}_{i}^{(t)},\mathbf{x}_{i}\right)} \tag{5}\]
With precomputed costs for each step, _weighted inverse cost_ is computed for each feature, and these values are mapped to a probability distribution using a function like softmax. _Softmax_ gives a probabilistic interpretation \(P\left(I_{i}^{(t)}=1|z_{i}^{(t)}\right)=\sigma\left(z_{i}^{(t)}\right)\) by converting \(z_{i}^{(t)}\) scores into probabilities.
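As an illustration, a minimal NumPy sketch of this sampling step could look as follows; the function name and example values are ours, and the per-step costs are assumed to be precomputed.

```python
import numpy as np

def sample_feature_updates(gamma, step_costs, tau=0.25, rng=None):
    """One Bernoulli trial per actionable feature (Eqs. 4-5).

    gamma      : preference scores Gamma_i for the actionable features
    step_costs : cost of the next predefined step for each feature
    tau        : temperature controlling how often expensive features are chosen
    """
    rng = rng or np.random.default_rng()
    z = np.asarray(gamma, float) / np.asarray(step_costs, float)   # z_i = Gamma_i / cost_i
    scaled = z / tau
    scaled -= scaled.max()                                          # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()                   # temperature-scaled softmax
    return rng.random(len(probs)) < probs                           # I_i^(t)

# Alice prefers LoanDuration (0.8) over LoanAmount (0.2); equal step costs assumed
mask = sample_feature_updates(gamma=[0.8, 0.2], step_costs=[0.1, 0.1], tau=0.25)
```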
We leverage the idea of _log percentile shift_ from AR to determine the cost of action since it is easier to communicate with the users in terms of percentile shifts. Specifically, we follow the idea and formulation in (Srivastava et al., 2017) to define the cost:
\[cost\left(\mathbf{r}_{i},\mathbf{x}_{i}\right)=log\left(\frac{1-Q_{i}\left( \mathbf{x}_{i}+\mathbf{r}_{i}\right)}{1-Q_{i}\left(\mathbf{x}_{i}\right)}\right) \tag{6}\]
where \(Q_{i}\left(\mathbf{x}_{i}\right)\), the _percentile_ of feature \(i\) with value \(\mathbf{x}_{i}\), is the score below which \(Q_{i}\left(\mathbf{x}_{i}\right)\) percent of values fall in the frequency distribution of feature values in the target population.
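A small sketch of this cost, using an empirical CDF as a stand-in for \(Q_{i}\); the helper name, the clipping constant, and the synthetic population are our own assumptions.

```python
import numpy as np

def percentile_shift_cost(x_i, r_i, population_values):
    """Log percentile-shift cost of moving feature i from x_i to x_i + r_i (Eq. 6)."""
    pop = np.sort(np.asarray(population_values, dtype=float))
    def Q(v):  # empirical percentile of value v, kept strictly below 1 to avoid log(0)
        return min(np.searchsorted(pop, v, side="right") / len(pop), 1.0 - 1e-6)
    return np.log((1.0 - Q(x_i + r_i)) / (1.0 - Q(x_i)))

# e.g. reducing LoanDuration from 30 to 20 months against a synthetic population
durations = np.random.default_rng(0).integers(6, 72, size=1000)
print(percentile_shift_cost(30, -10, durations))
```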
We adapt and extend the idea that counterfactual explanations and adversarial examples (Srivastava et al., 2017) have a similar goal but with contrasting intentions (Srivastava et al., 2017). A popular approach to generating adversarial examples (Srivastava et al., 2017) is to use a gradient-based method. We employ the learning of adversarial example generation to determine the direction of feature modification in UP-AR: the Jacobian matrix is used to measure the local sensitivity of outputs with respect to each input feature. Consider that \(f:\mathbb{R}^{D}\rightarrow\mathbb{R}^{K}\) maps a \(D\)-dimensional feature vector to a \(K\)-dimensional vector, such that each of the partial derivatives exists. For a given \(\mathbf{x}=[\mathbf{x}_{1},\dots,\mathbf{x}_{i},\dots,\mathbf{x}_{D}]\) and \(f(\mathbf{x})=[f_{[1]}(\mathbf{x}),\dots,f_{[j]}(\mathbf{x}),\dots,f_{[K]}(\mathbf{x})]\), the Jacobian matrix of \(f\) is defined to be a \(K\times D\) matrix denoted by \(\mathbf{J}\), where each \((j,i)\) entry is \(\mathbf{J}_{j,i}=\frac{\partial f_{[j]}(\mathbf{x})}{\partial\mathbf{x}_{i}}\). For a neural network (NN) with at least one hidden layer, \(\mathbf{J}_{j,i}\) is obtained using the chain rule during backpropagation. For an NN with one hidden layer represented by _weights_ \(\{w\}\), we have:
\[\mathbf{J}_{j,i}=\frac{\partial f_{[j]}(\mathbf{x})}{\partial\mathbf{x}_{i}}=\sum_{l}\frac{\partial f_{[j]}(\mathbf{x})}{\partial a_{l}}\frac{\partial a_{l}}{\partial\mathbf{x}_{i}}\text{ where }a_{l}=\sum_{i}w_{l,i}\mathbf{x}_{i} \tag{7}\]
In Equation 7, \(a_{l}\) is the output (with possible activation) of hidden node \(l\) and \(w_{l,i}\) is the weight connecting input feature \(i\) to node \(l\). Notice line 4 in Algorithm 1, which _updates the candidate_ action for a feature \(i\) at step \(t\) as:
\[\mathbf{r}_{i}^{(t)}=\mathbf{r}_{i}^{(t-1)}+Sign\left(\mathbf{J}_{+1,i}^{(t) }\right)\cdot I_{i}^{(t)}\cdot\delta_{i}^{(t)} \tag{8}\]
Following the traditional notation of a binary classification problem, and with a slight abuse of notation that maps the class labels \(\{-1,+1\}\) to the corresponding output indices of \(f\), \(Sign\left(\mathbf{J}_{+1,i}^{(t)}\right)\) captures the direction of the feature change at step \(t\). This direction is iteratively calculated, and additional constraints such as non-increasing or non-decreasing features can be placed at this stage.
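A simple way to approximate this direction for a black-box scoring function is a central finite difference; the sketch below is our stand-in for the backpropagated Jacobian entry and assumes `score_fn` returns the model's score for the \(+1\) class.

```python
import numpy as np

def jacobian_sign(score_fn, x, i, eps=1e-4):
    """Approximate Sign(J_{+1,i}) at x via a central difference on feature i."""
    x_plus, x_minus = x.astype(float).copy(), x.astype(float).copy()
    x_plus[i] += eps
    x_minus[i] -= eps
    return np.sign(score_fn(x_plus) - score_fn(x_minus))

# toy linear score: the direction equals the sign of the corresponding weight
w = np.array([0.7, -1.2, 0.4])
print(jacobian_sign(lambda v: float(v @ w), np.zeros(3), i=1))   # -> -1.0
```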
```
0: Model \(f\), user feature vector \(\mathbf{x}\), cost function \(cost\left(\cdot,\cdot\right)\), step size \(\Delta_{i}:\forall i\in F_{A}\), maximum steps \(T\), action \(\mathbf{r}\) initialized to \(\mathbf{r}^{(0)}\), fixed \(\tau\), \(t=1\).
1:while\(t\leq T\) or \(f\left(\mathbf{x}+\mathbf{r}^{(t)}\right)\neq+1\)do
2:\(z_{i}^{(t)}=\frac{\Gamma_{i}}{cost\left(\mathbf{r}_{i}^{(t)},\mathbf{x}_{i}\right)}:\ \forall i\)
3:\(I_{i}^{(t)}\sim Bern(\sigma(z_{i}^{(t)}))\)\(:\ \forall i\), where \(\sigma(z_{i}^{(t)})=\frac{\mathrm{e}^{z_{i}^{(t)}/\tau}}{\sum_{j\in F_{A}}\mathrm{e}^{z_{j}^{(t)}/\tau}}\)
4:\(\mathbf{r}_{i}^{(t)}=\mathbf{r}_{i}^{(t-1)}+Sign\left(\mathbf{J}_{+1,i}^{(t) }\right)\cdot I_{i}^{(t)}\cdot\delta_{i}^{(t)}\)\(:\ \forall i\in F_{A}\)
5:\(t=t+1\)
6:Let \(\bar{t}\) be the smallest step such that \(f(\mathbf{x}+\mathbf{r}^{(\bar{t})})=+1\) and initialize \(t=\bar{t}\)
7:if\(\exists i\in F_{cat}:\mathbf{r}_{i}^{(t)}>0\)then
8:while\(f\left(\mathbf{x}+\bar{\mathbf{r}}^{(t)}\right)=+1\)do
9:\(\bar{\mathbf{r}}^{(t)}=\mathbf{r}^{(t)}\)
10:\(\bar{\mathbf{r}}_{i}^{(t)}=\mathbf{r}_{i}^{(\bar{t})}\)\(:\ \forall i\in F_{cat}\)
11:\(t=t-1\)
12:return\(\bar{\mathbf{r}}^{(t)}\) as action \(\mathbf{r}\)
```
**Algorithm 1** User Preferred Actionable Recourse (UP-AR)
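For illustration, the following is a compact NumPy sketch of Algorithm 1 under simplifying assumptions: Type II bounds and Type III ranking overrides are omitted, the update direction is obtained with a finite difference on a user-supplied score function, and all function and argument names are ours rather than the authors' implementation.

```python
import numpy as np

def up_ar(predict, score, x, gamma, steps, cost_fn, cat_idx, T=200, tau=0.5, seed=0):
    """Sketch of UP-AR: Stage 1 stochastic search followed by Stage 2 cost correction.

    predict : x -> {-1, +1} classifier decision          score   : x -> score of the +1 class
    gamma   : preference scores (0 for non-actionable)   steps   : step size delta_i per feature
    cost_fn : (cumulative action r_i, feature i) -> cost cat_idx : categorical feature indices
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    gamma, steps = np.asarray(gamma, float), np.asarray(steps, float)
    actionable = steps != 0
    r, history = np.zeros_like(x), []

    for _ in range(T):                                   # ---- Stage 1 ----
        if predict(x + r) == +1:
            break
        cost = np.array([max(cost_fn(r[i] + steps[i], i), 1e-6) for i in range(len(x))])
        z = np.where(actionable, gamma / cost, -np.inf)
        e = np.exp((z - z[actionable].max()) / tau)      # temperature-scaled softmax
        p = np.where(actionable, e / e[actionable].sum(), 0.0)
        fire = rng.random(len(x)) < p                    # Bernoulli trials I_i^(t)
        for i in np.flatnonzero(fire):
            d = np.eye(len(x))[i] * 1e-4                 # finite-difference direction Sign(J_{+1,i})
            r[i] += np.sign(score(x + r + d) - score(x + r - d)) * steps[i]
        history.append(r.copy())

    if predict(x + r) != +1:
        return None                                      # no recourse within T steps

    # ---- Stage 2: copy the final categorical actions into earlier candidates and retrace ----
    best = r.copy()
    for past in history:                                 # earliest candidate that is still valid
        candidate = past.copy()
        candidate[cat_idx] = r[cat_idx]
        if predict(x + candidate) == +1:
            best = candidate
            break
    return best
```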
#### 3.1.1. Calibrating frequency of categorical actions
We employ the _temperature scaling_ parameter \(\tau\) (Gord et al., 2017), which appears in Equation 5, to calibrate UP-AR's recourse generation cost. Updates on categorical features with fixed step sizes are expensive, especially for binary categorical values. Hence, tuning the frequency of categorical suggestions can significantly impact the overall cost of a recourse. \(\tau\) controls the frequency with which categorical actions are suggested. Additionally, if a user prefers updates on categorical features over continuous features, UP-AR has the flexibility to address this with a smaller \(\tau\).
To study the effect of \(\tau\) on overall cost, we train a Logistic Regression (LR) model on a processed version of _German_(Bord et al., 2017) dataset and
Figure 2. Framework of UP-AR. Successful recourse candidates; \(\mathbf{r}^{(\cdot)},\ \bar{\mathbf{r}}^{(\cdot)}\) are colored in pink.
generate recourses for the 155 individuals who were denied credit. The cost gradually decreases with decreasing \(\tau\), since the marginal probability of suggesting a categorical feature change is diminished; the corresponding experiment is deferred to the Appendix. Hence, without affecting the success rate of recourse generation, the overall cost of generating recourses can be brought down by decreasing \(\tau\). In simple terms, with a higher \(\tau\), UP-AR frequently suggests recourses with expensive categorical actions. We note that \(\tau\) can also be informed by a user upon seeing an initial recourse. After the strategic generation of an intervention, we implement a cost correction to improve upon the potential redundancy of actions in a recourse option.
### Stage 2: Redundancy & Cost Correction (CC)
In our experiments, we observe that once an expensive action is recommended for a categorical feature, some of the previous action steps might become redundant. Consider an LR model trained on the processed _german_ dataset. Let \(F_{A}=\{LoanDuration,LoanAmount,HasGuarantor\}\) out of all the 26 features, where _HasGuarantor_ is a binary feature which represents the user's ability to get a guarantor for the loan. Stage 1 takes several steps over _LoanAmount_ and _LoanDuration_ before recommending to update _HasGuarantor_. These steps are based on the feature action probability from Equation 5. Since categorical feature updates are expensive and occur with relatively low probability, Stage 1 finds a low-cost recourse by suggesting low-cost steps more frequently in comparison with high-cost steps.
Once an update to a categorical feature is recommended, some of the previous low-cost steps may be redundant, which can be rectified by tracing back previous continuous steps. Consider a scenario such that \(\exists i\in F_{cat}:\mathbf{r}_{i}^{(T)}>0\) for a recourse obtained after \(T\) steps in Stage 1. The CC procedure updates all the intermediary recourse candidates to reflect the categorical changes i.e., \(\forall i\in F_{cat}:\mathbf{r}_{i}^{(T)}>0\), we update \(\mathbf{r}_{i}^{(t)}=\mathbf{r}_{i}^{(T)}:\forall t\in\{1,2,\dots,T-1\}\) to obtain \(\mathbf{\tilde{r}}^{(t)}\). We then perform a linear retracing procedure to return \(\mathbf{\tilde{r}}^{(t)}\) such that \(f\left(\mathbf{x}+\mathbf{\tilde{r}}^{(t)}\right)=+1\) for the smallest \(t\).
## 4. Discussion and Analysis
In this section, we analyze the user preference performance of UP-AR. For simplicity, a user understands cost in terms of the log percentile shift from her initial feature vector described in Section 3. Let \(\hat{\Gamma}_{i}\) be the observed fractional cost for feature \(i\), formally defined in Equation 11. Any cost function can be plugged into UP-AR with no restrictions. A user prefers to have a \(\Gamma_{i}\) fraction of the total desired percentile shift come from feature \(i\). Consider \(F_{A}=\{LoanDuration,LoanAmount\}\) and let the corresponding user scores provided by all the adversely affected individuals be \(\Gamma=\{0.8,0.2\}\), i.e., "Denied loan applicants prefer reducing _LoanDuration_ over _LoanAmount_ by \(8:2\)." Figure 3 shows the frequency plot of the cost of feature _LoanDuration_ as a fraction of the total cost incurred from _LoanDuration_ and _LoanAmount_, i.e., the \(y\)-axis represents \(\hat{\Gamma}_{i}\). Figure 4 further shows the fractional cost of feature _DebtRatio_ for recourses obtained for a NN-based model trained on the _Give Me Some Credit (GMSC)_ dataset. These experiments signify the adaptability of UP-AR to user preferences and provide evidence that the distribution of \(\hat{\Gamma}_{i}\) is centered around \(\Gamma_{i}\).
**Lemma 4.1**.: _Consider a UP-AR identified recourse \(\mathbf{r}\) for an individual \(\mathbf{x}\). If \(C_{i,min}^{(T^{*})}\) and \(C_{i,max}^{(T^{*})}\) represent the minimum and maximum cost of any step for feature \(i\) until step \(T^{*}\), then:_
\[\mathbb{E}\left[\mathit{cost}\left(\mathbf{r}_{i},\mathbf{x}_{i}\right)\right]\leq T^{*}\sigma\left(\frac{\Gamma_{i}}{C_{i,min}^{(T^{*})}}\right)C_{i,max}^{(T^{*})}. \tag{9}\]
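The bound can be motivated with a short argument (our reconstruction, assuming the total feature cost decomposes over the steps taken, \(\sigma\) is non-decreasing in \(z_{i}^{(t)}\), and every step cost for feature \(i\) lies in \([C_{i,min}^{(T^{*})},C_{i,max}^{(T^{*})}]\)): since \(\Pr(I_{i}^{(t)}=1)=\sigma\big(\Gamma_{i}/cost(\mathbf{r}_{i}^{(t)},\mathbf{x}_{i})\big)\leq\sigma\big(\Gamma_{i}/C_{i,min}^{(T^{*})}\big)\), we have

\[\mathbb{E}\left[cost\left(\mathbf{r}_{i},\mathbf{x}_{i}\right)\right]=\sum_{t=1}^{T^{*}}\Pr\left(I_{i}^{(t)}=1\right)cost\left(\delta_{i}^{(t)},\mathbf{x}_{i}\right)\leq\sum_{t=1}^{T^{*}}\sigma\left(\frac{\Gamma_{i}}{C_{i,min}^{(T^{*})}}\right)C_{i,max}^{(T^{*})}=T^{*}\sigma\left(\frac{\Gamma_{i}}{C_{i,min}^{(T^{*})}}\right)C_{i,max}^{(T^{*})}.\]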
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Features to** & **Current** & **Stage 1** & **Stage 2** \\
**change** & **values** & **values** & **values** \\ \hline LoanDuration & 18 & 8 & 12 \\ LoanAmount & $1940 & $1040 & $1540 \\ HasGuarantor & 0 & 1 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Redundancy corrected recourse for a hypothetical individual.
Figure 4. GS and UP-AR’s distribution of \(\hat{\Gamma}_{DebtRatio}\) for a _Neural Network_ model trained on _GMSC_.
Figure 3. AR and UP-AR’s distribution of \(\hat{\Gamma}_{LoanDuration}\) for a _Logistic Regression_ model trained on _German_.
Lemma 4.1 implies that the expected cost \(\mathbb{E}\left[cost\left(\mathbf{r}_{i},\mathbf{x}_{i}\right)\right]\), specifically for a continuous feature action, is positively correlated with the probabilistic interpretation of the user preference scores. Hence \(\mathbf{r}\) satisfies the user's critical Type I constraints in expectation. Recall that Type II and III constraints are also applied at each step \(t\). Lemma 4.1 signifies that UP-AR adheres to user preferences and thereby increases the actionability of a suggested recourse.
Corollary 4.2 ().: _For UP-AR with a linear \(\sigma\left(\cdot\right)\), predefined steps with equal costs and cost \(\left(\mathbf{r},\mathbf{x}\right)=\sum_{i\in F_{A}}cost\left(\mathbf{r}_{i}, \mathbf{x}_{i}\right)\), total expected cost after \(T^{+}\) steps is:_
\[\mathbb{E}\left[\left.cost\left(\mathbf{r},\mathbf{x}\right)\right]\leq T^{ +}\sum_{i\in F_{A}}\sigma\left(\Gamma_{i}\right)\right.. \tag{10}\]
Corollary 4.2 states that with strategic selection of \(\sigma\left(\cdot\right)\), \(\delta^{\left(\cdot\right)}\) and \(cost\left(\cdot,\cdot\right)\), UP-AR can also tune the total cost of suggested actions. In the next section, we will compare multiple recourses based on individual user preferences for a randomly selected adversely affected individual.
### Case study of individuals with similar features but disparate preferences
Consider an LR model trained on the _german_ dataset, and let Alice, Bob, and Chris be three adversely affected individuals. \(F_{A}=\left\{LoanDuration, LoanAmount, HasGuarantor\right\}\) and corresponding user preferences are provided by the users. In Table 3, we consolidate the corresponding recourses generated for the specified disparate sets of preferences.
Table 3 highlights the ability of UP-AR to generate a variety of user-preferred recourses based on individual preferences, whereas AR always provides the same low-cost recourse for all individuals. The customizability of feature actions for individual users can be seen in the table. When the Type I score for _LoanAmount_ is 0.8, UP-AR prefers decreasing the loan amount over the loan duration. Hence, the loan amount is much lower for Chris than for Alice and Bob.
## 5. Empirical evaluation
In this section, we demonstrate empirically: 1) that UP-AR respects \(\Gamma_{i}\)-fractional user preferences at the population level, and 2) that UP-AR also performs favorably on traditional evaluation metrics drawn from CARLA (Luo et al., 2018). We used the native CARLA catalog for the Give Me Some Credit (GMSC) (Kumar et al., 2017), Adult Income (Adult) (Kumar et al., 2017) and Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) (Bianchi et al., 2017) data sets as well as pre-trained models (both the **Neural Network (NN)** and **Logistic Regression** (LR)). The NN has three hidden layers, and the LR is a single input layer leading to a Softmax function. Although AR is proposed for _linear models_, it can be extended to _nonlinear models_ by the local linear decision boundary approximation method LIME (Luo et al., 2018) (referred to as AR-LIME).
_PERFORMANCE METRICS:_ For UP-AR, we evaluate:
1. _Success Rate (Succ. Rate)_: The percentage of adversely affected individuals for whom recourse was found.
2. _Average Time Taken (Avg.Tim.)_: The average time (in seconds) to generate recourse for a single individual.
3. _Constraint Violations (Con. Vio.)_: The average number of non-actionable features modified.
4. _Redundancy (Red.)_: A metric that tracks superfluous feature changes. For each successful recourse, each changed feature is flipped back to its original value on a univariate basis. The redundancy of a recourse is the number of such flips that do not change the model's classification decision.
5. _Proximity (Pro.)_: The normalized \(l_{2}\) distance of recourse to its original point.
6. _Sparsity (Spa.)_: The average number of features modified.
We provide comparative results for UP-AR against state-of-the-art counterfactual/recourse generation techniques such as GS, Wachter, AR-(LIME), CCHVAE and FACE. These methods were selected based on their popularity and their representation of both independence- and dependence-based methods, as defined in CARLA. In addition to the traditional performance metrics, we also measure the _Preference-Root mean squared error (pRMSE)_ between the user preference score and the fractional cost of the suggested recourses. We calculate \(pRMSE_{i}\) for a randomly selected continuous-valued feature \(i\) using:
\[\hat{\Gamma}_{i}^{(j)}=\frac{cost\left(\mathbf{r}_{i},\mathbf{x}_{i}\right)}{\sum_{k\in F_{con}}cost\left(\mathbf{r}_{k},\mathbf{x}_{k}\right)} \tag{11}\] \[pRMSE_{i}=\sqrt{\frac{1}{n}\sum_{j=1}^{n}\left(\hat{\Gamma}_{i}^{(j)}-\Gamma_{i}^{(j)}\right)^{2}} \tag{12}\]
Here \(\Gamma_{i}^{(j)}\) and \(\hat{\Gamma}_{i}^{(j)}\) are user provided and observed preference scores of feature \(i\) for an individual \(j\). In Table 4, we summarize \(pRMSE\), which is the average error across continuous features such that:
\[pRMSE=\frac{1}{|F_{con}|}\sum_{i\in F_{con}}pRMSE_{i}. \tag{13}\]
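For concreteness, a small sketch of how Equations 11-13 can be computed from logged recourse costs; the function and array names are ours.

```python
import numpy as np

def prmse(pref_scores, recourse_costs, con_idx):
    """Average pRMSE over continuous features (Eqs. 11-13).

    pref_scores    : (n, d) stated preference scores Gamma per individual
    recourse_costs : (n, d) per-feature costs of the suggested recourses
    con_idx        : indices of the continuous actionable features
    """
    costs = recourse_costs[:, con_idx]
    gamma_hat = costs / costs.sum(axis=1, keepdims=True)      # Eq. 11
    sq_err = (gamma_hat - pref_scores[:, con_idx]) ** 2
    prmse_i = np.sqrt(sq_err.mean(axis=0))                    # Eq. 12, one value per feature
    return prmse_i.mean()                                     # Eq. 13

prefs = np.tile([0.8, 0.2], (3, 1))                           # three users, identical preferences
costs = np.array([[0.7, 0.3], [0.9, 0.1], [0.8, 0.2]])
print(prmse(prefs, costs, con_idx=[0, 1]))
```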
_DATASETS._ We train an LR model (using _sklearn's linear_model_ module) on the processed version of the german (Bianchi et al., 2017) credit dataset. We replicate Ustun et al. (Ustun et al., 2017)'s model training and recourse generation on german. The dataset contains 1000 data points with 26 features for a loan application. The model decides if an applicant's credit request should be approved or not. Consider \(F_{con}=\left\{LoanDuration, LoanAmount\right\}\), and \(F_{cat}=\left\{CriticalAccountOrLoansElsewhere, HasGuarantor, HasCoapplicant\right\}\). Let the user scores for \(F_{con}\) be \(\Gamma=\left\{0.8,0.2\right\}\) and the ranking for \(F_{cat}\) be \(\left\{3,1,2\right\}\) for all the denied individuals. For this experiment, we set \(\tau^{-1}=4\). Out of 155 individuals with denied credit, AR and UP-AR provided recourses to 135 individuals.
**Cost Correction:** Out of all the denied individuals for whom categorical actions were suggested, an average of \(\sim 5400\) in _LoanAmount_ was recovered by cost correction.
For the following datasets, for traditional metrics, user preferences were set to be uniform for all actionable features to not bias the results to one feature preference over another:
1. **GMSC:** The data set from the 2011 Kaggle competition is a credit underwriting dataset with 11 features where the target is the presence of delinquency. Here, we measure what feature changes would lower the likelihood of delinquency. We again used the default protected features (_age_ and _number of dependents_). The baseline accuracy for the NN model is 81%, while the baseline accuracy for the LR is 76%.
2. **Adult Income:** This dataset originates from 1994 census database with 14 attributes. The model decides whether an individual's income is higher than \(50,000\) USD/year. The baseline accuracy for the NN model is 85%, while the baseline accuracy for the LR is 83%. Our experiment is conducted on a sample of 1000 data points.
3. **COMPAS:** The data set consists of 7 features describing offenders and a target representing predictions. Here, we measure what feature changes would change an automated recidivism prediction.
The baseline accuracy for NN is 78%, while baseline accuracy for LR is 71%.
Performance Analysis of UP-AR. We find UP-AR holistically performs favorably compared to its counterparts. Critically, it respects feature constraints (which we believe is fundamental to actionable recourse) while maintaining significantly low redundancy and sparsity. This indicates that it tends to change fewer features, and only those that are necessary. Its speed makes it tractable for real-world use, while its proximity values show that it recovers relatively low-cost recourses. These results highlight the promise of UP-AR as a performant, low-cost option for calculating recourse when user preferences are paramount. UP-AR shows consistent improvements over all the performance metrics. The occasional lower success rate for a NN model is attributed to its 0 constraint violations.
_pRMSE_: We analyze user preference performance in terms of _pRMSE_. From Table 4, we observe that UP-AR's _pRMSE_ is consistently better than that of the state-of-the-art recourse methods. The corresponding experimental details and the visual representation of the distribution of _pRMSE_ are deferred to Appendix 5.1.
### Random user preference study
We performed an experiment with increasing step sizes on the _German_ dataset. We observed that, with increasing step sizes, the _pRMSE_ of UP-AR increased from 0.09 to 0.13, whereas it was consistent for AR.
In the next experiment, we randomly choose user preference for _LoanDuration_ from \([0.4,0.5,0.6,0.7,0.8]\). The rest of the experimental setup is identical to the setup discussed in Section 4. In this experiment, we observe _pRMSE_ with non-universal user preference for adversely affected individuals. Here the average _pRMSE_ of both
\begin{table}
\begin{tabular}{l l r r r r r r r r r r r r} \hline \hline & & \multicolumn{6}{c}{Neural Network} & \multicolumn{6}{c}{Logistic Regression} \\ \cline{3-13} Data. & Recourse & Succ. & pRMSE & Avg & Con. & Red. & Pro. & Spa. & Succ. & pRMSE & Avg & Con. & Red. & Pro. & Spa. \\ & Method & Rate & Tim. & Vio. & & & Rate & Tim. & Vio. & & & & & \\ \hline \multirow{6}{*}{GMSC} & GS & 0.75 & 0.16 & 0.02 & 0.00 & 6.95 & 1.01 & 8.89 & 0.62 & 0.18 & 0.03 & 0.00 & 4.08 & 1.39 & 8.99 \\ & Wachter & 1.00 & 0.18 & 0.02 & 1.49 & 6.84 & 1.08 & 8.46 & 1.00 & 0.17 & 0.03 & 1.23 & 3.51 & 1.42 & 7.18 \\ & AR\({}_{(\times 10^{2})}\) & 0.03 & 0.17 & 0.45 & 0.00 & 0.00 & 0.17 & 1.72 & 0.17 & 0.17 & 0.73 & 0.00 & 0.00 & 0.93 & 1.91 \\ & CCHVAE & 1.00 & 0.18 & 1.05 & 2.0 & 9.99 & 1.15 & 10.1 & 1.00 & 0.18 & 1.37 & 2.00 & 8.64 & 2.05 & 11.0 \\ & FACE & 1.00 & 0.17 & 8.05 & 1.57 & 6.65 & 1.20 & 6.69 & 1.00 & 0.16 & 11.9 & 1.65 & 7.47 & 2.30 & 8.45 \\ & **UP-AR** & 0.94 & 0.07 & 0.08 & 0.00 & 1.30 & 0.49 & 3.22 & 1.00 & 0.07 & 0.12 & 0.00 & 1.47 & 0.68 & 3.92 \\ \hline \multirow{6}{*}{Adult} & GS & 0.84 & 0.10 & 0.03 & 0.00 & 2.86 & 1.30 & 5.09 & 0.84 & 0.10 & 0.04 & 0.00 & 1.76 & 2.05 & 5.85 \\ & Wachter & 0.55 & 0.10 & 0.04 & 1.44 & 3.05 & 0.74 & 4.90 & 1.00 & 0.11 & 0.10 & 1.68 & 0.90 & 1.44 & 5.81 \\ & AR\({}_{(\times 10^{2})}\) & 0.42 & 0.10 & 9.20 & 0.00 & 0.00 & 2.10 & 2.54 & 0.76 & 0.10 & 7.37 & 0.00 & 0.03 & 2.10 & 2.31 \\ & CCHVAE & 0.84 & 0.11 & 0.77 & 4.47 & 5.83 & 3.95 & 9.40 & 0.84 & 0.10 & 1.08 & 4.22 & 6.85 & 3.96 & 9.45 \\ & FACE & 1.00 & 0.10 & 6.78 & 4.58 & 7.54 & 4.11 & 7.91 & 1.00 & 0.10 & 8.37 & 4.53 & 5.91 & 4.28 & 7.81 \\ & **UP-AR** & 0.82 & 0.10 & 0.76 & 0.00 & 0.78 & 1.77 & 2.78 & 0.82 & 0.05 & 0.67 & 0.00 & 0.55 & 1.78 & 2.88 \\ \hline \multirow{6}{*}{COMPAS} & GS & 1.00 & 0.15 & 0.03 & 0.00 & 1.09 & 4.47 & 3.35 & 1.00 & 0.14 & 0.04 & 0.00 & 0.34 & 1.12 & 3.98 \\ & Wachter & 1.00 & 0.14 & 0.05 & 1.00 & 1.61 & 0.56 & 4.35 & 1.00 & 0.14 & 0.04 & 1.00 & 0.85 & 1.66 & 4.83 \\ \cline{1-1} & AR\({}_{(\times 10^{2})}\) & 0.65 & 0.13 & 0.20 & 0.00 & 0.00 & 0.78 & 0.90 & 0.52 & 0.15 & 0.24 & 0.00 & 0.00 & 1.45 & 1.57 \\ \cline{1-1} & CCHVAE & 1.00 & 0.14 & 5.09 & 2.27 & 4.31 & 1.70 & 4.91 & 1.00 & 0.14 & 0.02 & 1.62 & 2.70 & 1.74 & 4.92 \\ \cline{1-1} & FACE & 1.00 & 0.15 & 0.37 & 2.39 & 3.96 & 2.35 & 4.72 & 1.00 & 0.15 & 0.40 & 2.47 & 4.38 & 2.46 & 4.81 \\ \cline{1-1} & **UP-AR** & 0.92 & 0.08 & 0.04 & 0.00 & 0.60 & 0.63 & 1.82 & 1.00 & 0.10 & 0.05 & 0.00 & 0.81 & 0.82 & 2.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Summary of performance evaluation of UP-AR. Top performers are highlighted in green.
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & & & \multicolumn{2}{c}{Alice} & \multicolumn{2}{c}{Bob} & \multicolumn{2}{c}{Chris} \\ \cline{3-10} Features to & Current & AR & User & UP-AR & User & UP-AR & User & UP-AR \\ change & values & values & Pref & values & Pref & values & Pref & values \\ \hline LoanDuration & 30 & 25 & 0.8 & 20 & 0.8 & 10 & 0.2 & 27 \\ LoanAmount & $8072 & $5669 & 0.2 & $7372 & 0.2 & $6472 & 0.8 & $5272 \\ HasGuarantor & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Recourses generated by UP-AR for similar individuals with a variety of preferences.
_LoanDuration_ and _LoanAmount_ for UP-AR is \(0.19\), whereas for AR it is \(0.34\).
Further, using the CARLA package, we generated recourses for a set of \(1000\) individuals, and \(\Gamma\) for two continuous features was randomly selected from \([0.3,0.6,0.9]\). Figure 7 provides a visual analysis of the distribution of average \(pRMSE\) using violin plots. The experiments were performed on the \(3\) datasets discussed in Section 5 for both the LR and NN models. For the _GMSC_ dataset, \(F_{con}=\{\)_DebtRatio_, _MonthlyIncome_\(\}\) and \(F_{A}=\{\)_RevolvingUtilizationOfUnsecuredLines_, _NumberOfTime30-59DaysPastDueNotWorse_, _DebtRatio_, _MonthlyIncome_, _NumberOfOpenCreditLinesAndLoans_, _NumberOfTime60-89DaysPastDueNotWorse_\(\}\). For the _COMPAS_ dataset, \(F_{con}=\{\)_priors-count_, _length-of-stay_\(\}\) and \(F_{A}=\{\)_two-year-recid_, _priors-count_, _length-of-stay_\(\}\). For the _Adult_ dataset, \(F_{con}=\{\)_education-num_, _capital-gain_\(\}\) and \(F_{A}=\{\)_education-num_, _capital-gain_, _capital-loss_, _hours-per-week_, _workclass-Non-Private_, _workclass-Private_, _marital-status-Married_, _marital-status-Non-Married_, _occupation-Managerial-Specialist_, _occupation-Other_\(\}\).
With these experiments we conclude that UP-AR's \(\hat{\Gamma}\) deviation from the user's \(\Gamma\) is consistently lower than the existing recourse generation methodologies. We observe that AR is unaffected by the varying user preference due to the fact that AR and other state-of-the-art recourse methodologies lack the capability of capturing such idiosyncrasies. On the other hand, UP-AR is driven by those preferences and has significantly better \(pRMSE\) in comparison to AR.
### Cost Correction analysis
In Table 5 we explore the effect of UP-AR's cost correction procedure on the Adult and COMPAS datasets. We do not include the GMSC dataset, as it does not contain binary features and therefore does not utilize the cost correction procedure. In Table 5 we show the number of factuals, the percentage of factuals for which recourse was found, the percentage of found recourses that contained at least one binary action, the percentage of found recourses that underwent cost correction, the average percentage of steps saved by the cost correction procedure, and the average percentage of cost savings, measured as the percent reduction in continuous cost (\(l_{2}\) distance) between a factual and its recourse before and after the cost-correction procedure.
## 6. Concluding Remarks
In this study, we propose to capture different forms of user preferences and propose an optimization function to generate actionable recourse adhering to such constraints. We further provide an approach to generate a connected (Hendle et al., 2017) recourse guided by the user. We show how UP-AR adheres to soft constraints by evaluating user satisfaction in terms of the fractional cost ratio. We emphasize the need to capture various user preferences and to communicate with the user in a comprehensible form. This work motivates further research on how truthful reporting of preferences can help improve overall user satisfaction.
## 7. User Acceptance Survey
We surveyed 40 random students and employees from a mailing list. The goal of this survey was to establish whether people preferred to provide specific preferences over other mechanisms. The survey included one question with four options as follows:
_If you are denied a loan application, what do you expect from the bank to get your loan approved?_
1. _Single list of suggestions to your profile. Ex: (increase income by 100$ & reduce loan duration by 1 year)_
2. _A set with multiple lists of suggestions to your profile. Ex: (i) increase income by 100$ and reduce loan duration by 1 year OR ii) increase income by 500$ OR iii) reduce loan duration by 3 years OR iv) bring a co-applicant)_
3. _Influence bank's suggestions by providing preferential scores for actions you can take. Ex: (preferring to increase loan duration more than loan amount by 8:2, or preferring to bring a guarantor before a co-applicant)_
4. _Any other form of preferences_
Every individual in the survey was asked to select one of the four choices provided. In this survey, we found that a majority of 60% of individuals preferred influencing the bank's decision by providing preference scores for individual features, followed by 30% of individuals who wanted multiple recourses from the bank. The remaining 10% of individuals preferred a single recourse or some other form of preference.
## Acknowledgments
This work is partially supported by the National Science Foundation (NSF) under grants IIS-2143895 and IIS-2040800, and CCF-2023495.
|
2306.12181 | Feature Interactions Reveal Linguistic Structure in Language Models | We study feature interactions in the context of feature attribution methods
for post-hoc interpretability. In interpretability research, getting to grips
with feature interactions is increasingly recognised as an important challenge,
because interacting features are key to the success of neural networks. Feature
interactions allow a model to build up hierarchical representations for its
input, and might provide an ideal starting point for the investigation into
linguistic structure in language models. However, uncovering the exact role
that these interactions play is also difficult, and a diverse range of
interaction attribution methods has been proposed. In this paper, we focus on
the question which of these methods most faithfully reflects the inner workings
of the target models. We work out a grey box methodology, in which we train
models to perfection on a formal language classification task, using PCFGs. We
show that under specific configurations, some methods are indeed able to
uncover the grammatical rules acquired by a model. Based on these findings we
extend our evaluation to a case study on language models, providing novel
insights into the linguistic structure that these models have acquired. | Jaap Jumelet, Willem Zuidema | 2023-06-21T11:24:41Z | http://arxiv.org/abs/2306.12181v1 | # Feature Interactions Reveal Linguistic Structure in Language Models
###### Abstract
We study _feature interactions_ in the context of _feature attribution_ methods for post-hoc interpretability. In interpretability research, getting to grips with feature interactions is increasingly recognised as an important challenge, because interacting features are key to the success of neural networks. Feature interactions allow a model to build up hierarchical representations for its input, and might provide an ideal starting point for the investigation into linguistic structure in language models. However, uncovering the exact role that these interactions play is also difficult, and a diverse range of interaction attribution methods has been proposed. In this paper, we focus on the question which of these methods most _faithfully_ reflects the inner workings of the target models. We work out a _grey box_ methodology, in which we train models to perfection on a formal language classification task, using PCFGs. We show that under specific configurations, some methods are indeed able to uncover the grammatical rules acquired by a model. Based on these findings we extend our evaluation to a case study on language models, providing novel insights into the linguistic structure that these models have acquired.1
Footnote 1: All code and data is available here: [https://github.com/jumelet/fidam-eval](https://github.com/jumelet/fidam-eval)
## 1 Introduction
Feature attribution methods (FAMs) are a popular family of tools for explaining the behaviour of deep learning models, by explaining a prediction in terms of contributions of individual features Ribeiro et al. (2016); Lundberg and Lee (2017). There are many such methods proposed, and mathematical results (such as axiomatic approaches based on game theory) and theoretical frameworks (such as Covert et al. (2021)'s 'Explaining by Removing') are starting to offer a good understanding of how different methods relate to one another.
However, there are also some important shortcomings. Perhaps most importantly, popular FAMs mostly ignore the existence of interactions between the effects of features on the prediction. This is problematic, because **Feature Interactions** are widely seen as a major factor in the success of neural networks Goodfellow et al. (2016). This is all the more important in domains such as language and music processing, because feature interactions allow neural networks to model hierarchical representations of their input, which is considered a key design feature of language and music. To address these shortcomings, there is now an emerging literature on **feature interaction detection and attribution methods** (FIDAMs) that explain model predictions in terms of interacting features Tsang et al. (2020); Janizek et al. (2021).
However, assessing the faithfulness of FIDAMs is even more challenging than assessing the faithfulness of feature attribution methods more generally Jacovi and Goldberg (2021). In this paper, we present a systematic framework to characterise FIDAMs, and derive several new FIDAMs based on that framework. We then proceed with creating an evaluation pipeline that measures a FIDAM's ability to recover the structural rules for which we have good evidence that they play an important role in the target model's performance (Figure 1). We first test this on a set of small-scale formal language tasks, that provide stronger faithfulness guarantees. Finally, we present a case study of a large language model on the CoLA task for linguistic acceptability.
We find that the performance of FIDAMs is very variable, and that the performance on the small-scale formal language tasks may not be predictive of the performance of methods on the large-scale natural language task. This is an illustration of what we call the **Attribution Generalisation problem**. We argue that this problem remains a key open problem in the study of explanation methods in general.
## 2 Related Work: Assessing Faithfulness
In this section we discuss related work on assessing the faithfulness of feature attribution methods (FAMs). A model explanation ideally provides better insights into model behaviour. However, it is important that an explanation is faithful to the reasoning of the model, and not merely plausible to a researcher. Unfortunately, attribution models can yield vastly different outcomes Neely et al. (2022).
Defining a notion of faithfulness itself is an ongoing debate, and it has been argued that we should not be aiming for a binary notion, but a graded one instead Jacovi and Goldberg (2021). To this end, various methodologies have been proposed to evaluate the faithfulness of explanation methods.
One research direction introduces metrics to evaluate faithfulness by quantifying the impact of features that were deemed to contribute the most by an attribution method. Hooker et al. (2019) does this by _retraining_ a model on data from which the most contributing features have been removed. DeYoung et al. (2020) provide a more direct measure, by quantifying changes in model predictions when only a subset of the most contributing features is fed to model. Atanasova et al. (2020) build on this notion, introducing a range of diagnostic metrics that capture various aspects of explanation quality including faithfulness, human rationale agreement, and explanation consistency. Jain et al. (2020) ensure and evaluate faithfulness by only allowing a model access to the set of features that were deemed important by the explanation method, which has also been shown to improve model robustness Wiegreffe et al. (2021); Ross et al. (2022).
Another line of work modifies the training data in such a way that we obtain guarantees of certain features the model must be paying attention to when making a prediction: e.g. by shuffling test data such that only part of the input resembles the statistics from the train set Porner et al. (2018), or by explicitly adding exploitable heuristics in the train set Bastings et al. (2022); Adebayo et al. (2022). These two approaches could be characterised as _grey box_ models: we adapt the data in such a way that we gain a degree of confidence what cues the model must be relying on, without having a full understanding of the model's internal reasoning. A _glass box_ model, on the other hand, is a model whose behaviour is fully understood: it's not derived by training a model on a task, but hand-crafted. Hao (2020) utilises such models to evaluate FAMs on formal language tasks, providing more robust guarantees on model behaviour.
Our own approach is related to the first line of research, making use of _grey box_ models. Instead of evaluating FAMs, we evaluate FIDAMs, which provide more comprehensive insights into model reasoning. Deployment of such methods within NLP has been fairly limited, and as such evaluating their faithfulness in a language context has been an underexplored research topic.
## 3 A Framework for Characterising FIDAMs
Feature attribution methods typically decompose a model prediction into a sum of feature contributions Sundararajan et al. (2017); Lundberg and Lee (2017). A large contribution then indicates that this feature played an important role in a model's prediction. Although feature attributions can provide meaningful insights into the inner model dynamics, they paint a fairly limited picture of the model
Figure 1: We generate a corpus based on a PCFG, and create negative examples by corrupting the generated corpus. Next, we train a neural model to predict whether a string is well-formed, forcing the model to obtain a comprehensive understanding of the rules of the language. Then, we extract the internal interactions by using FIDAMs described in §4, allowing us to directly evaluate the grammatical knowledge of the neural model.
behaviour. Most importantly, **interactions** between features are lumped together, making it impossible to discern whether a large contribution of a feature stemmed from that feature alone, or from its interaction with neighbouring features. To address this, multiple methods have been proposed that decompose a model prediction into a sum of feature interactions, based on similar mathematical formalisms as those of feature attributions.
Notation. A neural network is represented as a single function \(f\). The input to \(f\) is denoted as \(\mathbf{x}\), which consists of \(N\) input features. A partial input \(\mathbf{x}_{S}\) only consists of input features \(S\subseteq N\). A value function \(v(\mathbf{x}_{S})\) quantifies the model output on the partial input \(\mathbf{x}_{S}\). Padding the missing features in \(\mathbf{x}_{S}\) with replacement features \(\mathbf{x}^{\prime}_{\backslash S}\) is denoted as \(\mathbf{x}_{S}\cup\mathbf{x}^{\prime}_{\backslash S}\). The attribution value of feature \(i\) is denoted as \(\phi_{i}\), and the interaction effect of a set of features \(\mathcal{I}\) is denoted as \(\Gamma_{\mathcal{I}}\).
Attribution Dimensions. Attribution methods can generally be characterised along two dimensions Covert et al. (2021): 1) how the method deals with feature removal, and 2) how the impact of removing a feature is quantified. FIDAMs are built on the same principles as FAMs, and can be categorised along the same two dimensions. By discerning these two dimensions we can separately evaluate their impact on the faithfulness of the attribution method. Furthermore, we can combine feature removal procedures with influence quantification methods in order to obtain novel attribution methods, an observation that has also been made in the context of FIDAMs by Jiang and Steinert-Threlkeld (2023), who, concurrent to our work, provide a general framework for characterising FIDAMs.
### Feature Removal
It is not straightforward to define the absence of a feature in a model's input. The main goal here is to replace the removed feature with a neutral **baseline** that adequately represents the absence of the feature. Methods often make use of a neutral input feature, the **static baseline** \(\mathbf{x}^{\prime}\), such as a zero-valued embedding or a pad token:
\[v(\mathbf{x}_{S})=f(\mathbf{x}_{S}\cup\mathbf{x}^{\prime}_{\backslash S}) \tag{1}\]
This may, however, lead to input that lies outside of the original input distribution Kim et al. (2020). The reason why this is problematic is that the model may behave erratically on such modified input, posing issues to the faithfulness of the explanation.
Instead of using a static baseline, we can also opt to use a baseline that is sampled from a _background distribution_Datta et al. (2016). There exist two approaches to this procedure Sundararajan and Najmi (2020); Chen et al. (2020). The **observational conditional expectation** samples the baseline features from a distribution that is conditioned on the set of features that are still present in the input Frye et al. (2020); Aas et al. (2021):
\[v(\mathbf{x}_{S})=\mathbb{E}_{\mathbf{x}^{\prime}_{\backslash S}}\left[f( \mathbf{x}_{S}\cup\mathbf{x}^{\prime}_{\backslash S})\mid\mathbf{x}_{S}\right] \tag{2}\]
The **interventional conditional expectation** drops the conditional, and samples the baseline features from an independent distribution:
\[v(\mathbf{x}_{S})=\mathbb{E}_{\mathbf{x}^{\prime}_{\backslash S}}\left[f( \mathbf{x}_{S}\cup\mathbf{x}^{\prime}_{\backslash S})\right] \tag{3}\]
There exist two motivations for the latter approach: Lundberg and Lee (2017) drop the conditional expectation for computational reasons, allowing them to approximate the observational conditional expectation. Janzing et al. (2020) provide a perspective derived from causality theory, stating that the _intervention_ of removing a feature should break the dependence between the baseline and remaining features, and hence conditioning on these features is fundamentally wrong.
The previous two methods sample baseline values for individual missing features, but we can also compute the expectation over the range of possible baselines. This yields the technique of **expected explanations**Erion et al. (2021), in which attributions with different static baselines are averaged out over a background distribution \(D\):
\[\phi_{i}=\mathbb{E}_{\mathbf{x}^{\prime}\sim D}\left[\phi_{i}(\mathbf{x}; \mathbf{x}^{\prime})\right] \tag{4}\]
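To make the removal strategies above concrete, the sketch below contrasts a static baseline (Eq. 1) with an interventional expectation over a background sample (Eq. 3); the function names and the toy model are our own.

```python
import numpy as np

def value_static(f, x, S, baseline):
    """v(x_S): absent features are replaced by a fixed baseline x' (Eq. 1)."""
    filled = baseline.copy()
    filled[S] = x[S]
    return f(filled)

def value_interventional(f, x, S, background, n_samples=200, seed=0):
    """v(x_S): absent features are drawn from a background distribution,
    independently of the present features (Eq. 3)."""
    rng = np.random.default_rng(seed)
    rows = background[rng.integers(0, len(background), n_samples)].copy()
    rows[:, S] = x[S]
    return float(np.mean([f(row) for row in rows]))

f = lambda v: float(v.sum() > 1.0)                  # toy 'model'
x = np.array([0.9, 0.9, 0.9])
print(value_static(f, x, S=[0], baseline=np.zeros(3)))
print(value_interventional(f, x, S=[0], background=np.random.default_rng(1).random((500, 3))))
```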
### Quantifying Feature Influence
The simplest method of quantifying the influence of a feature is expressed as the output difference after **ablating** the feature:
\[\phi_{i}=v(\mathbf{x})-v(\mathbf{x}_{\backslash i}) \tag{5}\]
Note that this formulation can be combined with any of the feature removal methods: e.g. Occlusion Zeiler and Fergus (2014) combines this influence method with a static baseline (Eq. 1), whereas Kim et al. (2020) combines it with the observational
conditional expectation (Eq. 2), employing BERT as the conditional distribution.
A more involved method leverages a technique from the field of game theory, called the **Shapley value**Shapley (1953). Shapley values were originally introduced in the domain of cooperative games, in which players can form coalitions to change the outcome of the game. This setup can be transferred directly to machine learning models, in which features now take up the role of the players. A Shapley value expresses the contribution of a feature as the marginal gain of including that feature in the input, averaged over all possible coalitions of features.
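As a reference point, an exact (exponential-time) Shapley value can be computed by enumerating all coalitions; the sketch below assumes a value function defined directly over subsets of feature indices, and is only suitable for small inputs.

```python
from itertools import combinations
from math import factorial

def shapley_value(value_fn, n_features, i):
    """Exact Shapley value of feature i; value_fn maps a tuple of present features to v(x_S)."""
    others = [j for j in range(n_features) if j != i]
    phi = 0.0
    for size in range(len(others) + 1):
        for S in combinations(others, size):
            weight = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
            phi += weight * (value_fn(tuple(sorted(S + (i,)))) - value_fn(S))
    return phi

# toy value function with an interaction between features 0 and 1
v = lambda S: 1.0 * (0 in S) + 2.0 * (0 in S and 1 in S)
print([shapley_value(v, 3, i) for i in range(3)])   # [2.0, 1.0, 0.0]
```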
## 4 FIDAMs
We now address a series of interaction methods that we use in our own experiments.
Group Ablation. The feature influence principle of Equation 5 can straightforwardly be extended to _groups_ of features. In our experiments we will focus on pairwise interactions, but any kind of feature subset can be used here.
\[\Gamma_{i,j}=v(\mathbf{x})-v(\mathbf{x}_{\backslash ij}) \tag{6}\]
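A minimal sketch of this pairwise Group Ablation matrix, written against a generic value function that handles the chosen removal strategy; the function names and the toy example are ours.

```python
import numpy as np

def group_ablation_matrix(value_fn, n_features):
    """Gamma_{i,j} = v(x) - v(x without {i,j}) for all feature pairs (Eq. 6).
    value_fn maps the set of removed feature indices to the model output."""
    full = value_fn(frozenset())
    gamma = np.zeros((n_features, n_features))
    for i in range(n_features):
        for j in range(n_features):                 # diagonal entries are single-feature ablations
            gamma[i, j] = full - value_fn(frozenset({i, j}))
    return gamma

x = np.array([1.0, 1.0, 0.5])
def v(removed):                                     # zero-valued static baseline
    kept = x.copy(); kept[list(removed)] = 0.0
    return 2.0 * kept[0] * kept[1] + kept[2]
print(group_ablation_matrix(v, 3))
```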
**Archipelago.** Explaining model behaviour in terms of pairwise interactions will already yield a better portrayal of its internal behaviour than 'flat' attributions, but it neglects the interactions that occur within larger groups of features. Archipelago (Tsang et al., 2020) splits up the feature interaction procedure into two phases: first an interaction detection method is performed that clusters features into interaction sets, and afterwards interaction scores are assigned to these sets as a whole. Interaction detection is based on measuring the non-additive effect of pairs of features. The interaction effect that is assigned to an interaction set \(\mathcal{I}\) is expressed as follows, with respect to a static baseline \(\mathbf{x}^{\prime}\):
\[\Gamma_{\mathcal{I}}=f(\mathbf{x}_{\mathcal{I}}\cup\mathbf{x}^{\prime}_{\backslash\mathcal{I}})-f(\mathbf{x}^{\prime}) \tag{7}\]
Note that Archipelago expresses the interaction effect inversely compared to the Group Ablation procedure: instead of measuring the impact of removing a group of features, we now measure the impact of solely keeping this group in the input.
Shapley(-Taylor) Interaction Index. Both the previous methods base interaction effects on direct output differences. We can modify the formulation of the Shapley value to yield interaction effects. This modification was originally introduced in the field of game theory, called the Shapley Interaction Index (SII, Owen, 1972; Grabisch and Roubens, 1999). Instead of computing the marginal gain that is achieved by a single feature, we now compute the marginal gain of _groups_ of features. The Shapley-Taylor Interaction Index (STII, Sundararajan et al., 2020) is an extension of SII, satisfying additional theoretical properties.
Hessian. Analogous to utilising the gradient for feature attributions, we can employ the second-order derivative to quantify interactions between features, which is captured by the Hessian matrix. Friedman and Popescu (2008) and Sorokina et al. (2008) consider an interaction between two variables to exist when the effect of one variable on the response depends on values of the other variable, which can be expressed in terms of the second-order partial derivative:
\[\Gamma_{i,j}=\left[\frac{\partial^{2}f(\mathbf{x})}{\partial x_{i}\partial x_{ j}}\right]^{2}\]
A common approach when using the gradient of a model as a proxy for feature importance is to multiply it with the input embeddings (Shrikumar et al., 2017; Ancona et al., 2019): in our experiments we consider an analogous method to the Hessian that we call **Hessian \(\times\) Input**.
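A small PyTorch sketch of these two variants for a scalar model output; the exact form of the Hessian \(\times\) Input weighting is our assumption, as the text only specifies a multiplication with the input.

```python
import torch

def hessian_interactions(f, x, times_input=False):
    """Pairwise interaction scores from second-order derivatives of a scalar output f(x)."""
    H = torch.autograd.functional.hessian(f, x)
    if times_input:                                   # one plausible Hessian x Input variant
        H = H * torch.outer(x, x)
    return H ** 2

# toy model with a multiplicative interaction between inputs 0 and 1
x = torch.tensor([0.5, -1.0, 2.0])
f = lambda v: v[0] * v[1] + v[2]
print(hessian_interactions(f, x))
```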
Integrated Hessians. Directly using the Hessian as an explanation method is prone to the same caveats as using the gradient: the interaction signal may vanish due to saturation. Integrated Hessians (IH, Janizek et al., 2021) address this issue by integrating over the Hessian manifold along a path between the input and a baseline. This is achieved by applying the method of Integrated Gradients (Sundararajan et al., 2017) to itself. An IH interaction between features \(i\) and \(j\) can hence be interpreted as the contribution of \(i\) to the contribution of \(j\) to the model's prediction. The path integral between input and baseline is approximated via a Riemann sum interpolation.
Other Methods. The methods explained thus far have all been incorporated in our experimental pipeline. The scope of our work focuses mainly on _pairwise_ interactions, but methods that extract higher-order interactions have been proposed as well (Jin et al., 2020). Comparing such methods to linguistic structure is an exciting avenue that we
leave open to future work. Other interaction methods that were not considered include two methods that preceded Archipelago: Neural Interaction Detection Tsang et al. (2018) and MAHE Tsang et al. (2018). The feature attribution method Contextual Decomposition Murdoch et al. (2018) has been extended to extract interactions as well Singh et al. (2019); Saphra and Lopez (2020); Chen et al. (2020), but these methods place the constraint that only contiguous groups of features can interact. Integrated Directional Gradients Sikdar et al. (2021), an extension of Integrated Gradients to capture _group attributions_, could be adapted to our framework, but we leave this open for future work.
## 5 Evaluating FIDAMs
The final component of our framework is a methodology for evaluating the faithfulness of FIDAMs. To lay a robust foundation for such work, we propose to evaluate a range of interaction methods and baselines on smaller deep learning models (using LSTM and Transformer architectures) that have been trained to recognise formal languages, based on a probabilistic context-free grammar (PCFG).
Our models are trained on a binary language classification task, in which a model needs to learn to discern between well-formed strings and minimally corrupted counterparts. Models are trained to perfection (100% accuracy) on both the train and test set. To obtain perfect performance, a model must rely solely on the grammatical rules that underlie the language, without resorting to spurious heuristics, because only these rules allow the task to be solved completely. This way, due to the controlled nature of the task, we obtain a high degree of confidence about the model's behaviour.
The goal of our experimental approach is to recover the structure of the language _based on the trained model itself_. This is achieved by the FIDAMs outlined in §4. We aim to uncover whether a structural dependency between two features results in a high interaction effect. Since our models have been trained to perfection, this allows us to employ our setup as a way of measuring the **faithfulness** of a FIDAM. A method that assigns a high interaction effect to features that contain a dependency in the original grammar is able to provide a faithful reflection of a model's understanding of the task. By testing a wide range of FIDAMs and baselines we can uncover which configuration yields the most faithful explanations. A graphical overview of our approach is depicted in Figure 1.
Task. The binary language classification task is set up by generating positive examples \(D^{+}\), based on some PCFG, and negative examples \(D^{-}\), derived from minimally corrupting the positive examples. We split the union of these two sets into a random train/test split of 80/20%. We train our models with a default cross-entropy loss, using the AdamW optimiser Loshchilov and Hutter (2019), a learning rate of 0.01, and a batch size of 48.
Models. Our pipeline permits the use of any kind of neural model architecture; in our experiments we considered both LSTMs Hochreiter and Schmidhuber (1997) and Transformers Vaswani et al. (2017). We report the results of the LSTM model, but we observed similar results for Transformers: due to the black-box approach of our explanation procedure the architecture itself is not of great importance. The models are deliberately small: we use an embedding size that is equal to the number of symbols in the language it is trained on, a hidden state size of 20, and a single layer. This results in models that provide a compute-friendly test bed for evaluating the FIDAMs.
Evaluation. We focus on _pairwise_ interactions: interactions between individual pairs of features. A FIDAM that extracts pairwise interactions for an input sequence \(\mathbf{x}\in\mathbb{R}^{N}\) returns a matrix of interaction effects \(\Gamma\in\mathbb{R}^{N\times N}\). Since our goal is to uncover whether structural dependencies result in high interaction effects, we approach the evaluation of the interaction matrix as a retrieval task. By aggregating and normalising the _rank_ of each interaction of interest we can quantify the performance of a FIDAM. We call this metric the **Average Relative Rank** (ARR):
\[ARR(\Gamma,\mathcal{I})=\frac{1}{|\mathcal{I}|}\sum_{i,j\in\mathcal{I}}\frac{R(\Gamma_{i})_{j}}{N-1} \tag{8}\]
where \(\mathcal{I}\) denotes the set of interaction pairs of interest and \(R(\Gamma_{i})\) denotes the rank of each interaction between feature \(i\) and the other features in input \(\mathbf{x}\) (the lowest interaction is ranked 0, and the highest interaction is ranked \(N-1\)). We aggregate these scores over an evaluation set to obtain a general performance score of the FIDAM. A graphical overview of this procedure is provided in Figure 2.
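A direct implementation of this metric could look as follows; names are ours, and ties are broken conservatively.

```python
import numpy as np

def average_relative_rank(gamma, pairs):
    """ARR (Eq. 8): relative rank of each interaction of interest within its row."""
    n = gamma.shape[0]
    ranks = []
    for i, j in pairs:
        rank = int((gamma[i] < gamma[i, j]).sum())     # 0 = lowest, N-1 = highest in row i
        ranks.append(rank / (n - 1))
    return float(np.mean(ranks))

# toy Dyck-style example: the opening and closing bracket (positions 0 and 1) interact
g = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
print(average_relative_rank(g, pairs=[(0, 1), (1, 0)]))   # -> 1.0
```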
Baselines. We consider a range of baselines in our experiments, based on the procedures explained
in §3.1. For the static baselines we consider a zero-valued baseline (\(\mathbf{x}^{\prime}=0\)), and a baseline that utilises a fixed mapping \(T\) based on the original input symbols (\(\mathbf{x}^{\prime}=T(\mathbf{x})\)). Expected attributions are marginalised over samples from the distribution of well-formed strings \(D^{+}\) and corrupted strings \(D^{-}\). The interventional conditional expectation (Eq. 3) is computed with a corpus-wide unigram distribution (\(P(x_{i})\)), a unigram distribution that is conditioned on the sentence position (\(P(x_{i}|i)\)), and as a joint distribution over the missing features (\(P(\mathbf{x}^{\prime}_{\backslash S})\)), that we sample from the training corpus. The observational conditional expectation (Eq. 2) is computed based on the original corpus data.2
Footnote 2: Due to the small scale of the PCFGs considered here we can generate the complete language up to a certain length, and sample from strings that have feature overlap with the features that are still present in the partial input. For more complex tasks an auxiliary LM can be used instead.
## 6 Experiments on Formal Languages
We apply the evaluation procedure of §5 to two formal languages: the Identity Rule language and the Dyck-2 language. In the appendix (§A) we also present results on a palindrome language.
### Identity Rule
The first language we consider is a regular language consisting of strings in which the first two symbols are identical, followed by a random sequence of symbols. The language is formed by the following grammar:
\[\begin{array}{lcll}\text{S}&\rightarrow&x\;x\;\text{A}&\qquad x\in\{a,b,c\}\\ \text{A}&\rightarrow&x\;\text{A}\ \mid\ \epsilon&\qquad x\in\{a,b,c\}\end{array}\]
The only interaction of interest here is between the first two symbols; all subsequent symbols are irrelevant for the prediction. An ARR score of 1.0 then indicates that for all corpus items the interaction between the first two items was the strongest out of all interactions.
We use a corpus size of 1,000, a maximum sequence length of 20, with 3 different input symbols. Corrupted strings are derived by altering one of the first two symbols (e.g. \(aabcbcb\)).
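A minimal sketch of this data-generation step might look as follows; the helper names and the exact sampling of sequence lengths are our own assumptions.

```python
import random

SYMBOLS = ["a", "b", "c"]

def sample_identity_rule(max_len=20):
    # S -> x x A, A -> x A | eps: the first two symbols are identical
    first = random.choice(SYMBOLS)
    tail = [random.choice(SYMBOLS) for _ in range(random.randint(0, max_len - 2))]
    return [first, first] + tail

def corrupt(sequence):
    # alter one of the first two symbols so the identity constraint is broken
    corrupted = list(sequence)
    position = random.choice([0, 1])
    corrupted[position] = random.choice([s for s in SYMBOLS if s != corrupted[position]])
    return corrupted
```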
**Results.** The results for an LSTM that was trained on the language are shown in Table 1. Due to the simplicity of the language, and for brevity, we only report results on three baselines. A static zero-valued baseline provides imperfect interactions for all methods. The Hessian, which does not depend on any baseline, performs better than all other methods here. When sampling the baseline, however, multiple methods perfectly retrieve the interaction between the first two symbols for all corpus items. Interestingly, Group Ablation and IH benefit from sampling from the distribution of well-formed items, whereas Archipelago performs best when sampling from the distribution of corrupted items.
### Dyck-2
The Dyck language is the language of well-nested brackets, and is a popular testbed for research on formal languages. It is a context-free language with center embedding clauses, requiring a model to keep track of a memory stack while processing a string. Earlier work on Dyck languages has shown that a wide range of neural model architectures can learn the grammar, including LSTMs (Sennhauser and Berwick, 2018), memory augmented RNNs (Suzgun et al., 2019), Transformers (Ebrahimi et al., 2020), and handcrafted RNNs (Hewitt et al., 2020; Hao, 2020).
\begin{table}
\begin{tabular}{l c c c c} & _NB_ & **0** & \(\mathbf{x}^{\prime}\thickspace D^{+}\) & \(\mathbf{x}^{\prime}\thickspace D^{-}\) \\ \hline Group Ablation & – & 0.49 & **1.00** & 0.53 \\ Archipelago & – & 0.30 & 0.24 & **1.00** \\ SII & – & 0.70 & **1.00** & **1.00** \\ STII & – & **0.83** & **1.00** & **1.00** \\ Hessian & **0.93** & – & – & – \\ Hessian\(\times\)Input & 0.66 & – & – & – \\ IH & – & 0.81 & **1.00** & 0.31 \\ \end{tabular}
\end{table}
Table 1: Average Relative Rank for the Identity Rule language, columns indicate different baseline procedures. An average rank of 1 indicates that the method (correctly) assigned the interaction between the first two tokens the highest score. _NB_ indicates these methods use no baseline.
Figure 2: Example for the computation of the Average Relative Rank metric. For each row we compute the relative rank of the interaction of interest (here the Dyck language), and these row-wise relative ranks are averaged into a single score between 0 and 1. A random interaction matrix results in an ARR of around 0.5.
We consider the Dyck-2 language, consisting of two types of brackets. The language is formed by the following grammar:
\[\texttt{S}\ \rightarrow\ \texttt{[}\ \texttt{S}\ \texttt{]}\ \texttt{|}\ \texttt{(}\ \texttt{S}\ \texttt{)}\ \texttt{|}\ \texttt{S}\ \texttt{S}\ \texttt{|}\ \epsilon\]
We use a corpus size of 15,000, a maximum sequence length of 20, and a maximum branching depth of 4. We use the same branching probabilities as Suzgun et al. (2019), which results in a uniform probability of 0.25 for each rule. Corrupted strings are derived by flipping a single bracket to any other bracket. For the baseline mapping \(T(\mathbf{x})\), we map a bracket to the other bracket type, i.e. '(' \(\leftrightarrow\) '[' and ')' \(\leftrightarrow\) ']'. This results in a baseline that is of the same structure as the original input, but without feature overlap.
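A rough sketch of this setup is given below; treating the branching rule SS as one extra level of depth, and the handling of empty strings, are simplifications of ours.

```python
import random

BRACKETS = ["(", ")", "[", "]"]
FLIP = {"(": "[", "[": "(", ")": "]", "]": ")"}  # the fixed baseline mapping T(x)

def sample_dyck2(max_depth=4, depth=0):
    # S -> [S] | (S) | SS | eps, each rule with probability 0.25
    if depth >= max_depth:
        return ""
    rule = random.random()
    if rule < 0.25:
        return "[" + sample_dyck2(max_depth, depth + 1) + "]"
    if rule < 0.50:
        return "(" + sample_dyck2(max_depth, depth + 1) + ")"
    if rule < 0.75:
        return sample_dyck2(max_depth, depth + 1) + sample_dyck2(max_depth, depth + 1)
    return ""

def corrupt(string):
    # flip a single bracket to any other bracket type (assumes a non-empty string)
    position = random.randrange(len(string))
    replacement = random.choice([b for b in BRACKETS if b != string[position]])
    return string[:position] + replacement + string[position + 1:]
```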
**Results.** We report the results for this language in Table 2, computed over all our baselines for an LSTM. The zero-valued baseline again turns out to be mediocre: none of the methods achieves a high ARR score with it. The method that performs best is the fixed mapping \(T(\mathbf{x})\). For Group Ablation, SII, and STII this results in a perfect ARR; for IH it is the best performing baseline.
It is encouraging that a baseline exists that results in perfect ARR scores, but this mapping depends strongly on the nature of the Dyck task itself. It is, for example, unclear how this static mapping would transfer to the natural language domain. Ideally, a more general solution makes no strong assumptions about the baseline itself. The three other baseline types in Table 2 may provide such a solution, as these only depend on the access to the original training data. Out of these, the observational baseline performs best: for the SII and STII methods this baseline performs nearly on par with the static mapping. Obtaining this conditional distribution is challenging for more complex tasks, and it can be seen here that the interventional baseline with a joint distribution over the missing features performs well too.
## 7 A Natural Language Case Study: CoLA
As a case study on a larger scale natural language task, we apply our methodology to language models fine-tuned on the CoLA task (Warstadt et al., 2019). CoLA is part of the GLUE Benchmark (Wang et al., 2019), and is defined as a binary classification task of determining the linguistic acceptability of a single input sentence. The task consists of linguistically valid sentences, and sentences that contain either a syntactic, semantic, or morphological violation. A model that performs well on this task must have a thorough grasp of grammatical structure, and as such it provides a useful test bed for our FIDAM evaluation procedure.
In the previous experiments there was a degree of certainty about the structure that must be encoded by the model. In the natural language domain, however, we do not have such certainty, and should therefore be careful of making strong claims about faithfulness. Furthermore, natural language is highly multi-faceted and can not be captured by a single hierarchical structure that covers all these facets. Nonetheless, we consider it valuable to test our setup on a natural domain in order to see if interesting differences between FIDAMs arise, and whether particular facets of language such as syntactic dependency structure can be extracted.
### Experimental Setup
For our experiment we consider the RoBERTa-base model (Liu et al., 2019), which obtains a Matthews Correlation Coefficient score of 69.70 on the in-domain validation split. We filter out sentences that contain words that are split into multiple subwords by the tokenizer, since this leads to issues with aligning the interactions of multiple subwords to the dependency graph that is used for evaluation.
\begin{table}
\begin{tabular}{l c c c c c c c c c} & \multicolumn{3}{c}{_Static_} & \multicolumn{3}{c}{_Expected_} & \multicolumn{3}{c}{_Interventional_} & _Observational_ \\ \cline{3-10} & _No baseline_ & **0** & \(T(\mathbf{x})\) & \(D^{+}\) & \(D^{-}\) & \(P(x^{\prime}_{i})\) & \(P(x^{\prime}_{i}|i)\) & \(P(x^{\prime}_{\setminus S})\) & \(P(x^{\prime}_{\setminus S}|x_{S})\) \\ \hline Group Ablation & – & **0.684** & **1.000** & 0.916 & 0.884 & 0.822 & 0.821 & 0.938 & 0.956 \\ Archipelago & – & 0.466 & 0.528 & 0.250 & 0.554 & – & – & – & – \\ SII & – & 0.555 & **1.000** & **0.921** & **0.895** & 0.876 & 0.885 & 0.923 & 0.989 \\ STII & – & 0.583 & 0.999 & 0.876 & 0.820 & **0.881** & **0.906** & **0.952** & **0.991** \\ Hessian & 0.413 & – & – & – & – & – & – & – & – \\ Hessian\(\times\)Input & **0.542** & – & – & – & – & – & – & – \\ IH & – & 0.591 & 0.837 & 0.723 & 0.665 & – & – & – & – \\ \end{tabular}
\end{table}
Table 2: Average Relative Ranks for the Dyck language (higher indicates stronger alignment with Dyck rules), columns indicate different baseline procedures.
Furthermore, we limit sentences to a maximum length of 14 in order to allow the STII and SII methods to be computed exactly without approximations. This resulted in a subset of around 60% of the original in-domain validation split that we will use in our experiment.
We evaluate the FIDAM scores on the dependency parse tree of the sentence, that we obtain with the parser of spaCy Honnibal et al. (2020). The ARR score is computed based on the interaction of each token with its _parent_ token. We omit the interaction of the token that has the root node as its parent. An example of this procedure can be found in Appendix B. Do note that our evaluation procedure is one of many possibilities: we make the assumption that a token should interact strongly with its parent, but other interactions are likely to play a role within the model as well. We leave a more detailed investigation into using different types of linguistic structure open for future work.
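A rough sketch of this evaluation, assuming token-level interaction scores already aligned with spaCy's tokenisation (the function name is ours):

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_arr(sentence, gamma):
    """gamma[i, j]: interaction between tokens i and j of the parsed sentence."""
    doc = nlp(sentence)
    n = len(doc)
    rel_ranks = []
    for token in doc:
        if token.head == token:      # skip the token attached to the root node
            continue
        ranks = np.argsort(np.argsort(gamma[token.i]))
        rel_ranks.append(ranks[token.head.i] / (n - 1))
    return float(np.mean(rel_ranks))
```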
We again consider the FIDAMs of Group Ablation, STII/SII, and Integrated Hessians. We leave out Archipelago, since its procedure of assigning features to a single interaction set is not feasible with our setup, in which multiple child tokens might be interacting with the same parent token. Due to computational constraints we were unable to compute the full Hessian matrix of the language model, whose computation scales quadratically in the number of input _neurons_ Bishop (2007, §5.4). For the static baselines we again consider the zero-valued baseline, as well as the <pad> token. The interventional baselines are obtained by computing simple count-based distributions over a sample of 100,000 sentences from the Google Books corpus. The distributions are based on the tokenization of the model's tokenizer, and allow for computationally efficient sampling. We leave the incorporation of an observational baseline for future work, where an auxiliary masked LM might provide a useful conditional probability distribution.
### Results
The results for the experiment are shown in Table 3. As expected, due to reasons outlined at the start of this section, none of the methods reaches ARR scores that are close to 1. Nonetheless, it is encouraging to see that various method/baseline combinations attain ARR scores that are far above chance level, indicating that there exists a strong degree of alignment between feature interactions and dependency structure. Contrary to the Dyck results, using a zero-valued baseline yields some of the highest ARR scores, which indicates that within RoBERTa's embedding space this baseline represents a better neutral value.
A closer inspection of these results shows that the ARR scores are strongly negatively correlated with sentence length: for Group Ablation with a <pad> baseline, for example, we obtain a Spearman correlation of -0.38 (\(p\ll 0.001\), regression plot in Appendix C). This is not surprising: as the sentence length increases, the chance of a token's largest interaction being with its parent decreases. Another correlation of interest is between the ARR score and the model's prediction of a sentence's acceptability. A high correlation would indicate that the FIDAM's alignment with dependency structure is indicative of the model's performance. For this we obtain a Spearman correlation of 0.14 (\(p=0.036\)): a relatively weak result that indicates that the structure our FIDAM extracted is only partly driving the model's comprehension of the sentence structure.
## 8 Discussion & Conclusions
In this paper, we have presented a framework for characterising FIDAMs and evaluating their faithfulness. For the characterisation we set out two dimensions, feature removal and feature influence, along which existing FIDAMs can be characterised, by extending the 'Explaining by Removing' framework of Covert et al. to also apply to FIDAMs. This allows us to place each of the known FIDAMs in a two-dimensional grid, and to define novel variants of these models. As such, many of the methods that we incorporated in our experiments are novel FIDAMs, such as combining Archipelago with expected explanations and STII with an observational baseline.
To assess the faithfulness of FIDAMs, we made use of formal language theory and 'grey box models'.
\begin{table}
\begin{tabular}{l c c c c} & \multicolumn{2}{c}{_Static_} & \multicolumn{2}{c}{_Interventional_} \\ \cline{2-5} & **0** & \textless{}pad\textgreater{} & \(P(x^{\prime}_{i})\) & \(P(x^{\prime}_{\setminus S})\) \\ \hline Group Ablation & 0.702 & **0.757** & 0.518 & 0.491 \\ SII & **0.746** & 0.668 & **0.714** & **0.696** \\ STII & 0.741 & 0.708 & 0.704 & 0.658 \\ IH & 0.577 & 0.516 & – & – \\ \end{tabular}
\end{table}
Table 3: Average Relative Ranks for the dependency tree recovery of RoBERTa fine-tuned on CoLA.
We use formal grammars to generate multiple datasets, each with known feature interactions, and train deep learning models to perfection on those datasets. Using FIDAMs, we can then extract the learned feature interactions based on the model itself, and compare these interactions to the dependencies in the original grammar. We demonstrate that only specific combinations of FIDAMs and baselines are able to retrieve the correct interactions, while methods such as Archipelago and Integrated Hessians consistently fail to do so.
Finally, we tested our methodology on a natural language case study using a model fine-tuned on the CoLA task for linguistic acceptability. Our results on the formal language tasks either did not turn out to be predictive of this experiment or, alternatively, the results _were_ predictive but the LMs made less use of dependency graph information than we might have expected. This illustrates the challenge of the Attribution Generalisation problem, and the open question remains how we can transfer faithfulness guarantees from a synthetic, controlled context to the domain of natural language and LLMs.
We do show, however, that under certain configurations feature interactions align to some degree with the (syntactic) dependency structure of a sentence. This paves the way for revealing linguistic structure in a more direct way than, for instance, can be achieved with Structural Probes (Hewitt and Manning, 2019). Investigating whether different methods and baseline configurations are able to retrieve different aspects of structure is an exciting next step that we look forward to exploring in more detail. This could be examined, for instance, through the lens of contrastive explanations Yin and Neubig (2022), a procedure that demonstrates that different baselines can reveal different aspects of linguistic structure. Furthermore, investigating the role that attention plays in modelling interactions could be a fruitful line of work, for instance by incorporating _context mixing_ methods into our pipeline, such as _Value Zeroing_ (Mohebbi et al., 2023) and _ALTI_ (Ferrando et al., 2022).
## 9 Limitations
Our work has only considered _pairwise_ interactions, but linguistic structure can also manifest through higher-order interactions. We show that our results on small-scale formal languages are different from our results on a natural language task. It would be premature to conclude that small-scale, synthetic tasks cannot be predictive of behaviour on more complex tasks, and a more detailed investigation into the properties of the task that play a role is a viable next step. Some of the FIDAMs we considered, most notably SII and STII, are intractable for larger inputs (scaling \(O(2^{n})\)), and a necessary step in employing these methods on larger models is to construct better approximation procedures, e.g. by adapting SHAP to SII as has been done before for tabular data by Lundberg et al. (2018). More generally, although we believe our probabilistic formal language setup provides an important step forward, solving the Attribution Generalization problem - i.e., showing that results for small setups generalize to very large models - remains a key open problem.
|
2308.15342 | The Terzina instrument onboard the NUSES space mission | In this paper we will introduce the Terzina instrument, which is one of the
two scientific payloads of the NUSES satellite mission. NUSES serves as a
technological pathfinder, hosting a suite of innovative instruments designed
for the in-orbit detection of cosmic rays, neutrinos, and gamma rays across
various energy ranges. The Terzina instrument itself is a compact telescope
equipped with Schmidt-Cassegrain optics. Its primary objective is to detect
Cherenkov radiation emitted by Extensive Air Showers generated by the
interaction of high-energy (> 100 PeV) cosmic rays with the Earth's atmosphere.
Terzina represents a critical step forward in the development of future
space-based instruments aimed at detecting upward-moving showers induced by
tau-leptons and muons resulting from the interaction of high-energy
astrophysical neutrinos with the Earth. In this paper, we will delve into the
key technical aspects of the Terzina instrument, its capabilities, and its
potential for detection. | R. Aloisio, L. Burmistrov, A. Di Giovanni, M. Heller, T. Montaruli, C. Trimarelli | 2023-08-29T14:39:35Z | http://arxiv.org/abs/2308.15342v1 | # The Terzina instrument onboard the NUSES space mission
###### Abstract:
In this paper we will introduce the Terzina instrument, which is one of the two scientific payloads of the NUSES satellite mission. NUSES serves as a technological pathfinder, hosting a suite of innovative instruments designed for the in-orbit detection of cosmic rays, neutrinos, and gamma rays across various energy ranges. The Terzina instrument itself is a compact telescope equipped with Schmidt-Cassegrain optics. Its primary objective is to detect Cherenkov radiation emitted by Extensive Air Showers generated by the interaction of high-energy (> 100 PeV) cosmic rays with the Earth's atmosphere. Terzina represents a critical step forward in the development of future space-based instruments aimed at detecting upward-moving showers induced by tau-leptons and muons resulting from the interaction of high-energy astrophysical neutrinos with the Earth. In this paper, we will delve into the key technical aspects of the Terzina instrument, its capabilities, and its potential for detection.
## 1 Introduction
The NUSES (Neutrinos and Seismic Electromagnetic Signals) satellite mission is a collaborative project led by the Gran Sasso Science Institute (GSSI) aimed at exploring new scientific and technological pathways for future astroparticle physics space-based detectors. This project is conducted in collaboration with the Istituto Nazionale di Fisica Nucleare (INFN), the Italian Space Agency (ASI), several Universities in Italy, the University of Geneva in Switzerland and the University of Chicago in the USA. The NUSES mission is supported by Thales Alenia Space Italy (TAS-I), industrial partner providing the satellite platform 2MF/NIMBUS (New Italian Micro BUS) with a modular and flexible design based on additive manufacturing techniques. The NUSES satellite, scheduled to launch in the second half of 2025 under the management of ASI, will be a ballistic mission without orbital control, operating at a Low Earth Orbit (LEO), with an altitude at the Beginning of Life (BoL) of 535 km, with a high inclination of 97.8\({}^{\circ}\) (LTAN = 18:00) in a Sun-synchronous orbit along the day-night boundary. The nominal duration of the NUSES mission is three years (End of Life, EoL).
The NUSES satellite will host two innovative scientific payloads: Zire and Terzina. In this proceedings paper, we will focus on the Terzina instrument, while detailed information about the Zire detector can be found in [1].
Terzina is a telescope specifically designed for the detection of Cherenkov light emitted by Extensive Air Showers (EAS) induced by high-energy Cosmic Rays (CR) and neutrinos in the Earth's atmosphere. In astrophysical environments, high-energy neutrinos are produced through the decay chain of pions, leading to an equipartition (due to flavour oscillation) among the three different leptonic flavours when observed at the Earth. At sufficiently high energy (\(E>1\ PeV\)), tau neutrinos and, to a lesser extent, mu neutrinos passing through the Earth can produce \(\tau\) and \(\mu\) leptons, which can emerge from the Earth and decay or interact in the atmosphere when the elevation angle of the neutrino momentum with respect to the Earth's surface is small (Earth skimming events). As a result, Earth skimming neutrinos generate EAS moving in the atmosphere from bottom to top [2], similar to the EAS produced by charged particles (CR) impinging on the atmosphere from above the Earth's limb [2]. The Cherenkov emission from these EAS can be detected by space-based instruments, providing a unique signal for Low Earth Orbit (LEO) satellites [3, 4], which, given the high exposures, could potentially revolutionise the observation of high-energy neutrinos and CR. Terzina serves as a technological pathfinder, aiming to detect the EAS Cherenkov emission and demonstrate the viability of the space-based detection technique.
This paper presents a general description of the Terzina telescope and its observational capabilities.
## 2 Cherenkov emission observed from space
The Cherenkov emission produced by an EAS is mainly due to the high-energy (\(E>100\) MeV) electron-positron pairs generated in large numbers during the shower development. Thus, the number of Cherenkov photons emitted by an EAS is directly proportional to the shower energy, which corresponds to the energy of the primary particle that initiated the cascade. Considering the specific characteristics of the Terzina telescope, particularly the area of its primary mirror
(approximately 0.1 \(m^{2}\)), as discussed in the next section, it is capable of detecting the Cherenkov emission only from EAS with energies exceeding 100 PeV. Consequently, Terzina is expected to predominantly observe CR with trajectories above the Earth's limb due to their higher flux compared to neutrinos (roughly 4 orders of magnitude higher). Nevertheless, the Cherenkov signal produced by these CR events, apart from the incoming direction, exhibits nearly identical properties to the expected signal from neutrino events occurring below the limb, such as similar wavelength spectra of the arriving photons, as well as comparable spatial profiles and time distributions. Thus, above-the-limb CR events serve as a reliable benchmark for directly testing the various components of an in-orbit Cherenkov telescope (e.g., optics, photo-sensor, electronics, and triggers) during the actual mission. This strategic approach underpins the Terzina mission, which aims to validate the detection technique through in-orbit testing.
In this section we will briefly review the nature of Cherenkov emission as observed from a space-based telescope in the case of above-the-limb EAS. The results presented are based on the EAScherSim computational framework (c4341.gitlab.io/easchersim/index.html), a simulator designed for modelling the production and atmospheric transport of Cherenkov photons by EAS, built upon the findings discussed in [2]. To provide an estimate of the expected signal in the Terzina telescope, we consider the case of EAS generated by protons. Due to the geometry of above-the-limb trajectories, a significant portion of the particle cascade occurs at high altitudes in a rarefied atmosphere. Consequently, the generation of optical Cherenkov emission is limited, but so is its atmospheric attenuation during photon propagation. Therefore, a detailed calculation of the Cherenkov signal strength and geometry is necessary to determine the overall instrumental sensitivity to such events.
The results presented in figures 1, 2 show several important points. Given the geometry of the observation from the Terzina altitude (figure 1 left panel) and the characteristics of the atmosphere, Cherenkov emission can be observed from a thin layer of the atmosphere, with an angular size of less than \(1^{\circ}\), which corresponds to altitudes above the Earth's limb that span from 20 km up to 50 km (the Earth's limb is seen by Terzina at an elevation angle \(\theta_{d}=67.5^{\circ}\)), as follows from the central and right panels of figure 1. The Cherenkov signal is a burst of visible-UV photons with a typical duration of a few tens of nanoseconds (left panel of figure 2), distributed on a cone with a very narrow
Figure 1: [Left panel] Schematic of the orbital configuration and the geometry of an above-the-limb event. [Central panel] Distribution of the relevant line of sight angles for different values of the proton EAS energy (as labelled). [Right panel] Distribution of the Cherenkov photons produced by a proton EAS of 100 PeV energy as function of the viewing angle and above the limb altitudes of the first interaction point.
aperture (\(\delta\simeq 1^{\circ}\)) around the EAS axis, which corresponds to the direction of motion of the primary particle that generated the EAS. At the operative altitudes of Terzina the cone has a typical base radius of a few tens of km, with a flux integrated over the burst duration of about 100 photons per \(m^{2}\), in the case of a proton EAS with 100 PeV energy (central panel of figure 2). Finally, the propagation of photons across the Earth's atmosphere is also of interest: they mainly suffer absorption in the ozone layer, which reduces the photon spectrum between wavelengths of 500 nm and 700 nm, an effect that is progressively attenuated for EAS generated at increasing altitudes (due to the reduced ozone column traversed), as follows from the right panel of figure 2.
## 3 The Terzina instrument
The Terzina detector is composed of a near-UV-optical telescope, with Schmidt-Cassegrain optics, and the Focal Plane Assembly (FPA), figure 3 left panel. The optical system of the telescope is based on a dual mirror configuration composed of two parabolic mirrors: primary, with radius 394 mm, and secondary, with radius 144 mm, placed at a relative distance of 280 mm. The FPA has a maximum radius of 121 mm and is placed at a distance of 40 mm from the primary mirror. This configuration is chosen to maximise the focal length, up to 925 mm, in a compact telescope which, including the baffles needed to block stray light from directly reaching the FPA (left panel of figure 3), should fit in an envelope of 600x600x730 mm\({}^{3}\). The telescope will operate inclined by 67.5\({}^{\circ}\) with respect to nadir, with the optical axis pointing towards the dark side of the Earth's limb; the expected duty cycle is around 40%. The star tracker system of the satellite platform maintains the optical axis configuration with a high accuracy of 0.1\({}^{\circ}\). The total weight of the Terzina instrument (telescope and FPA) is around 35 kg.
The FPA is designed to detect photons from both below and above the limb. It has a rectangular shape with a 2 : 5 aspect ratio. It is composed of 10 Silicon Photomultiplier (SiPM) arrays [7] of \(8\times 8\) pixels forming 2 rows of 5 arrays each (640 pixels overall, see figure 3 right panel). Given the Schmidt-Cassegrain optics, the upper row of 5 SiPM arrays will observe events coming from below the Earth's limb (red area in right panel of figure 3); this part of the FPA will perform a clear characterisation of the background and is unlikely to observe neutrino-induced EAS. On the other
Figure 2: [Left panel] Temporal evolution of the Cherenkov burst of a 100 PeV proton EAS for different values of the altitude of the first interaction point (as labelled). [Central panel] Total flux of Cherenkov photons at the Terzina (BoL) altitude integrated over 100 ns as function of the distance from the EAS axis, in the case of a proton EAS of 100 PeV energy for different altitudes of the first interaction point of the proton (as labeled). [Right panel] Spectrum of Cherenkov photons produced by a proton EAS of 100 PeV energy at the Terzina (BoL) altitude for different altitudes of the first interaction point of the proton (as labeled).
hand, the lower row of 5 SiPM arrays will observe events coming from EAS generate by CR from above the limb, with the blue area in the right panel of figure 3 signalling the most contributing part of the atmosphere. The axis in the right panel of figure 3 show the length probed by the telescope at the Earth, along the limb line and across it. Terzina observes a vast volume of the atmosphere with a section across the Earth's limb of 360 x 140 km\({}^{2}\). Given the focal length of the telescope \(F_{l}=925~{}mm\) and the SiPM pixels size \(r_{SiPM}\simeq 3~{}mm\), the Field-of-View (FoV) per pixel of the FPA can be estimated as \(\mathrm{FoV}_{pix}=\arctan(r_{SiPM}/F_{l})\simeq 0.18^{\circ}\), with a telescope FoV of 7.20\({}^{\circ}\) (40 pixels) along the Earth's limb and 2.88\({}^{\circ}\) across it (16 pixels). The point spread function (PSF) of the Terzina optical system is compatible with the 3 x 3 mm\({}^{2}\) pixel size chosen for the FPA, with the overall encircled energy always contained inside 1.5 mm independently of the inclination angle of the incoming photons (figure 3 central panel).
The SiPM sensors are provided by the Fondazione Bruno Kessler (FBK) and briefly described below (see [5] for a detailed discussion). The camera frontend electronics is composed of 10 Application Specific Integrated Circuits (ASICs) [6], each reading one SiPM array with \(8\times 8\) channels. The ASIC has an input amplification stage and digitises signals, upon validation of trigger conditions, as determined by the trigger logic implemented in the ASICs and in a dedicated Field Programmable Gate Array (FPGA). The ASIC digitises the signal on a programmable time interval (see below) that spans from a minimum of 180 ns up to 1.280 \(\mu\)s, enabling pulse-shape reconstruction [6].
In order to build a complete simulation chain of the Terzina detector it is important to estimate the expected background. This is composed of: the Night Glow Background (NGB) of visible light and the background radiation of charged particles in the 100 keV - 100 MeV energy range. The rate per pixel due to the NGB (\(R^{NGB}_{pix}\)) has been estimated using the formula [8]: \(R^{NGB}_{pix}=\eta\times\Delta\Omega\times\phi_{NGB}\times S\times PDE_{eff}\), where: \(S=0.1\) m\({}^{2}\) is the collecting area of the telescope's primary mirror; \(PDE_{eff}=0.1\) is the total Photon Detection Efficiency (PDE) calculated from the convolution of the SiPM PDE and the NGB spectrum (see figure 4 left panel); \(\Delta\Omega\simeq(FoV_{pix})^{2}\) is the pixel viewing solid angle; \(\phi_{NGB}=1.55\times 10^{4}~{}m^{-2}sr^{-1}ns^{-1}\) is the total integrated NGB flux in the wavelength range \(\lambda=300~{}nm\) to \(\lambda=1000~{}nm\)[8]; \(\eta=6\) is a conservative safety factor that takes into account the expected large fluctuations of the NGB flux. The rate per pixel due to the NGB estimated from these reference values is \(R^{NGB}_{pix}\sim\)10 MHz.
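A short numerical check of these reference values (a sketch; the variable names are ours):

```python
import numpy as np

# Geometry: focal length and pixel size set the per-pixel field of view
F_l = 925e-3        # focal length [m]
r_pix = 3e-3        # SiPM pixel size [m]
fov_pix = np.degrees(np.arctan(r_pix / F_l))   # ~0.18 deg per pixel

# Night Glow Background rate per pixel, using the reference values from the text
eta = 6                                  # safety factor
S = 0.1                                  # mirror collecting area [m^2]
pde_eff = 0.1                            # effective photon detection efficiency
phi_ngb = 1.55e4 * 1e9                   # photons / (m^2 sr s), converted from ns^-1
d_omega = np.radians(fov_pix) ** 2       # pixel viewing solid angle [sr]
r_ngb = eta * d_omega * phi_ngb * S * pde_eff
print(f"FoV per pixel: {fov_pix:.2f} deg, NGB rate per pixel: {r_ngb/1e6:.1f} MHz")
```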
Figure 3: [Left panel] Scheme of the optics (M1, M2 primary and secondary mirror) and the FPA of Terzina with the baffles structure to protect from straight light pollution. [Central panel] Point spread function with the encircled energy on the FPA produced by photons for different photons incidence angles respect to the telescope focal axis (as labelled). [Right panel] Scheme of the focal plane with the pixels structure, coloured bands show the corresponding observed regions below (red band) and above (blue band) the limb. The axis label the corresponding length in km probed at the Earth along the limb line (x axis) and across it (y axis).
The background radiation expected at the operating altitudes of Terzina was estimated using the SPENVIS computation scheme (www.spenvis.oma.be), focusing on the dominant component of electrons and protons in the 100 keV - 100 MeV energy range, coupled with a Geant4 simulation (geant4.web.cern.ch/geant4) of the full detector (mechanical structure, optics and FPA). The effect of the in-orbit background radiation on the FPA is twofold: on the one hand it can mimic events, as for Cherenkov emission produced by electrons inside the telescope's optical/mechanical parts hitting the SiPM layer; on the other hand it produces progressive sensor damage, with an increasing Dark Count Rate (DCR) during the mission.
The in-orbit time evolution of the SiPM characteristics and the related power consumption are crucial factors that should be taken into account in choosing the sensor technology [5]. The SiPMs chosen are the NUV-HD series produced by FBK and designed for the near-UV visible wavelengths [9]. The NUV-HD SiPM technology has typical operating parameters for 35 \(\mu\)m cell-size given by: DCR \(\simeq\) 100 kHz/mm\({}^{2}\), after-pulsing AP \(\simeq\) 5% and optical crosstalk CT \(\simeq\) 5%\(-\)20%. In figure 4 left panel, we plot the PDE of different SiPMs produced by FBK; our baseline solution is the NUV-HD without coating (blue line in left panel of figure 4) [5, 7]. Given the SiPM choice we can simulate the background rate due to NGB and DCR at different times of the Terzina mission: at BoL, after the first and second years, and at EoL; the rates obtained are 11 MHz, 22 MHz, 33 MHz and 44 MHz, respectively [5]. In the right panel of figure 4 we plot the trigger rate per pixel as a function of the number of photo-electrons (p.e.) produced by the SiPM at BoL, after one year, after two years and at EoL. The effect of the increased DCR due to radiation damage on the expected trigger rate is evident.
At EoL, the power consumption of the sensors of the camera, operated at an over-voltage of about 6 V, will reach 0.2 W. This figure does not include the power consumption of the 10 ASICs, which are expected to consume 5 mW/channel [6], 3.2 W for the 640 channels of the camera. The
Figure 4: [Left panel] Photon detection efficiency versus photon wavelength for different SiPM types by FBK (as labelled). [Right panel] Single pixel trigger rate as a function of the threshold expressed in photo electrons (p.e.) for DCR and NGB values estimated at different times during the mission life (as labelled). The horizontal blue dashed line corresponds to the maximum event rate of 120 Hz, see text. Horizontal blue line corresponds to the maximum single pixel trigger rate (120 Hz/640 \(\sim\) 0.18 Hz per pixel). The horizontal red line corresponds to 1.25 kHz (maximum single pixel rate with two pixels cluster in the hit-map). The vertical lines shows thresholds for single (blue) and double coincidence (red) trigger logic.
overall power needed to operate the FPA is expected to be lower than 3.5 W.
The ASIC technology is discussed in [6]; here we recall that each ASIC has 64 channels (8x8 pixels of a single SiPM array), and each channel has a memory with a total of 256 cells (12-bit resolution, sampling frequency 200 MHz) arranged in 8 blocks of 32 cells each. Each ASIC has two programmable thresholds (low \(S_{0}\) and high \(S_{1}\)) and a clock cycle \(T_{\rm clk}=5\) ns. The trigger scheme is based on the recognition of specific pixel topologies in the hit-map of the ASIC, depending on the (low or high) threshold exceeded by multiple pixels (see [6] for a detailed discussion). If one channel exceeds \(S_{0}\) (\(S_{1}\)) at the time \(t_{S}\), changes to the pixel states are accepted during the time interval \(\Delta t_{c}=16\,T_{\rm clk}=80\) ns, after which the hit-map is transferred to the FPGA. The event is accepted by the FPGA if the hit-map shows two (three) or more adjacent pixels, with defined topologies, above \(S_{1}\) (\(S_{0}\)). Once the event is accepted it will be centred for digitisation at \(t_{S}\) and digitised through 32 time-samples of the signal spaced by \(T_{\rm clk}\), for a total sampled time interval \(2\Delta t_{c}=160\) ns in the time interval \((t_{S}-\Delta t_{c},t_{S}+\Delta t_{c})\), occupying one memory block per channel (pixel). The hit-map recognition chain has a total duration of 250 ns, given by \(\Delta t_{c}=80\) ns plus the time needed to transfer the hit-map to the FPGA (140 ns) and the time needed by the FPGA to recognise the event and communicate it back to the ASIC (30 ns). Each ASIC is able to manage in parallel a maximum of 8 (the number of memory blocks per channel) digitisation processes [6].
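A simplified sketch of this two-threshold topology check is shown below; accepting any cluster of adjacent pixels of the required size is our own simplification, whereas the real ASIC/FPGA logic uses specific predefined topologies.

```python
import numpy as np
from scipy import ndimage

def has_adjacent_cluster(hits, min_size):
    # label 8-connected groups of pixels that are above threshold
    labels, n_clusters = ndimage.label(hits, structure=np.ones((3, 3)))
    return any(np.sum(labels == k) >= min_size for k in range(1, n_clusters + 1))

def accept_event(amplitudes, s0, s1):
    """amplitudes: 8x8 array of pixel amplitudes in p.e.; s0 < s1 are the thresholds."""
    return (has_adjacent_cluster(amplitudes > s1, 2)
            or has_adjacent_cluster(amplitudes > s0, 3))
```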
The digitised signal in a single pixel is encoded in 12x32+header+padding = 434 bits. Thus, an event in a single ASIC is encoded in 64x434 = 27776 bits. The maximum downlink data stream for Terzina is 45 Gbit/day, which corresponds to an absolute maximum of \(1.29\times 10^{7}\) events/day that can be sent for the offline analysis, roughly an event rate of 150 Hz. The possibility of reprogramming the ASIC thresholds during the flight guarantees the opportunity to adjust \(S_{0}\) and \(S_{1}\) to the changing response of the sensors, in order to maintain a fixed event rate. In the right panel of figure 4, we show the variation of \(S_{0}\) (red vertical lines) and \(S_{1}\) (blue vertical lines) with increasing mission age. These estimates follow from defining \(S_{0}\) as the threshold corresponding to the maximum single pixel rate of \(120/640\) Hz (blue horizontal line) and \(S_{1}\) as the threshold corresponding to the maximum allowed single pixel rate in the case of configurations with two-pixel clusters, 1.25 kHz (red horizontal line), as discussed in [5].
## 4 Conclusions
In conclusion, we provide a preliminary estimation of the detection capabilities of the Terzina instrument. When monitoring below the limb, Terzina will perform background sampling with hit-map recording at a rate that can reach several Hz. From above the limb, Terzina is capable of observing CR events. Figure 5 illustrates the expected detector's aperture associated with CR protons, calculated based on the BoL sensors' response and a single threshold trigger scheme with \(S_{0}=7\) p.e. The results in figure 5 are obtained by combining the Geant4 simulation scheme of the detector with the EASCherSim computation scheme for generating protons' EAS. Considering the observed CR flux at energies exceeding 100 PeV, \(\phi_{CR}\sim 6.6\times 10^{3}\) km\({}^{-2}\)sr\({}^{-1}\)y\({}^{-1}\)[10], assuming a proton fraction of 50% [10], and a detector duty cycle of 40%, the aperture shown in figure 5 demonstrates Terzina's capability to observe a significant number of CR proton events (with \(E\geq 100\) PeV) already during the first year of operation, with an estimated count of no less than
20 events per year. Achieving this level of detection would serve as a clear validation of the experimental technique employed by Terzina.
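A back-of-the-envelope check of this estimate (the effective aperture is not quoted in the text, so here we simply invert the relation to show what aperture the quoted numbers imply; the variable names are ours):

```python
phi_cr = 6.6e3        # CR flux above 100 PeV [km^-2 sr^-1 yr^-1]
proton_fraction = 0.5
duty_cycle = 0.4
events_per_year = 20  # lower bound quoted in the text

# effective proton aperture implied by the quoted numbers [km^2 sr]
implied_aperture = events_per_year / (phi_cr * proton_fraction * duty_cycle)
print(f"implied effective aperture: {implied_aperture:.3f} km^2 sr")  # ~0.015 km^2 sr
```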
**Acknowledgements**. NUSES is funded by the Italian Government (CIPE n. 20/2019), by the Italian Minister of Economic Development (MISE reg. CC n. 769/2020), by the Italian Space Agency (CDA ASI n. 15/2022), by the Swiss National Foundation (SNF grant n. 178918) and by the European Union - NextGenerationEU under the MUR National Innovation Ecosystem grant ECS0000041 - VITALITY - CUP D13C21000430001.
|
2308.01337 | Distribution of Telecom Entangled Photons through a 7.7 km Antiresonant
Hollow-Core Fiber | State of the art classical and quantum communication rely on standard optical
fibers with solid cores to transmit light over long distances. However, recent
advances have led to the emergence of antiresonant hollow-core optical fibers
(AR-HCFs), which due to the novel fiber geometry, show remarkable optical
guiding properties, which are not as limited by the material properties as
solid-core fibers. In this paper, we explore the transmission of entangled
photons through a novel 7.7 km AR-HCF in a laboratory environment at 1550 nm,
presenting the first successful demonstration of entanglement distribution via
a long AR-HCF. In addition to showing these novel fibers are compatible with
long distance quantum communication, we highlight the low latency and low
chromatic dispersion intrinsic to AR-HCF, which can increase the secure key
rate in time-bin based quantum key distribution protocols. | Michael Antesberger, Carla M. D. Richter, Francesco Poletti, Radan Slavík, Periklis Petropoulos, Hannes Hübel, Alessandro Trenti, Philip Walther, Lee A. Rozema | 2023-08-02T18:00:01Z | http://arxiv.org/abs/2308.01337v3 | # Distribution of telecom Time-Bin Entangled Photons through a 7.7 km Hollow-Core Fiber
###### Abstract
State of the art classical and quantum communication rely on standard optical fibers with solid cores to transmit light over long distances. However, recent advances have led to the emergence of hollow-core optical fibers (HCFs), which due to the novel fiber geometry, show remarkable optical guiding properties, which are not as limited by the material properties as solid-core fibers. In this paper, we explore the transmission of entangled photons through a novel 7.7 km HCF, presenting the first successful demonstration of entanglement distribution via long-distance HCF. Our study highlights the low latency and low chromatic dispersion intrinsic to HCF, which can increase the secure key rate in time-bin based quantum key distribution protocols.
## I Introduction
Over the past few years, quantum technologies such as quantum communication [1] and quantum computing [2] have made remarkable steps towards maturation. This is currently leading to the emergence of large-scale quantum networks [3], which require quantum communication links between space-like separated nodes [4]. Consequently, flying qubits, encoded in quantum states of light, must be shared between distant parties, using optical fibers. Various quantum protocols, including quantum key distribution (QKD) [1], quantum money [5; 6] and quantum coin flipping [7], rely on low-loss optical links [8]. In commercially available optical fibers, the lowest propagation loss can be achieved in the relatively narrow telecom C-band (1530-1565 nm), which is located at an absorption minimum [9]. In particular, conventional solid-core telecom single-mode fibre (SMF), which is best suited for this wavelength range, offers the minimum absorption at 1550 nm, while the nearby telecom O-band (1260-1360 nm) offers zero chromatic dispersion at \(\approx\) 1300 nm [9]. Moreover, dispersion shifting, via refractive index modification, allows one to shift the zero-dispersion wavelength into the telecom C-band [10]. Given the optimal performance of solid-core fibers in the C-band, and since quantum protocols are typically very sensitive to loss, most fiber-based quantum communication protocols operate in the C-band. Due to this, there has been a concerted effort to modify many quantum technologies away from their natural wavelengths into the C-band, or to frequency shift the emitted photons to the C-band. For instance, quantum dots are a near-perfect single-photon source at \(\approx 900\) nm [11], and achieving the same performance at 1550 nm remains an open challenge [12; 13]. Similarly, quantum memories, which typically achieve optimal performance in the visible range, are being developed in the C-band [14; 15], simply to accommodate solid-core fibers.
Hollow-core fibers (HCF) [17] can be customized in a manner not possible in solid-core fibers. This degree of freedom could be a key ingredient, allowing future wide-band quantum networks to transmit quantum states of light at their natural wavelength with low loss [18]. Moreover, the guiding core of HCF, which can be evacuated or filled with gas, exhibits a refractive index of \(n\approx 1\)[19], resulting in light transmission at approximately the speed of light in vacuum \(c\). This holds tremendous potential to revolutionize both classical and quantum communication by achieving the lowest latency communication possible [20]. At the same time, HCFs possess inherently low chromatic dispersion. For example, the HCF we use in this work exhibits a dispersion parameter of approximately 2 ps/nm\(\cdot\)km at \(\approx 1550\) nm [16]. Low dispersion (or dispersion compensation at the cost of increased insertion loss) is necessary for time-bin based quantum communication, wherein dispersion currently limits the time-bin spacing and thus the maximal key rate [21]. Furthermore, it has been proposed that HCFs can surpass the fundamental limit of propagation loss imposed by Rayleigh scattering in solid-core fibers [22] over a broad wavelength range, and it has been recently experimentally realized for wavelength at 850 nm and 1060 nm [23].
Given the multitude of appealing properties of HCFs, they are emerging as excellent candidates to serve as the backbone for future quantum networks, supporting a diversity of quantum components operating at their natural wavelengths.
In spite of this promise, to the best of our knowledge, the current record for entanglement distribution through a HCF stands at 36.4 m [24]. Here we present the first distribution of entangled-photon states through a long-distance (7.7 km) HCF. Our fiber is optimized for transmission in the telecom C-band; however, similar performance could be achieved in the visible spectrum. Furthermore, we highlight the advantage of low dispersion in HCF compared to dispersion unshifted SMF (Corning SMF28) by studying the quality of our entanglement distribution as a function of the time-bin spacing. We achieve the transmission of high-concurrence entanglement at 1550 nm in HCF using picosecond-spaced time-bin qubits.
## II Experimental implementation
We study a 7.7-km-long hollow-core nested antiresonant nodeless fiber (NANF) [17]. Our specific NANF was previously investigated in Ref. [16]. It is composed of two individual HCFs, measuring 3.38 km (NANF 1) and 4.34 km (NANF 2), that are spliced together (see Fig. 1c). The average propagation loss of the NANF at a wavelength of 1550 nm is estimated to be 0.82 dB/km, which exceeds that of standard telecom SMF28 fiber. Nevertheless, recent advancements have shown that a so-called double nested antiresonant nodeless fiber (DNANF) lowers the propagation loss to 0.174 dB/km [25], which is approximately equal to that of SMF28 fiber. Moreover, HCFs can even surpass the fundamental loss limit imposed by Rayleigh scattering in solid-core fibers [18].
Our experimental setup is illustrated in Fig. 1. In brief, we generate polarization-entangled photon pairs. We then convert one polarization qubit into a time-bin qubit, then transmit that time-bin qubit through a long distance fiber (either HCF or SMF28), and, finally, we convert the time-bin qubit back into a polarization qubit. We implement this "conversion protocol" for three reasons: (1) We can make use of well established polarization-entangled photon-pair sources. (2) Time-bin encoding is one of the leading candidates for long-distance QKD. (3) By converting back to a polarization qubit, we can easily implement arbitrary measurements on the final two-qubit state, allowing for a complete characterization of the entanglement distribution.
To generate entangled photon pairs at 1550 nm, we use a Type-II SPDC source in Sagnac configuration (Fig. 1 green area) [26]. The measured spectral full width at half maximum (FWHM) bandwidth of the source is \(\Delta\lambda\approx~{}0.859\) nm, and assuming transform limited Gaussian pulses, we calculate the coherence time of the photons to be \(\tau_{\mathrm{c}}\approx 4.1\) ps (which yields \(\tau_{\mathrm{c}}\approx~{}2.4\) ps when expressed as a standard deviation). Initially, we prepare a 2-qubit polarization-entangled Bell state \(\left|\Psi^{-}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|HV\right\rangle_{12}-\left|VH\right\rangle_{12}\right)\)[27]. "Photon 1" is directly coupled to a quantum state tomography (QST) stage consisting of a quarter- (QWP), a half-waveplate (HWP), and a polarizing beam splitter (PBS). Photons at each output port of the PBS are detected by superconducting nanowire single-photon detectors (SNSPDs) from Single Quantum (Fig. 1 yellow area).
Figure 1: **Experimental Apparatus: a)** A schematic of the full experimental setup. Panel **b)** shows a simplified “unfolded” setup with color coded panels corresponding to different sections of Panel **a)**. See the main text for a detailed explanation of each section of the experiment. Panel **c)** displays a scanning electron microscope image of the cross sections of the two NANFs, which are subsequently spliced together to form the resulting 7.72 km HCF [16].
Our detectors have an average detector jitter of \(\approx 21\) ps, an average detection efficiency of 87%, and a dark count rate \(<100\) Hz.
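As a quick sanity check of the coherence time quoted for the source above (a sketch assuming the time-bandwidth product of 0.441 for transform-limited Gaussian pulses):

```python
c = 299_792_458.0          # speed of light [m/s]
wavelength = 1550e-9       # central wavelength [m]
delta_lambda = 0.859e-9    # FWHM bandwidth [m]

delta_nu = c * delta_lambda / wavelength**2          # FWHM bandwidth in frequency [Hz]
tau_c = 0.441 / delta_nu                              # transform-limited Gaussian FWHM
print(f"coherence time (FWHM): {tau_c*1e12:.1f} ps")  # ~4.1 ps
```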
Meanwhile, "Photon 2" is coupled to a "qubit conversion interferometer" (Fig. 1 red area), which converts the polarization qubit to a time-bin qubit. To accomplish this, the photon is first sent to a PBS so that if the photon is horizontally (vertically) polarized, it is transmitted (reflected) to the short (long) path. A HWP at \(45^{\circ}\) in the long path flips the polarization state from \(\left|V\right\rangle\) to \(\left|H\right\rangle\). Then the two paths are recombined using a fiber 50/50 beamsplitter (BS). We discard the cases wherein Photon 2 exits in the lower arm of the BS using the optical circulator shown in the blue section of Fig. 1. While this could be avoided using active optical switches [28, 29], for our picosecond time-bin spacings this would require GHz switching speeds. This results in Photon 2 encoding a time-bin qubit in the upper output mode of the BS. Photon 2's time-bin qubit is entangled with the polarization qubit encoded in Photon 1. Thus, the Bell state can be written as \(\left|\Psi^{-}\right\rangle=\frac{1}{\sqrt{2}}\left|HL\right\rangle_{12}-\left| VS\right\rangle_{12}\), where \(\left|S\right\rangle\) (\(\left|L\right\rangle\)) refers to the photon taking the short (long) path of the interferometer. Furthermore, we can adjust the time-bin spacing \(\Delta t\) by tuning a delay line in the long path.
Photon 2 then proceeds to the blue area of Fig. 1 (propagating clockwise through the loop), and passes a short free-space u-bench hosting a PBS to ensure that both time-bins are horizontally polarized. Up to this point, we use manual fiber polarization controllers to set all fiber-induced polarization transformations to identity. After the u-bench, we send the photon to either the 7.7 km HCF or a SMF28 fiber of comparable length (7.8 km). After the long-distance fiber transmission, the fiber is connected to the circulator, and then to the lower port of the qubit conversion interferometer (back in the red area of Fig. 1). Now, as the photon traverses the interferometer in reverse direction, the time-bin qubit encoded in Photon 2 is converted back to a polarization encoding.
In order to convert the time-bin qubit back to a polarization qubit, the long time-bin mode needs to be delayed, after which the two time bins should be recombined on the PBS. However, since we use passive optics, we cannot selectively delay the long time bin. In other words, at the BS we want the long (short) time bin to take the short (long) path, but half of the time the opposite situation occurs. This leads to three prominent peaks in the photon arrival time in the output path of the PBS (see Fig. 1b). The two side peaks correspond to \(\left|S\right\rangle\) (\(\left|L\right\rangle\)) taking the short (long) path, whereas the center peak arises from the coherent recombination of \(\left|S\right\rangle\) and \(\left|L\right\rangle\). Hence, the polarization in the central peak is now re-entangled with the polarization of Photon 1. Now, back in the polarization basis and post-selecting on the central peak, the ideal Bell state can then be written as \(\left|\Psi^{-}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|HV\right\rangle_{12}-\left|VH\right\rangle_{12}\right)\). By using the same conversion interferometer in a loop geometry (Fig. 1 blue area) we can ensure that the recombination of the time-bins is passively phase stable. In order to ensure that Photon 2 is sent from the first PBS of the conversion interferometer to the QST stage (Fig. 1 yellow area) we use fiber polarization controllers before and after the HCF/SMF28 spool together with the QWP and HWP in the u-bench such that Photon 2's polarization is flipped from \(\left|H\right\rangle\) to \(\left|V\right\rangle\) as it traverses the loop. Finally, after the PBS in the QST stage Photon 2 is detected using SNSPDs.
## III Results
We will first present the results of our latency measurements. The group velocity of light in a guided mode (\(v_{g}=c/n_{g}\), where \(c\) is the speed of light in vacuum) is determined by \(n_{g}\), the group refractive index of the mode [22]. At 1550 nm HCF has a group refractive index of approximately \(n_{HCF}\approx 1\), while in SMF28 fiber \(n_{SMF28}\approx 1.47\) (see Fig. 2c). Consequently, \(v_{g,HCF}>v_{g,SMF28}\). We demonstrate this by recording an arrival-time histogram between Photon 1 and Photon 2. To do so, Photon 1 is directly detected at the QST stage (see Fig. 1), while Photon 2 undergoes the encoding conversion and propagates through either the 7.7 km HCF or the 7.8 km SMF28 fiber spool before being detected at the second QST stage. The resulting histogram is illustrated in Fig. 2a. As expected, when Photon 2 traverses the SMF28 solid-core fiber it arrives \(\approx 13\)\(\mu\)s later than when it propagates through the HCF, resulting in an approximately 34 % lower latency in HCF compared to SMF28. For this measurement we chose a time-bin spacing of \(\Delta t=140\) ps. The inset plots in Fig. 2a also show that, due to the larger dispersion of SMF28 compared to HCF (see Fig. 2b), the three prominent peaks from our passive time-bin recombination are almost entirely unresolvable. However, after transmission through HCF, these peaks remain intact. We address this quantitatively later, by varying the time-bin spacing.
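A rough estimate of this latency difference, assuming the approximate group indices quoted above (\(n_{HCF}\approx 1.00\), \(n_{SMF28}\approx 1.47\)):

```python
c = 299_792_458.0               # speed of light [m/s]
L_hcf, n_hcf = 7.7e3, 1.00      # HCF length [m] and approximate group index
L_smf, n_smf = 7.8e3, 1.47      # SMF28 length [m] and approximate group index

t_hcf = L_hcf * n_hcf / c
t_smf = L_smf * n_smf / c
print(f"HCF: {t_hcf*1e6:.1f} us, SMF28: {t_smf*1e6:.1f} us, "
      f"difference: {(t_smf - t_hcf)*1e6:.1f} us")   # ~13 us earlier through the HCF
```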
To demonstrate the distribution of high-concurrence entanglement through a long-distance HCF, we first characterize the entangled state (nominally a \(\left|\Psi^{-}\right\rangle\) state) produced by our source using two-qubit quantum state tomography (QST). The concurrence \(C\) is measure of entanglement, where for a maximally entangled state \(C=1\), for a fully separable state \(C=0\), and for any non-zero value of \(C\) the state is partially entangled [30]. We generate entangled photon pairs using the SPDC source depicted in Fig.1. The density matrix \(\hat{\rho}_{source}\), reconstructed by QST, is illustrated in Fig. 3a. It has a purity of \(\gamma_{source}=0.9493\pm 0.0008\) and a concurrence of \(C_{source}=0.9482\pm 0.0007\). (Throughout our work, the errors for all results extracted from QST (i.e. the purity and concurrence) are numerically estimated using Monte-Carlo simulations to account for Poisson counting statistics.) We then convert Photon 2's qubit from polarization to time-bin and distribute it through a 7.8 km SMF28 fiber. To ensure that the time-bin qubit is not affected
by dispersive pulse broadening we use a large time-bin spacing of \(\Delta t=520\) ps. Performing QST after this process confirms that the input state is almost unchanged, with \(\gamma_{SMF28}=0.956\pm 0.002\) and \(C_{SMF28}=0.946\pm 0.002\) (see the density matrix \(\hat{\rho}_{SMF28}\) in Fig. 3b).
We then repeat the experiment with the 7.7 km HCF. A QST measurement again confirms that we have successfully distributed one photon of an entangled Bell state through the HCF while maintaining a high concurrence. In particular, we find \(C_{HCF}=0.901\pm 0.006\) and \(\gamma_{HCF}=0.875\pm 0.006\) for \(\hat{\rho}_{HCF}\). The full density matrix is illustrated in Fig. 3c. We do note, however, that \(\hat{\rho}_{HCF}\) has a reduced concurrence compared to the SMF28 results. This is due to a slight depolarization of the photon. The origin of this depolarization effect is not clear, but it is likely related to polarization mode dispersion in the HCF. Note, that HCFs can be optimized for extremely high-purity polarization transmission [31]. Since the transmitted qubit is encoded in the time degree of freedom (DOF), this depolarization should not affect the distributed quantum state. However, because we convert the time-bin qubit back to polarization, the converted polarization state is degraded by the depolarization. This issue could be overcome by adding a polarizer in the setup before performing the back-conversion of the qubit to polarization. However, this would be accompanied by higher photon loss. Alternatively, with the use of ultra-fast active optic elements (e.g. phase shifters and optical switches), one could directly perform QST on the time-bin qubit.
To verify the mild depolarization in our HCF we perform ancilla-assisted quantum process tomography on the HCF polarization channel [32] to reconstruct the underlying quantum process described by a \(\chi\)-matrix [27]. To do this, we directly send Photon 2 through the HCF without converting it to a time-bin qubit, using Photon 1 as the ancilla qubit. The HCF-channel is described by \(\chi_{HCF}\), which we plot in Fig. 3d. For comparison, we model a uniform depolarizing channel of the form \(\chi_{depol}(\rho)=p\,\mathbb{1}\rho\,\mathbb{1}+\frac{1-p}{3}(X\rho X+Y\rho Y+Z\rho Z)\), where \(X\), \(Y\) and \(Z\) are the three Pauli operators (see Fig. 3e) [27]. We then set the probability of the identity component, \(p\), in \(\chi_{depol}\) to match the experimentally reconstructed probability in \(\chi_{HCF}\), which is \(p=0.94\pm 0.02\). We calculate the fidelity between both channels, finding \(F(\chi_{HCF},\chi_{depol})=0.988\). The main discrepancy between our simple model and the experimental channel is evident in the slight non-zero off-diagonal elements in \(\chi_{HCF}\), introducing a dependence of the output state purity on the input polarization. In other words, \(\chi_{depol}\) describes completely uniform depolarization for all input states, whereas \(\chi_{HCF}\) has a preferred transmission axis. We numerically estimate the best and worst case state purities to be 0.97 for approximately vertically-polarized photons and 0.92 for horizontally-polarized photons.
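To make the comparison concrete, one can apply the uniform depolarizing model above to pure input states and compute the resulting purity. The short sketch below is our own illustration (not part of the original analysis), with \(p=0.94\) taken from the reconstructed \(\chi_{HCF}\):

```python
import numpy as np

# Pauli operators and the uniform depolarizing channel
# rho -> p*rho + (1-p)/3 * (X rho X + Y rho Y + Z rho Z)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    return p * rho + (1 - p) / 3 * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

p = 0.94  # identity-component probability reconstructed from chi_HCF

for label, ket in [("H", np.array([1, 0], dtype=complex)),
                   ("D", np.array([1, 1], dtype=complex) / np.sqrt(2))]:
    rho_in = np.outer(ket, ket.conj())
    rho_out = depolarize(rho_in, p)
    purity = np.real(np.trace(rho_out @ rho_out))
    print(f"input |{label}>: output purity = {purity:.3f}")   # ~0.923 for every pure input
```

Under the uniform model the output purity is the same (\(\approx 0.92\)) for every pure input state, which is precisely why the preferred-axis behaviour of \(\chi_{HCF}\) (purities ranging from about 0.92 to 0.97) is not captured by \(\chi_{depol}\).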
So far, we have demonstrated the low-latency distribution of entangled photons through HCF at a group velocity \(v_{g,HCF}\approx c\). However, HCFs also naturally possess low chromatic dispersion, which is crucial for distributing ultra-short time-bins over long distances.
Figure 3: **Entanglement Distribution.** The density matrices measured (**a**) before conversion to a time-bin qubit and transmission through a long-distance fiber, and after conversion to a time-bin qubit with \(\Delta t=520\) ps and distribution through (**b**) the 7.8 km SMF28 fiber and (**c**) the 7.7 km HCF. **d**) The reconstructed \(\chi\)-matrix of the HCF. This describes the quantum channel experienced by a polarization state traversing the fiber. **e**) The theoretical \(\chi\)-matrix of a purely depolarizing channel, with the depolarization strength of \(p=0.94\) set to match the HCF.
Figure 2: **Latency Measurements:****a)** The normalized arrival-time histogram between Photon 1 and 2 after transmission of the entangled time-bin qubit through either 7.7 km of HCF (blue) or 7.8 km SMF28 (red) fiber. The photons arrive 13.11 \(\mu\)s earlier when traversing the HCF. Insets: Zoomed view on both major peaks, showing three peaks discussed in the main text. The effect of dispersion in the SMF28 fiber is evident in the upper inset. **b)** The dispersion parameter \(D\) of both fibers as a function of wavelength. **c)** The group refractive index of HCF and SMF28 plotted versus wavelength.
When the closely-spaced time-bins overlap in the fiber due to dispersion, the quality of the entanglement is compromised. We thus repeat the entanglement distribution using different time-bin spacings \(\Delta t\), from 0 to 520 ps with a step size of 20 ps in both fibers. As the time-bins start to overlap (in practice, when \(\Delta t<6\sigma\)), the concurrence and purity of the distributed state decrease. Here, \(\sigma\) represents the standard deviation of the coincidence histogram recorded by our detectors and time tagger (Swabian Instruments, TimeTagger Ultra), which is a combination of the width of the photon wavepacket and the timing jitter of our detection system.
We find that after propagating through a 7.8 km SMF28 fiber the width of the time-bin peaks is \(\sigma_{SMF28}\approx 54.1\pm 0.3\) ps, while after a 7.7 km HCF, the histogram exhibited a smaller standard deviation of \(\sigma_{HCF}\approx 23.1\pm 0.3\) ps (which is dominated by the detection system jitter). To verify this, we measure the same histogram directly from the source (with no long distance fiber), observing \(\sigma_{source}\approx 21.1\pm 0.2\) ps. This confirms that most of the width in our HCF measurements comes from limitations in our detection system. In particular, from the dispersion parameter \(D\) and the spectral width of the photons (\(\Delta\lambda\approx 0.859\) nm FWHM, or \(\Delta\lambda\approx 0.365\) nm in standard deviation), we can determine the expected pulse broadening after propagating through the respective fibers. Assuming Gaussian wavepackets, the pulses will experience an additional temporal broadening of \(\Delta\tau(z)\approx D\Delta\lambda z\), where \(z\) represents the length of the dispersive medium [22]. For SMF28 fiber with \(D_{\rm SMF28}\approx 18\) ps/nm\(\cdot\)km, we anticipate a pulse broadening of \(\Delta\tau_{\rm SMF28}(z=7.8\,\mathrm{km})\approx 51.2\) ps, in standard deviation. On the other hand, our HCF exhibits lower dispersion, with \(D_{\rm HCF}\approx 2\) ps/nm\(\cdot\)km, resulting in a pulse broadening of only \(\Delta\tau_{\rm HCF}(z=7.72\,\mathrm{km})\approx 5.6\) ps in standard deviation. Combining this with our system jitter we expect to observe a width of \(\sigma^{\prime}_{\rm SMF28}=\sqrt{\Delta\tau_{\rm SMF28}^{2}+\sigma_{\rm source}^{2}}\approx 55.4\) ps for SMF28 fiber and \(\sigma^{\prime}_{\rm HCF}=\sqrt{\Delta\tau_{\rm HCF}^{2}+\sigma_{\rm source}^{2}}\approx 21.8\) ps for HCF. Both of these values agree fairly well with our measured widths. Furthermore, this suggests that our observed \(\sigma_{\rm HCF}\) is primarily influenced by the system jitter, which means that it should be possible to achieve even smaller time-bin spacings in HCF.
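These numbers follow from a one-line estimate; the minimal sketch below (our own illustration) combines the quoted dispersion parameters, spectral width and the 21.1 ps source/jitter width in quadrature:

```python
import math

# Chromatic pulse broadening dtau ~ D * dlambda * z, combined in quadrature
# with the measured source/jitter width (all widths as standard deviations).
dlam = 0.365          # spectral width of the photons, nm (std. dev.)
sigma_src = 21.1      # width measured directly at the source, ps

for name, D, z in [("SMF28", 18.0, 7.8), ("HCF", 2.0, 7.72)]:   # D in ps/(nm km), z in km
    dtau = D * dlam * z                      # ~51.2 ps (SMF28), ~5.6 ps (HCF)
    sigma = math.hypot(dtau, sigma_src)      # ~55.4 ps (SMF28), ~21.8 ps (HCF)
    print(f"{name}: broadening {dtau:.1f} ps, expected width {sigma:.1f} ps")
```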
Our data showing the dependence of the two-qubit concurrence \(C\) and purity \(\gamma\) on the time-bin spacing for HCF and dispersion unshifted SMF28 fiber is presented in Fig. 4a. Clearly, HCF outperforms SMF28 fiber as the time-bin spacing \(\Delta t\) is reduced. In SMF28, we find that the concurrence and purity begin to drop at \(\Delta t\approx 300\) ps, while, due to the smaller dispersion parameter in HCF,
Figure 4: **Time-Bin Spacing Dependence.****a)** The concurrence and purity of the \(\left|\Psi^{-}\right\rangle\)-Bell state after propagation of the time-bin qubit through either 7.7 km HCF (blue) or 7.8 km SMF28 fiber (red) with different time-bin spacings \(\Delta t\). The blue triangles (squares) correspond to the measured purity (concurrence) of the state after HCF transmission, while the solid blue lines represent the simulated data. The red markers and dashed lines correspond to data taken through the SMF28 fiber. Panel **b** (**c**) shows the real and imaginary parts of the reconstructed density matrix after distribution through the SMF28 fiber (HCF), with a time-bin spacing of \(\Delta t=140\) ps.
this drop off does not occur until \(\Delta t\approx 140\) ps in HCF. To illustrate this effect, we present the density matrices measured with \(\Delta t=140\) ps through SMF28 fiber (\(\rho_{SMF28,\Delta t=140\text{ps}}\)) and HCF (\(\rho_{HCF,\Delta t=140\text{ps}}\)) in Fig. 4b and 4c, respectively. In Fig. 4a, the dashed and solid lines represent our simulation data for the concurrence and purity of the distributed state. As presented in the Appendix, our model incorporates the effect of overlapping time-bins as error counts at the detectors. Fig. 4a shows that when \(\Delta t\to 0\) for both SMF28 and HCF, the concurrence of the distributed quantum state is lost. However, even for \(\Delta t\to 0\), some coherence is still preserved and the state purity \(\gamma_{SMF28}\) and \(\gamma_{HCF}\) remain greater than the minimum purity \(\gamma_{min}=\frac{1}{4}\) for a two-qubit maximally mixed state. This is also reproduced by our model.
## IV Discussion
We have presented the distribution of entanglement over a long-distance (7.7 km) HCF. To achieve this, we generated a \(|\Psi^{-}\rangle\)-Bell state between a polarization qubit encoded in one photon and a time-bin qubit in another, transmitting the time-bin qubit through the HCF. We verified our entanglement distribution by performing two-qubit state tomography and reconstructing the resulting density matrix. This allowed us to quantify the concurrence and purity of the quantum state, finding that the concurrence (purity) decreased slightly from \(0.9482\pm 0.0007\) (\(0.9493\pm 0.0008\)) to \(0.901\pm 0.006\) (\(0.875\pm 0.006\)) due to a slight depolarization effect in the HCF. For comparison, we performed entanglement distribution using a 7.8 km SMF28 fiber. We found that, for larger time-bin spacings, the SMF28 fiber outperformed the HCF fiber; i.e. we found no measurable decrease in either the concurrence or purity in this case. However, when we repeated the experiment with different time-bin spacings, we found the HCF preserves the entanglement for much smaller time-bin spacings, because of its low chromatic dispersion. In particular, we found that in our SMF28 fiber the concurrence already decreases for \(\approx 300\) ps spaced time bins, while in HCF the concurrence remains high until \(\approx 140\) ps. Moreover, the decrease observed in the HCF data is primarily due to our detector jitter, rather than dispersion. Although there are techniques to circumvent dispersion in SMF, in HCF low dispersion comes for free. Moreover, HCF has the additional advantages of low optical nonlinearity (which can allow strong classical signals to copropagate with quantum light), and group velocities near \(c\) (for ultra-low latency communication). Given the rapid advancements in high-quality HCF fabrication at a variety of operating wavelengths, our work sets the stage for HCF-based quantum communication protocols and quantum photonic technologies.
## Acknowledgements
We thank Lennart Jehle and Michal Vyvlecka for use of their low-jitter SNSPD system, and we thank Obada Alia and George T. Kanellos for their assistance with initial characterization of the hollow-core fiber.
## Data availability
All the data that are necessary to replicate, verify, falsify and/or reuse this research is available online at [33].
## Funding
This project was funded in whole, or in part, from the European Union's Horizon 2020 and Horizon Europe research and innovation programme under grant agreement No 101071779 (GRAVITES), No 101114043 (QSNP); from the EPSRC Airguide Photonics project (EP/P030181/1); from the European Union's Horizon 2020 Research and Innovation Programme through Quantum-Flagship Project UNIQORN under Grant 820474; from the Austrian Science Fund (FWF) through [F7113] (BeyondC), and [FG5] (Research Group 5); from the AFOSR via FA9550-21- 1-0355 (QTRUST); from the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
2303.03652 | Earthquakes and related anomalous electromagnetic radiation | According to the presented work, VLF/LF electromagnetic emissions might be
declared as the main precursor of earthquakes since based on these very
emissions, it might predict ($M\ge 5$) inland earthquakes. As for ULF
radiations, it governs some processes going on in the
lithosphere-atmosphere-ionosphere coupling (LAIC) system. By these points,
VLF/LF/ULF electromagnetic emissions have to consider more universal fields
than other geophysical field anomalies during the earthquake preparation period
up to aftershocks extinction. | Manana Kachakhidze, Nino Kachakhidze-Murphy, Badri Khvitia | 2023-03-07T05:07:25Z | http://arxiv.org/abs/2303.03652v1 | Earthquakes and related anomalous electromagnetic radiation Title shortened form: Earthquakes electromagnetic anomalies Manana Kachakhidze\({}^{1}\), Nino Kachakhidze-Murphy\({}^{1}\), Badri Khvitia\({}^{2}\)
## Abstract
According to the presented work, VLF/LF electromagnetic emissions may be regarded as the main precursor of earthquakes, since these very emissions make it possible to predict (M\(\geq\) 5) inland earthquakes. As for ULF radiation, it governs some of the processes going on in the lithosphere-atmosphere-ionosphere coupling (LAIC) system. On these grounds, VLF/LF/ULF electromagnetic emissions should be considered more universal fields than other geophysical field anomalies during the earthquake preparation period, up to the extinction of aftershocks.
## 1 Introduction
Nowadays, the problem of investigating the relationship between electromagnetic emissions and tectonic processes in the earth's crust is very important.
This article is devoted to the study of these connections.
Very significant papers are published in the scientific world on the basis of ground-based and satellite data of earth VLF/LF and ULF electromagnetic (EM) emissions observed in the earthquake preparation period (Molchanov, et al., 1998, 2008; Hayakawa, et al., 1996;2013, 2019; 2021; Biagi, 1999; Biagi et al., 2014;2019. Uyeda, et al., 2000; Uyeda, 2013; Hattori, et al., 2004; Hattori, 2004; Freund, 2000; Freund, et al., 2006; Parrot, 2006; Pulinets, et al., 2006; Pulinets, 2009; Eftaxias, et al., 2009; 2010, 2018).
These phenomena are detectable both at the laboratory and geological scale (Molchanov, et al., 1998, 2008; Hayakawa, et al., 1996;2013, 2019;2021; Eftaxias, et al., 2009; 2010, 2018)
Observations proved that when a material is strained, electromagnetic emissions in a wide frequency spectrum ranging from kHz to MHz are produced by opening cracks (Eftaxias, et al., 2009; 2010, 2018). On the large (geological) scale, intense MHz and kHz EM emissions precede earthquakes that: (i) occurred on land (or near the coastline), (ii) were large (magnitude 6 or larger), or (iii) were shallow. (Eftaxias, et al., 2009; 2010, 2018; Karamanos, et al., 2006). Importantly, the MHz radiation precedes the kHz at geophysical scale, as well. These emissions are constantly accompanied by ULF radiations (Eftaxias, et al., 2009; Kapiris, et al., 2004a;).
Our goal is to consider the possible mechanisms for the origination of electromagnetic radiation before an earthquake.
## 2 Discussion
### VLF/LF electromagnetic emission detected prior to the large earthquakes
The segment of the earth's crust where the focus of an incoming earthquake is to be formed belongs, from the very start of earthquake preparation, to a system that undergoes a specific type of oscillation: energy accumulates in the system, while at the same time, as a result of foreshocks, the main shock, and aftershocks, the accumulated energy is also released. In this view, the system is an oscillatory system.
The extreme diversity of oscillation systems and of their properties requires the identification of common features in various oscillation systems and their grouping into certain classes and types according to the most characteristic signs.
In the seismogenic area, the mass and elasticity elements (mechanical systems) and the capacity and inductance elements (electric systems) are uniformly and continuously spread over the whole volume of the system (Migulin et al., 1978). Moreover, in the earthquake preparation area, each smallest element has its own capacity and inductance because of piezo-electric, piezo-magnetic, electrochemical, and other effects. Therefore, the seismogenic zone can simultaneously be considered a distributed system. The seismogenic area can be determined by the Dobrovolsky formula (1) (Dobrovolsky et al., 1979):
\[R=10^{0.43M}\hskip 28.452756pt(1)\]
where \(R\) is the radius of the strain area in kilometers and \(M\) is the earthquake magnitude.
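As a minimal numerical sketch (our own evaluation of formula (1), not taken from the original paper), the strain radius grows rapidly with magnitude:

```python
# Dobrovolsky strain-area radius, R = 10**(0.43*M) km  (formula (1))
for M in (5, 6, 7, 8, 9):
    R = 10 ** (0.43 * M)
    print(f"M = {M}: R = {R:,.0f} km")   # ~141 km at M=5, ~7,400 km at M=9
```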
Since tectonic stress acts permanently in this system and changes the physical and chemical properties of the medium, the relative locations of the elements (or groups of elements) play a significant role from the point of view of the functioning of the system.
Usually, in distributed systems, it is impossible to isolate a single point, since each channel for the passage of energy is considered a pair of poles (connection points) (Migulin et al., 1978); therefore, the frequency radiated in the system will change in accordance with changes in the lengths of the combined fractures involved in the process of the formation of the fault.
It is precisely on these considerations that the model of the generation of electromagnetic radiation observed prior to the earthquake was based, in which the following formula for the length \(l\) of the earthquake fault is obtained:
\[l=k\,\frac{c}{\omega}\hskip 56.905512pt(2)\]
where \(\omega\) is the eigen-frequency of the electromagnetic emission, \(c\) is the speed of light, and \(k\) is a characteristic coefficient of the geological medium (approximately equal to 1).
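A minimal sketch of how formula (2) can be evaluated numerically is given below. This is our own illustration, under the assumptions that \(\omega\) is read as the cyclic emission frequency in Hz and that \(k\approx 1\); with these assumptions, the frequency bounds quoted in the text correspond to fault lengths from roughly ten metres up to several hundred kilometres:

```python
# Fault-length estimate from the observed EM emission frequency, l = k*c/omega (formula (2)),
# treating omega as the cyclic frequency in Hz and taking k ~ 1.
c = 3.0e8   # speed of light, m/s
k = 1.0

for f_khz in (23_830, 102, 0.378):          # frequency values quoted in the text
    l = k * c / (f_khz * 1e3)               # fault length in metres
    print(f"f = {f_khz} kHz -> l = {l / 1e3:.3g} km")
```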
In the scientific literature, electromagnetic radiation before an earthquake has recently been grouped as follows:
ULF (ultra-low frequency, **f<1Hz**); ULF, ELF (extremely low frequency, **1Hz<f<3kHz**);
VLF (very low frequency, **3kHz<f<30kHz**); LF (low frequency, **30kHz<f<300 kHz**) (Hayakawa et al., 2019). The question arises whether this radiation is earthbound or not during the earthquake preparation period.
To clarify this issue, let us consider separately the above-mentioned frequency ranges of electromagnetic radiation.
According to formula (2), frequencies in the range 23 830 kHz\(\geq\)f\(\geq\)0.378 kHz, corresponding to the fault lengths of earthquakes of magnitude 1\(\leq\)M\(\leq\)9, originate in the earthquake preparation process.
* ULF (ultra-low frequency, f<1Hz) is outside this range;
* ULF, ELF (extremely low frequency, 1Hz<f<3kHz) is partially included in this range, for earthquakes of magnitude 9.0 \(\geq\)M\(\geq\) 7.5, whose corresponding frequency range is 0.4 kHz \(<\) f \(<\) 3 kHz;
* VLF (very low frequency, **3kHz<f<30kHz**): electromagnetic radiation of this range corresponds to earthquakes of magnitude 7.5\(\geq\)M\(\geq\)5.8;
* LF (low frequency, **30kHz<f<300 kHz**): electromagnetic radiation of this range corresponds to earthquakes of magnitude 5.8\(\geq\)M\(\geq\)4.1.
Thus, the source of LF, VLF and ELF (0.4 kHz<f<3kHz) frequencies during the earthquake preparation period is the earth (Kachakhidze et. al, 2015,2019, 2022).
Electromagnetic radiation of high frequencies almost does not reach the earth's surface.
We focus on electromagnetic radiation in the 102 kHz - 0.377 kHz frequency range since earthquakes of magnitude 5\(\leq\)M\(\leq\)9 are noteworthy for seismically active regions and countries.
The main determining parameter of the magnitude of an earthquake is the length of the fault formed in the earthquake focus.
If any geophysical field, in addition to determining the epicenter and time of occurrence of the earthquake, can analytically describe the change in the length of the fault in the focus during the preparation of the earthquake, it is clear that only such a field can be considered as a precursor of the earthquake.
The other fields, which change abnormally during the earthquake preparation period, but cannot describe the change in the length of the fault, should be considered only as indicators.
Since it was found that the electromagnetic wave of VLF/LF frequency before the earthquake is formed by the complex geological process of the coalescence of cracks in the earthquake focus, and therefore, the frequency \(\omega\) corresponding to this wave is the parameter that analytically describes the changes of the fault length arisen in the focus during the earthquake preparation period, the electromagnetic radiation of the VLF/LF frequency should be considered as a precursor to an earthquake.
In addition to the electromagnetic radiation of VLF/LF frequencies, we are also interested in the anomalous perturbation of ULF electromagnetic radiation, which constantly accompanies the process of earthquake preparation.
### ULF electromagnetic emission detected prior to the large earthquakes.
It is proved, that dynamic processes in the earthquake preparation zones can produce current systems of different kinds (Molchanov et al.,1998; Kopytenko, et al., 2001) which can be local sources for electromagnetic waves at different frequencies, including ULF. ULF waves can propagate through the crust and reach the earth's surface, unlike high-frequency waves (Hattori, et al., 2004; Kopytenko, et al., 2006; Liu, et al., 2006; Varotsos, et al., 2011; Chen, et al., 2011).
Thus, in ground-based observations, we could expect some ULF signals of seismic origin observed in both geo-electric and geomagnetic fields (Kopytenko, et al., 2001; Uyeda, 2013).
Different attempts have been made to explain the generation of electro-telluric variations before an earthquake takes place in an earthquake preparation zone. Their disturbances in the preparation zone were considered a factor of such importance that in a number of works it was proposed to use telluric variations as a short-term precursor of strong earthquakes (Varotsos et.al., 2006, 2011; Lazarus, 1993;).
We have a different view on this issue.
As a result of the growth of tectonic stress, heterogeneity appears in earthquake preparation areas, but earthquake preparation takes place in a zone that is relatively weak from the point of view of solidity (Morozova, et al., 1999; Tada-nori Goto, et al., 2005; Kovtun, et al., 2009; Freund, 2000). Since tectonic stress essentially "works" only towards the formation of the main fault in the earthquake focus, in other parts of the seismogenic area it cannot cause the fracturing of rocks and their significant coalescence, that is, it can no longer create the conditions necessary for the occurrence of an earthquake. However, tectonic stress can cause perturbations of the geophysical fields in the seismogenic area.
One such field is the telluric current.
As a result of the growth of tectonic stress, heterogeneity appears in earthquake preparation areas (Freund, 2000); similarly to "Frenkel's generator", this segment of the earth's crust will acquire inductive polarization (Frenkel, 1949; Yoshino, 1991; Molchanov et al., 1998; Liperovsky et al., 2008).
Generally, polarization charge should be distributed over some surface, which should be limited by fault or should be formed along the faults (Yoshino, 1991).
Experimentally an important fact has been proved that at the formation of cracks in the earthquake preparation period, electric dipoles appear on their surface (Freund, et al. 2006; Eftaxias, et al.,2009, 2010, Eftaxias, et al., 2018).
The polarization charge takes part in two different processes:
**a)** If anywhere in the seismogenic area, microcracks coalesce in the form of any size rupture nucleus (including the main fault) and in the fault plane there are changes in the specific electrical resistance of the rocks, that is there are inclusions of high electric conductivity, which conditions the sharp increase of the electric conductivity, it is not excluded that the layer on which the polarization charge is distributed and the fault, like a double-wire conduction layer might be locked by vertical electric field and form an oscillating contour-like structure. This will happen if the segment of the earth's crust, on the surface of which the polarized charges are distributed, is about the size of the fault and is formed approximately along this fault. When the tectonic stress overcomes the limit of the geological strength of the medium it begins the rock integrity fracturing process (later it goes to fault formation avalanche process and ends with an earthquake) which accompanies by emissions of electromagnetic waves of VLF/LF frequency. The value at this time emitted frequency of the electromagnetic wave depends on the length of the fault.
In the case of small cracks, the electromagnetic radiation will be of the order of MHz, and in the case of cracks of the order of _km,_ it goes into kHz (Kachakhidze, et al., 2015, 2019, 2022).
**b)** In areas where tectonic stress is not able to form a crack, and because, in general, rock density increases with depth, in case of the same tectonic stress effect, more inhomogeneities appear in the upper rocks with lower density compared to the lower rocks, that is, in the upper rocks, more polarization charges are generated compared to the lower ones.
However, as stated above, since the polarization charge must be distributed over some surface (Yoshino, 1991), it is clear that each of these surfaces will be approximately equipotential.
In general, the work done during the movement of the charge \(q_{0}\) along the element \(\Delta l\) of an equipotential surface is equal to (Vepkhvadze, 1995):
\[\Delta A=q_{0}(\varphi_{1}\!-\!\varphi_{2})\quad(3)\]
This work can also be represented by means of the field strength:
\[\Delta A=q_{0}E\,\Delta l\,\cos\alpha\quad(4)\]
Since the displacement lies on an equipotential surface, \(\varphi_{1}=\varphi_{2}\), so that \(\Delta A=0\) by formula (3); combining this with formula (4):
\(q_{0}E\,\Delta l\,\cos\alpha=0\)
and since \(q_{0}\neq 0\), \(E\neq 0\), \(\Delta l\neq 0\), it follows that \(\cos\alpha=0\), i.e. \(\alpha=\frac{\pi}{2}\).
Since the field lines of any electrostatic field are perpendicular to the equipotential surfaces, and since, for the field of a system of charges, the field strengths add vectorially, it is obvious that the total electric field strength created by the polarization charges of these inhomogeneities will have a common direction.
It is known that telluric electric fields constantly change in direction and magnitude at any point (Kraev, 2007). At the same time, during a perturbation, the telluric field becomes linearly (or plane) polarized (Kraev, 2007).
Therefore, under conditions of increasing tectonic stress, we have to assume that the telluric field is mainly disturbed and polarized.
If we exclude the effects that external factors can cause, the changes in this field in a given area will uniquely depend on the changes in tectonic stresses.
In general, in specialized scientific works the magneto-telluric field is considered a field perturbed by local and regional factors (Kraev, 2007).
The telluric field perturbation and polarization during earthquake preparation are contributed by: a) stratospheric-electrical processes (ionospheric oscillations, auroras), b) boundary-electrical processes (filtration-electrical processes, convection currents in the lower layers of the atmosphere, lightning processes, etc.), c) lithospheric-electrical processes (contact voltages, thermoelectric and chemical-electrical processes (Kraev, 2007);
It is known that, owing to the contact between the solid and gaseous phases of the two media, the earth and the atmosphere, diffusion of electrons and ions and ion adsorption take place, which conditions the creation of a stable electric layer (dipole layer) at the contact. In this layer, the electric field, supported by factors conditioned by earthquake preparation, can be called an "additional" electric field and denoted \(E_{n}^{add}\) (see Eq. (5)) (Kraev, 2007) (Fig. 1).
In this case, as mentioned above, the electric potential at the boundary separating these two media suffers a discontinuity, which equals the contact "additional" electric field strength:
\[\varepsilon^{add}=\int_{1}^{2}E_{n}^{add}dn \tag{5}\]
where points 1 and 2 are located on the two sides of the contact surface. It is clear that this field discontinuity will manifest itself in all geophysical phenomena connected with the "additional" field.
Fig. 1. Perturbed Telluric current
Since the charges move in the direction of the field lines, it is obvious that a stress-induced electric current is generated in the rock at this time, caused by changes in the mobility of charged dislocations (Tzanis and Vallianatos, 2002; St-Laurent et al., 2006; Triantis, et al., 2008) and/or point defects (Freund, 2000).
Telluric current is practically a synonym for geoelectric potential difference, and preseismic telluric current signals are called seismic electric signals (SES) (Orihara et al., 2012).
Obviously, under these conditions the telluric current will be polarized and directed vertically, from the Earth towards the ionosphere. This fact can also be observed in nature (Pulinets, 2009).
As this current is associated with the displacement of charges, it releases heat (changing the orientation of the dipoles causes heat release) (Vepkhvadze, 1995), so the temperature in this area will increase, which is also confirmed by the experiment (Tramutoli, et al., 2005; Pulinets, et al., 2006; Saradjian, et al., 2011; Kundu et al., 2022).
The telluric current will generate a magnetic field by induction, which is known in the scientific literature too: as a result of the current changes in the Earth's crust the anomalous magnetic field variations start (which, according to Maxwell equations, should be accompanied by a strong SES activity) (Chapman, S. and Whitehead, 1922; Panayiotis et al., 2019).
Indeed, according to Maxwell's theory, in the case of plane electromagnetic waves, a change in the electric field directed along the axis OZ leads to the generation of a magnetic field directed along the axis OY.
Since in our case the telluric current is directed from the Earth to the ionosphere along OZ, and it is also plane (or linearly) polarized, for the magnetic field induced by this field we have:
\[\varepsilon\varepsilon_{0}\,\frac{\partial\varepsilon_{z}}{\partial t}=\frac{\partial h_{y}}{\partial x}\qquad(6)\]
Thus, during the preparation, occurrence, and subsequent period of the earthquake, including the attenuation of aftershocks, changes in the telluric current are directly related to changes in tectonic stress, so this field should be considered only as an indicator of an earthquake.
## 3 Summary
Accumulation of tectonic stress in any area of a seismically active region (country) causes:
On the one hand:
- Formation of a fault in the focus of the incoming earthquake.
- This process is accompanied by the generation of VLF/LF electromagnetic waves, based on the data of which it is possible to determine simultaneously the magnitude, location, and time of the expected earthquake;
On the other hand:
- In the area of accumulation of tectonic stress, a vertical telluric current of the earth-ionosphere direction is generated;
- The polarized telluric current in the earth-atmosphere border suffers discontinuation;
- Polarized telluric current, due to the polarization of the eastern component of the Earth's magnetic field, causes the generation of a polarized magnetic field in the earthquake preparation area;
- The polarized magnetic field uninterruptedly passes through the earth-atmosphere border;
- The earth's surface should continuously maintain a positive potential in the earthquake preparation area for a rather long period.
Thus:
- Tectonic stress causes and fully governs ULF electromagnetic field anomalies in the earthquake preparation area, which in turn conditions the continuous interconnection of the lithosphere-atmosphere-ionosphere (LAIC) system.
- ULF electromagnetic radiation is only an indicator of an earthquake since anomalous changes of this field are merely related to changes in tectonic stress, and not to the process of fault formation in the focus of the incoming earthquake.
|
2305.08489 | Extensional Taylor Expansion | We introduce a calculus of extensional resource terms. These are resource
terms \`a la Ehrhard-Regnier, but in infinitely eta-long form. The calculus
still retains a finite syntax and dynamics: in particular, we prove strong
confluence and normalization.
Then we define an extensional version of Taylor expansion, mapping ordinary
lambda-terms to (possibly infinite) linear combinations of extensional resource
terms: like in the ordinary case, the dynamics of our resource calculus allows
us to simulate the beta-reduction of lambda-terms; the extensional nature of
this expansion shows in the fact that we are also able to simulate
eta-reduction.
In a sense, extensional resource terms contain a language of finite
approximants of Nakajima trees, much like ordinary resource terms can be seen
as a richer version of finite B\"ohm trees. We show that the equivalence
induced on lambda-terms by the normalization of extensional Taylor-expansion is
nothing but H*, the greatest consistent sensible lambda-theory - which is also
the theory induced by Nakajima trees. This characterization provides a new,
simple way to exhibit models of H*: it becomes sufficient to model the
extensional resource calculus and its dynamics.
The extensional resource calculus moreover allows us to recover, in an
untyped setting, a connection between Taylor expansion and game semantics that
was previously limited to the typed setting. Indeed, simply typed, eta-long,
beta-normal resource terms are known to be in bijective correspondence with
plays in the sense of Hyland-Ong game semantics, up to Melli\`es' homotopy
equivalence. Extensional resource terms are the appropriate counterpart of
eta-long resource terms in an untyped setting: we spell out the bijection
between normal extensional resource terms and isomorphism classes of
augmentations (a canonical presentation of plays up to homotopy) in the
universal arena. | Lison Blondeau-Patissier, Pierre Clairambault, Lionel Vaux Auclair | 2023-05-15T09:45:30Z | http://arxiv.org/abs/2305.08489v3 | # Extensional Taylor Expansion
###### Abstract
We introduce a calculus of extensional resource terms. These are resource terms _a la_ Ehrhard-Regnier [1], but in infinite \(\eta\)-long form, while retaining a finite syntax and dynamics: in particular, we prove strong confluence and normalization.
Then we define an extensional version of Taylor expansion, mapping ordinary \(\lambda\)-terms to sets (or infinite linear combinations) of extensional resource terms: just like for ordinary Taylor expansion [20], the dynamics of our resource calculus allows to simulate the \(\beta\)-reduction of \(\lambda\)-terms; the extensional nature of expansion shows in that we are also able to simulate \(\eta\)-reduction.
In a sense, extensional resource terms form a language of (non-necessarily normal) finite approximants of Nakajima trees [21] (see also [1, Exercise 19.4.4]), much like ordinary resource terms are approximants of Bohm-trees. Indeed, we show that the equivalence induced on \(\lambda\)-terms by the normalization of extensional Taylor-expansion is nothing but \(\mathcal{H}^{*}\), the greatest consistent sensible \(\lambda\)-theory.
Taylor expansion has profoundly renewed the approximation theory of the \(\lambda\)-calculus by providing a quantitative alternative to order-based approximation techniques, such as Scott continuity and Bohm trees [2]. Extensional Taylor expansion enjoys similar advantages: e.g., to exhibit models of \(\mathcal{H}^{*}\), it is now sufficient to provide a model of the extensional resource calculus. We apply this strategy to give a new, elementary proof of a result by Manzonetto [14]: \(\mathcal{H}^{*}\) is the \(\lambda\)-theory induced by a well-chosen reflexive object in the relational model of the \(\lambda\)-calculus [1].
## 1 Preliminaries
Tuples and bags.If \(X\) is a set, we write \(X^{*}=\bigcup_{n\in\mathbb{N}}X^{n}\) for the set of finite lists, or tuples, of elements of \(X\), ranged over by \(\vec{a},\vec{b}\), _etc._ We write \(\langle a_{1},\dots,a_{n}\rangle=\langle a_{i}\rangle_{1\leq i\leq n}\) to list the elements of a tuple, \(\varepsilon\) for the empty tuple,
\(|\vec{a}|\) for the length of \(\vec{a}\), and denote concatenation simply by juxtaposition, e.g., \(\vec{a}\,\vec{b}\). If \(a\in X\) and \(\vec{b}\) is a tuple, we write \(a:\vec{b}\) for the tuple obtained by pushing \(a\) at the head of \(\vec{b}\): this **cons** operation generates \(X^{*}\) inductively from \(\varepsilon\).
We write \(\mathfrak{M}_{\mathrm{f}}(X)\) for the set of finite multisets of elements of \(X\), which we call **bags**, ranged over by \(\bar{a},\bar{b}\), _etc._ We write \([a_{1},\ldots,a_{n}]\) for the bag \(\bar{a}\) defined by a list \(\vec{a}=\langle a_{1},\ldots,a_{n}\rangle\) of elements: we say \(\vec{a}\) is an **enumeration** of \(\bar{a}\) in this case. We write \([\,]\) for the empty bag, and use \(*\) for bag concatenation. We also write \(|\bar{a}|\) for the length of \(\bar{a}\): \(|\bar{a}|\) is the length of any enumeration of \(\bar{a}\). We may abuse notation and use a tuple \(\vec{a}\) or a bag \(\bar{a}\) for the set of its elements: e.g., we may write \(a\in\bar{a}\).
We shall often need to _partition_ bags, which requires some care. For \(k\in\mathbb{N}\), a \(k\)**-partitioning** of \(\bar{a}\) is a function \(p:\{1,\ldots,|\bar{a}|\}\to\{1,\ldots,k\}\): we write \(p:\bar{a}\lhd k\). Given an enumeration \(\langle a_{1},\ldots,a_{n}\rangle\) of \(\bar{a}\) and \(J=\{j_{1},\ldots,j_{l}\}\subseteq\{1,\ldots,n\}\) with \(\#J=l\), we write \(\bar{a}\upharpoonright J\coloneqq[a_{j_{1}},\ldots,a_{j_{l}}]\) for the **restriction** of \(\bar{a}\) to \(J\). The \(k\)**-partition** of \(\bar{a}\) associated with \(p:\bar{a}\lhd k\) is then the tuple \(\langle\bar{a}\upharpoonright_{p}1,\ldots,\bar{a}\upharpoonright_{p}k\rangle\), where we set \(\bar{a}\upharpoonright_{p}i\coloneqq\bar{a}\upharpoonright\{j\mid p(j)=i\}\) for \(1\leq i\leq k\), so that
\[\bar{a}=\bar{a}\upharpoonright_{p}1*\cdots*\bar{a}\upharpoonright_{p}k\;.\]
There is a (temporary) abuse of notation here, as the definitions of restrictions and \(k\)-partitions depend on the chosen enumeration of \(\bar{a}\). But having fixed \(\bar{a}\) and \(k\), neither the set of \(k\)-partitions of \(\bar{a}\), nor the _number_ of partitionings \(p\) of \(\bar{a}\) into a given \(\langle\bar{a}_{1},\ldots,\bar{a}_{n}\rangle\), depend on the enumeration. So for any function \(f:\mathfrak{M}_{\mathrm{f}}(X)^{k}\to\mathcal{M}\) (for \(\mathcal{M}\) a commutative monoid, noted additively), the value of
\[\sum_{\bar{a}\lhd\bar{a}_{1}*\cdots*\bar{a}_{k}}f(\bar{a}_{1},\ldots,\bar{a}_{ k})\coloneqq\sum_{p:\bar{a}\lhd k}f(\bar{a}\upharpoonright_{p}1,\ldots,\bar{a} \upharpoonright_{p}k)\]
is independent of the enumeration. When indexing a sum with \(\bar{a}\lhd\bar{a}_{1}*\cdots*\bar{a}_{k}\) we thus mean to sum over all partitionings \(p:\bar{a}\lhd k\), \(\bar{a}_{i}\) being shorthand for \(\bar{a}\upharpoonright_{p}i\) in the summand, and the result being independent of the choice of an enumeration. This construction is easily proved to be associative, in the sense that, e.g.:
\[\sum_{\bar{a}\lhd\bar{a}_{1}*\bar{a}^{\prime}}\sum_{\bar{a}^{\prime}\lhd\bar{ a}_{2}*\bar{a}_{3}}f(\bar{a}_{1},\bar{a}_{2},\bar{a}_{3})=\sum_{\bar{a}\lhd \bar{a}_{1}*\bar{a}_{2}*\bar{a}_{3}}f(\bar{a}_{1},\bar{a}_{2},\bar{a}_{3})\;.\]
The **isotropy degree**\(\mathsf{d}(\bar{a})\) of a bag \(\bar{a}\) of length \(k\) is the cardinality of the stabilizer of any enumeration \(\langle a_{1},\ldots,a_{k}\rangle\) of \(\bar{a}\) under the action of the group \(\mathbb{S}_{k}\) of permutations of \(\{1,\ldots,k\}\): formally,
\[\mathsf{d}(\bar{a})\coloneqq\#\{\sigma\in\mathbb{S}_{k}\mid\langle a_{1}, \ldots,a_{k}\rangle=\langle a_{\sigma(1)},\ldots,a_{\sigma(k)}\rangle\}\;.\]
The following result is a routine exercise in combinatorics:
**Fact 1.1**.: _If \(\bar{a}=\bar{a}_{1}*\cdots*\bar{a}_{n}\) then_
\[\mathsf{d}(\bar{a})=\#\{p:\bar{a}\lhd n\mid\bar{a}\upharpoonright_{p}i=\bar{a} _{i}\text{ for }1\leq i\leq n\}\times\prod_{i=1}^{n}\mathsf{d}(\bar{a}_{i})\;.\]
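For instance, take \(\bar{a}=[a,a,b]\) with \(a\neq b\): the stabilizer of the enumeration \(\langle a,a,b\rangle\) in \(\mathbb{S}_{3}\) consists of the identity and the transposition of the first two positions, so \(\mathsf{d}(\bar{a})=2\). For the decomposition \(\bar{a}=[a]*[a,b]\), there are exactly \(2\) partitionings \(p:\bar{a}\lhd 2\) with \(\bar{a}\upharpoonright_{p}1=[a]\) and \(\bar{a}\upharpoonright_{p}2=[a,b]\), while \(\mathsf{d}([a])=\mathsf{d}([a,b])=1\), so the right-hand side is \(2\times 1\times 1=2\); for \(\bar{a}=[a,a]*[b]\) there is a single such partitioning, with \(\mathsf{d}([a,a])=2\) and \(\mathsf{d}([b])=1\), so the right-hand side is again \(1\times 2\times 1=2\), as predicted by Fact 1.1.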
Sequences of bags and streams. We will also use possibly infinite sequences of bags, with a finiteness constraint: only finitely many bags may be non-empty. We write \(\mathcal{S}_{f}(X)\) for the set \(\mathfrak{M}_{\mathrm{f}}(X)^{*}\) of tuples of bags, and we write \(\mathcal{S}(X)\) for the subset of \(\mathfrak{M}_{\mathrm{f}}(X)^{\mathbb{N}}\) such that \(\langle\bar{a}_{i}\rangle_{i\in\mathbb{N}}\in\mathcal{S}(X)\) iff \(\{i\in\mathbb{N}\mid|\bar{a}_{i}|>0\}\) is finite. We denote elements of \(\mathcal{S}_{f}(X)\) or \(\mathcal{S}(X)\) as \(\vec{a},\vec{b},\)_etc._ just like for plain tuples, and we reserve the name **stream** for the elements of \(\mathcal{S}(X)\).
We write \(\iota\coloneqq\langle[\,]\rangle_{i\in\mathbb{N}}\). Note that streams are inductively generated from \(\iota\), by the **cons** operation defined by
\[(\bar{a}::\vec{b})_{i}\coloneqq\begin{cases}\bar{a}&\text{if }i=0\\ \bar{b}_{j}&\text{if }i=j+1\end{cases}\qquad(\text{writing }\vec{b}=\langle\bar{b}_{j} \rangle_{j\in\mathbb{N}})\]
subject to the identity \([\,]\::\iota=\iota\). We can thus reason inductively on streams, treating \(\iota\) as the base case, and considering \(\vec{b}\) as a "strict sub-stream" of \(\bar{a}::\vec{b}\) when \(\bar{a}::\vec{b}\neq\iota\).
A \(k\)**-partitioning**\(p:\vec{a}\lhd k\) of \(\vec{a}=\langle\bar{a}_{1},\ldots,\bar{a}_{n}\rangle\in\mathcal{S}_{f}(X)\) is a tuple \(p=\langle p_{1},\ldots,p_{n}\rangle\) of \(k\)-partitionings \(p_{i}:\bar{a}_{i}\lhd k\). This defines a **partition**\(\langle\vec{a}\upharpoonright_{p}1,\ldots,\vec{a}\upharpoonright_{p}k)\), component-wise: each \(\vec{a}\upharpoonright_{p}i\) is the sequence \(\langle\bar{a}_{1}\upharpoonright_{p_{1}}i,\ldots,\bar{a}_{n}\upharpoonright_{p_ {n}}i)\). We obtain \(\vec{a}=\vec{a}\upharpoonright_{p}1*\cdots*\vec{a}\upharpoonright_{p}k\), where we apply the concatenation of bags component-wise, to sequences all of the same length. And, just as before, we write
\[\sum_{\vec{a}\lhd\vec{a}_{1}*\cdots*\vec{a}_{k}}f(\vec{a}_{1},\ldots,\vec{a} _{k})\coloneqq\sum_{p:\vec{a}\lhd k}f(\vec{a}\upharpoonright_{p}1,\ldots,\vec{ a}\upharpoonright_{p}k)\,\]
the result of the sum being independent from the enumeration of the bags of \(\vec{a}\).
Similarly a \(k\)**-partitioning**\(p:\vec{a}\lhd k\) of a stream \(\vec{a}=\langle\bar{a}_{i}\rangle_{i\in\mathbb{N}}\) is a sequence \(p=\langle p_{i}\rangle_{i\in\mathbb{N}}\) of \(k\)-partitionings \(p_{i}:\bar{a}_{i}\lhd k\): note that a stream \(\vec{a}\) has only finitely many \(k\)-partitionings, because \(\bar{a}_{i}\) is empty for sufficiently large values of \(i\). A \(k\)-partitioning of a stream \(\vec{a}\) defines a **partition**\(\langle\vec{a}\upharpoonright_{p}1,\ldots,\vec{a}\upharpoonright_{p}k)\), component-wise: each \(\vec{a}\upharpoonright_{p}j\) is the sequence \(\langle\bar{a}_{i}\upharpoonright_{p_{i}}j)_{i\in\mathbb{N}}\). We obtain \(\vec{a}=\vec{a}\upharpoonright_{p}1*\cdots*\vec{a}\upharpoonright_{p}k\), where we apply the concatenation of bags component-wise. And we write
\[\sum_{\vec{a}\lhd\vec{a}_{1}*\cdots*\vec{a}_{k}}f(\vec{a}_{1},\ldots,\vec{a} _{k})\coloneqq\sum_{p:\vec{a}\lhd k}f(\vec{a}\upharpoonright_{p}1,\ldots,\vec{ a}\upharpoonright_{p}k)\,\]
which is always a finite sum, whose result is independent from the enumeration of the bags of \(\vec{a}\).
## 2 The extensional resource calculus
### Syntax of the calculus
We fix an infinite countable set \(\mathcal{V}\) of **value variables** (or, simply, **variables**), which we denote by letters \(x,y,z\). We also fix an infinite countable set \(\mathcal{V}_{\mathrm{s}}\) of **sequence variables**, which we denote by letters \(\vec{x},\vec{y},\vec{z}\), and with each sequence variable \(\vec{x}\), we associate a sequence \(\langle\vec{x}[i]\rangle_{i\in\mathbb{N}}\) of value variables, in such a way
that for each \(x\in{\cal V}\), there exists a unique pair \(\langle\vec{x},i\rangle\) such that \(x=\vec{x}[i]\): sequence variables partition value variables. We will in general identify \(\vec{x}\) with the corresponding sequence of value variables. We may also abuse notation and use \(\vec{x}\) for its image set: for instance we may write \(x\in\vec{x}\) instead of \(x\in\{\vec{x}[i]\mid i\in\mathbb{N}\}\). The use of sequence variables will allow us to manage infinite sequences of \(\lambda\)-abstractions, without needing to resort to De Bruijn indices or other techniques for dealing with \(\alpha\)-equivalence.
Terms. We define **value terms**\((m,n,p\in\Delta_{\rm v})\), **base terms**\((a,b,c\in\Delta_{\rm b})\), **bag terms**\((\bar{m},\bar{n},\bar{p}\in\Delta_{\rm!})\) and **stream terms**\((\vec{m},\vec{n},\vec{p}\in\Delta_{\rm s})\), inductively by the following rules:
\[\frac{\vec{x}\in\mathcal{V}_{\mathrm{s}}\qquad a\in\Delta_{\mathrm{b}}}{\lambda\vec{x}.a\in\Delta_{\mathrm{v}}}\ {}^{(\lambda)}\qquad\frac{m_{1}\in\Delta_{\mathrm{v}}\quad\cdots\quad m_{k}\in\Delta_{\mathrm{v}}}{[m_{1},\ldots,m_{k}]\in\Delta_{\mathrm{!}}}\ {}^{(!)}\]
\[\frac{m\in\Delta_{\mathrm{v}}\qquad\vec{n}\in\Delta_{\mathrm{s}}}{m\,\vec{n}\in\Delta_{\mathrm{b}}}\ {}^{(@)}\qquad\frac{x\in\mathcal{V}\qquad\vec{n}\in\Delta_{\mathrm{s}}}{x\,\vec{n}\in\Delta_{\mathrm{b}}}\ {}^{(\mathcal{V})}\,.\]
A **head expression**\((e,f,g\in\Delta_{\rm h})\) is a value term or variable, so that a base term is necessarily of the form \(e\,\vec{m}\) where \(e\) is a head expression and \(\vec{m}\) is a stream term. A **resource term** (denoted by \(u,v,w\)) is any of a value term, base term, stream term or bag term; and a **resource expression** (denoted by \(q,r,s\)) is any of a value variable or a resource term.
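As a simple illustration of these rules (not a term singled out in the original development): for any sequence variable \(\vec{x}\), applying the value variable \(\vec{x}[0]\) to the empty stream \(\iota\in\Delta_{\mathrm{s}}\) yields the base term \(\vec{x}[0]\,\iota\in\Delta_{\mathrm{b}}\) by rule \((\mathcal{V})\), and \(\lambda\vec{x}.(\vec{x}[0]\,\iota)\in\Delta_{\mathrm{v}}\) then follows by rule \((\lambda)\).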
The actual objects of the calculus are value terms, as these will form the target of Taylor Expansion. Nonetheless, base terms, bag terms and stream terms also constitute meaningful computational entities, as will be clear in the next sections; and these other syntactic categories will also play a role in the relational semantics. On the other hand, plain variables _should not_ be considered as entities of the calculus by themselves: in the context of extensional Taylor expansion, a variable must always come with the stream it is applied to; and semantically, a single variable stands for a projection morphism, which does not have a finite semantics. We consider head expressions only because it sometimes allows us to treat uniformly both forms of base terms; and we consider resource expressions because, when we reason inductively on terms, it is often simpler to have variables as a base case.
We may write \(\Delta_{\rm t}\) (resp. \(\Delta_{\rm e}\)) for any of the four sets \(\Delta_{\rm v}\), \(\Delta_{\rm b}\), \(\Delta_{\rm!}\), or \(\Delta_{\rm s}\) (resp. \(\Delta_{\rm h}\), \(\Delta_{\rm b}\), \(\Delta_{\rm!}\), or \(\Delta_{\rm s}\)) -- rather than for the union of these sets: this will be especially relevant when we consider, e.g., sums of resource terms, which we always implicitly restrict to sums of terms in a given syntactic category.
The set of **free variables** of a resource expression is defined as follows:
\[\mathcal{V}(x)\coloneqq\{x\} \mathcal{V}(\lambda\vec{x}.a)\coloneqq\mathcal{V}(a)\setminus\vec{ x} \mathcal{V}([m_{1},\ldots,m_{k}])\coloneqq\bigcup_{1\leq i\leq k}\mathcal{V}(m_{i})\] \[\mathcal{V}(\iota)\coloneqq\emptyset \mathcal{V}(\bar{m}::\vec{n})\coloneqq\mathcal{V}(\bar{m})\cup \mathcal{V}(\vec{n}) \mathcal{V}(e\,\vec{m})\coloneqq\mathcal{V}(e)\cup\mathcal{V}(\vec{m})\]
which is always a finite set. Then we define \(\mathcal{V}_{\mathrm{s}}(u)\coloneqq\bigcup_{x\in\mathcal{V}(u)}\mathcal{V}_{ \mathrm{s}}(x)\) with \(\mathcal{V}_{\mathrm{s}}(\vec{x}[i])\coloneqq\vec{x}\), which is also finite. We can thus define \(\alpha\)-equivalence as usual, despite the fact that \(\lambda\)-abstractions bind infinite sequences of variables: in particular, given a value term \(m\) and a finite set \(V\) of sequence variables, we can always assume, up to \(\alpha\)-equivalence, that \(m\) is of the form \(\lambda\vec{y}.a\) where \(\vec{y}\) contains no variable \(\vec{x}[i]\) with \(\vec{x}\in V\).
In addition to \(\alpha\)-equivalence, we consider resource expressions up to permutations of elements in a bag \([m_{1},\ldots,m_{k}]\), and up to the identity \([\,]\mathrel{::}\iota=\iota\), so that \(\Delta_{\mathrm{!}}\) is identified with \(\mathfrak{M}_{\mathrm{f}}(\Delta_{\mathrm{v}})\) and \(\Delta_{\mathrm{s}}\) is identified with \(\mathcal{S}(\Delta_{\mathrm{v}})\).
We write \(q\{f/x\}\) for the ordinarily defined, capture avoiding **substitution** of a head expression \(f\) for a value variable \(x\) in any resource expression \(q\): note that this preserves the syntactic category of \(q\) in the sense that if \(q\in\Delta_{\mathrm{v}}\) (resp. \(\Delta_{\mathrm{h}}\), \(\Delta_{\mathrm{!}}\), \(\Delta_{\mathrm{b}}\), \(\Delta_{\mathrm{s}}\)) then \(q\{f/x\}\in\Delta_{\mathrm{v}}\) (resp. \(\Delta_{\mathrm{h}}\), \(\Delta_{\mathrm{!}}\), \(\Delta_{\mathrm{b}}\), \(\Delta_{\mathrm{s}}\)).
We define the **size**\(\|q\|\) of a resource expression \(q\) inductively as follows:
\[\|x\|\coloneqq 1 \|\lambda\vec{x}.a\|\coloneqq 1+\|a\|\qquad\|e\,\vec{m}\| \coloneqq 1+\|e\|+\|\vec{m}\|\] \[\|[m_{1},\ldots,m_{k}]\|\coloneqq \sum_{i=1}^{k}\|m_{i}\|\qquad\|\iota\|\coloneqq 0 \|\bar{m}::\vec{n}\|\coloneqq\|\bar{m}\|+\|\vec{n}\|\;.\]
In particular, \(\|\vec{m}\|=\sum_{i\in\mathbb{N}}\|\bar{m}_{i}\|\) for any stream term \(\vec{m}=\langle\bar{m}_{i}\rangle_{i\in\mathbb{N}}\). In short, \(\|q\|\) is nothing but the number of abstractions, applications and variable occurrences in \(q\). In particular, \(\|e\|\geq 1\) for any head expression \(e\), \(\|a\|\geq 2\) for any base term \(a\), \(\|m\|\geq 3\) for any value term \(m\), and \(\|\bar{m}\|\geq 3|\bar{m}|\) for any bag term \(\bar{m}\).
Sums of Terms.As in the ordinary resource calculus, the reduction of resource terms produces sums: a value (resp. base, bag, stream) term will reduce to a finite sum of value (resp. base, bag, stream) terms.
If \(X\) is a set, we write \(\Sigma X\) for the set of finite formal sums on \(X\) - those may be again finite multisets, but we adopt a distinct notation if only to impose a distinction with bag terms. Given a sum \(A=\sum_{i\in I}a_{i}\), we write \(\mathrm{supp}(A)\coloneqq\{a_{i}\mid i\in I\}\) for its support set. We may abuse notation and write \(a\in A\) instead of \(a\in\mathrm{supp}(A)\).
We call **value sums** (resp. **base sums**, **bag sums**, **stream sums**) the elements of \(\Sigma\Delta_{\mathrm{v}}\) (resp. \(\Sigma\Delta_{\mathrm{b}}\), \(\Sigma\Delta_{\mathrm{!}}\), \(\Sigma\Delta_{\mathrm{s}}\)), which we denote with capital letters \(M,N,P\) (resp. \(A,B,C\); \(\bar{M},\bar{N},\bar{P}\); \(\widetilde{M},\bar{N},\vec{P}\)). As announced we may write \(\Sigma\Delta_{\mathrm{t}}\) for any of \(\Sigma\Delta_{\mathrm{v}}\), \(\Sigma\Delta_{\mathrm{b}}\), \(\Sigma\Delta_{\mathrm{!}}\), or \(\Sigma\Delta_{\mathrm{s}}\); and we may call **term sum** any value sum, base sum, or stream sum, which we then denote by \(U,V,W\).
We also call **head sum** (resp. **expression sum**) any of a value sum (resp. term sum) or of a value variable, which we denote by \(E,F,G\) (resp. by \(Q,R,S\)).
We abuse notation and write \(\Sigma\Delta_{\mathrm{h}}\) for \(\mathcal{V}\cup\Sigma\Delta_{\mathrm{v}}\), and then write \(\Sigma\Delta_{\mathrm{e}}\) for any of \(\Sigma\Delta_{\mathrm{h}}\), \(\Sigma\Delta_{\mathrm{b}}\), \(\Sigma\Delta_{\mathrm{i}}\), or \(\Sigma\Delta_{\mathrm{s}}\). Again, we introduce head sums and expressions sums only as technical devices allowing us simplify some definitions or proofs. Note that we do not need to consider sums of head expressions mixing value terms and variables.
We then extend term formers and operations to all syntactic categories by linearity so that:
\[\lambda\vec{x}.\big{(}\sum_{i\in I}a_{i}\big{)}\coloneqq\sum_{i\in I}\lambda \vec{x}.a_{i}\qquad\big{(}\sum_{i\in I}m_{i}\big{)}\,\vec{N}\coloneqq\sum_{i \in I}m_{i}\,\vec{N}\qquad e\,\big{(}\sum_{j\in J}\vec{n}_{j}\big{)}\coloneqq \sum_{j\in J}e\,\vec{n}_{j}\]
\[\big{[}\sum_{i\in I}m_{i}\big{]}\coloneqq\sum_{i\in I}[m_{i}]\qquad\big{(} \sum_{i\in I}\bar{m}_{i}\big{)}*\big{(}\sum_{j\in J}\bar{n}_{j}\big{)}\coloneqq \sum_{(i,j)\in I\times J}\bar{m}_{i}*\bar{n}_{j}\]
\[\big{(}\sum_{i\in I}\bar{m}_{i}\big{)}::\big{(}\sum_{j\in J}\vec{n}_{j}\big{)}\coloneqq\sum_{(i,j)\in I\times J}\bar{m}_{i}::\vec{n}_{j}\]
where \(I\) and \(J\) are finite sets.
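For instance, \(([m]+[m^{\prime}])*[n]=[m,n]+[m^{\prime},n]\) and \([m+m^{\prime}]=[m]+[m^{\prime}]\): bag formation is linear rather than polynomial, so in particular the sum \([m+m^{\prime}]\) does not contain the bag \([m,m^{\prime}]\).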
### Small-step dynamics
Resource substitution.We now define the resource substitution \(q[\bar{n}/x]\) of a bag \(\bar{n}=[n_{1},\ldots,n_{k}]\) for a variable \(x\) in a resource expression \(q\). As for the ordinary resource calculus, the definition amounts to enumerate the occurrences \(x_{1},\ldots,x_{l}\) of \(x\) in \(q\), and then set:
\[q[\bar{n}/x]\coloneqq\begin{cases}\sum_{\sigma\in\mathbb{S}_{k}}q\{n_{\sigma(1 )}/x_{1},\ldots,n_{\sigma(k)}/x_{k}\}&\text{if $k=l$}\\ 0&\text{otherwise}\end{cases}\]
where \(\mathbb{S}_{k}\) is the set of all permutations of \(\{1,\ldots,k\}\) and \(q\{m_{1}/x_{1},\ldots,m_{k}/x_{k}\}\) denotes the simultaneous capture avoiding substitution of value terms \(m_{i}\) for occurrences \(x_{i}\).
Note that, although it is intuitively clear, this definition relies on a notion of occurrence that is not well defined because the order of elements in a bag is not fixed. To be carried out formally, one should introduce a rigid variant of the calculus, and then show that the result does not depend on the choice of a rigid representative. A more annoying issue is that this global definition does not follow the inductive structure of expressions. We will prefer the following presentation:
**Definition 2.1**.: _We define the **resource substitution**\(q[\bar{n}/x]\) of a bag term \(\bar{n}\)
_for a value variable \(x\) in a resource expression \(q\) by induction on \(q\) as follows:_
\[y[\bar{n}/x] \coloneqq\begin{cases}n&\text{if $x=y$ and $\bar{n}=[n]$}\\ y&\text{if $x\neq y$ and $\bar{n}=[\,]$}\\ 0&\text{otherwise}\end{cases}\] \[(\lambda\vec{y}.a)[\bar{n}/x] \coloneqq\lambda\vec{y}.a[\bar{n}/x]\] \[(e\,\vec{m})[\bar{n}/x] \coloneqq\sum_{\bar{n}\lhd\bar{n}_{1}*\bar{n}_{2}}e[\bar{n}_{1}/x]\,\vec{m}[\bar{n}_{2}/x]\] \[[m_{1},\dots,m_{k}][\bar{n}/x] \coloneqq\sum_{\bar{n}\lhd\bar{n}_{1}*\cdots*\bar{n}_{k}}[m_{1}[\bar{n}_{1}/x],\dots,m_{k}[\bar{n}_{k}/x]]\] \[\iota[\bar{n}/x] \coloneqq\begin{cases}\iota&\text{if $\bar{n}=[\,]$}\\ 0&\text{otherwise}\end{cases}\] \[(\bar{m}::\vec{p})[\bar{n}/x] \coloneqq\sum_{\bar{n}\lhd\bar{n}_{1}*\bar{n}_{2}}\bar{m}[\bar{n}_{1}/x]::\vec{p}[\bar{n}_{2}/x]\qquad\text{if $\bar{m}::\vec{p}\neq\iota$}\]
_where, in the abstraction case, \(\vec{y}\) is chosen so that \(x\not\in\vec{y}\) and \(\vec{y}\cap\mathcal{V}(\bar{n})=\emptyset\)._
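For instance, take \(q\coloneqq x\,([\lambda\vec{z}.(x\,\iota)]::\iota)\), so that \(|q|_{x}=2\), and \(\bar{n}\coloneqq[m_{1},m_{2}]\) where \(m_{1}\) and \(m_{2}\) are arbitrary value terms with \(x\not\in\mathcal{V}(m_{1})\cup\mathcal{V}(m_{2})\). Unfolding the definition (or, equivalently, enumerating the two permutations in the global description above) yields
\[q[[m_{1},m_{2}]/x]=m_{1}\,([\lambda\vec{z}.(m_{2}\,\iota)]::\iota)+m_{2}\,([\lambda\vec{z}.(m_{1}\,\iota)]::\iota)\]
whereas \(q[[m_{1}]/x]=q[[\,]/x]=0\), the length of the substituted bag failing to match the number of occurrences of \(x\).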
Observe that if \(q\) is a variable (resp. value term, base term, bag term, stream term) then \(q[\bar{n}/x]\) is a head sum (resp. value sum, base sum, bag sum, stream sum). So we may write \(q[\bar{n}/x]\in\Sigma\Delta_{\mathrm{e}}\) (resp. \(u[\bar{n}/x]\in\Sigma\Delta_{\mathrm{t}}\)) if \(q\in\Delta_{\mathrm{e}}\) (resp. \(u\in\Delta_{\mathrm{t}}\)) keeping implicit the fact that the underlying syntactic category is the same.
Moreover note that the distinction between empty and non-empty streams is only made so that the definition is inductive and non-ambiguous. It is nonetheless obvious that:
\[\sum_{\bar{n}\lhd\bar{n}_{1}*\bar{n}_{2}}\,[\,][\bar{n}_{1}/x]::\iota[\bar{n}_{2}/x]=\begin{cases}[\,]::\iota=\iota&\text{if $\bar{n}=[\,]$}\\ 0&\text{otherwise}\end{cases}\]
so that the condition \(\bar{m}::\vec{p}\neq\iota\) can be ignored in the last case of the definition.
More generally, if \(\vec{m}=\bar{m}_{1}::\dots::\bar{m}_{k}::\iota\), then
\[\vec{m}[\bar{n}/x]=\sum_{\bar{n}\lhd\bar{n}_{1}*\cdots*\bar{n}_{k}}\bar{m}_{1}[\bar{n}_{1}/x]::\dots::\bar{m}_{k}[\bar{n}_{k}/x]::\iota\]
and, equivalently,
\[\langle\bar{m}_{i}\rangle_{i\in\mathbb{N}}[\bar{n}/x]=\sum_{p:\bar{n}\lhd\mathbb{N}}\langle\bar{m}_{i}[\bar{n}\restriction_{p}i/x]\rangle_{i\in\mathbb{N}}\]
where we generalize \(k\)-partitionings to \(\mathbb{N}\)-partitionings in the obvious way. Also, one can check that:
\[q[[\,]/x]=\begin{cases}0&\text{if $x\in\mathcal{V}(q)$}\\ q&\text{otherwise}\end{cases}\,.\]
Contrary to what happens with ordinary substitution in the \(\lambda\)-calculus, the combinatorics of resource substitution is very regular. Indeed, except for the occurrences of the variable substituted for, nothing is erased or discarded from the substituted bag nor from the expression in which the substitution takes place. In particular, the size of the terms produced by a substitution is determined by the length and size of the substituted bag and the size of the term in which the substitution is performed; and free variables are preserved. Indeed, writing \(|q|_{x}\) for the number of occurrences of \(x\) in \(q\), we have:
**Lemma 2.2**.: _If \(q^{\prime}\in q[\bar{n}/x]\) then \(|q|_{x}=|\bar{n}|\) and \(\|q^{\prime}\|=\|q\|+\|\bar{n}\|-|\bar{n}|\). If moreover \(y\neq x\) then \(|q^{\prime}|_{y}=|q|_{y}+|\bar{n}|_{y}\) (in particular, \(y\in\mathcal{V}(q^{\prime})\) iff \(y\in\mathcal{V}(q)\cup\mathcal{V}(\bar{n})\))._
Proof.: We prove the first result by induction on \(q\) -- the other results follow a similarly straightforward pattern.
If \(q=x\) then \(\bar{n}=[n]\) and \(q^{\prime}=n\) and we conclude, observing that \(\|\bar{n}\|=\|n\|=\|q^{\prime}\|\), \(\|x\|=1=|\bar{n}|\).
If \(q=y\neq x\) then \(\bar{n}=[\,]\) and \(q^{\prime}=q\) and we conclude, observing that \(\|\bar{n}\|=|\bar{n}|=0\).
If \(q=\lambda\vec{y}.a\) then \(q^{\prime}=\lambda\vec{y}.a^{\prime}\) with \(a^{\prime}\in a[\bar{n}/x]\) and, inductively, \(\|a^{\prime}\|=\|a\|+\|\bar{n}\|-|\bar{n}|\). Hence \(\|a^{\prime}\|+1=\|a\|+1+\|\bar{n}\|-|\bar{n}|\) and we conclude.
If \(q=e\,\vec{m}\) then \(q^{\prime}=e^{\prime}\,\vec{m}^{\prime}\) and \(\bar{n}=\bar{n}_{1}*\bar{n}_{2}\), with \(e^{\prime}\in e[\bar{n}_{1}/x]\), \(\vec{m}^{\prime}\in\vec{m}[\bar{n}_{2}/x]\) and, inductively, \(\|e^{\prime}\|=\|e\|+\|\bar{n}_{1}\|-|\bar{n}_{1}|\) and \(\|\vec{m}^{\prime}\|=\|\vec{m}\|+\|\bar{n}_{2}\|-|\bar{n}_{2}|\). Hence
\[\|e^{\prime}\|+\|\vec{m}^{\prime}\|+1=\|e\|+\|\vec{m}\|+1+\|\bar{n}_{1}\|+\| \bar{n}_{2}\|-|\bar{n}_{1}|-|\bar{n}_{2}|\]
and we conclude.
If \(q=[m_{1},\ldots,m_{k}]\) then \(q^{\prime}=[m_{1}^{\prime},\ldots,m_{k}^{\prime}]\) and \(\bar{n}=\bar{n}_{1}*\cdots*\bar{n}_{k}\), with \(m_{i}^{\prime}\in m_{i}[\bar{n}_{i}/x]\) and, inductively, \(\|m_{i}^{\prime}\|=\|m_{i}\|+\|\bar{n}_{i}\|-|\bar{n}_{i}|\) for \(1\leq i\leq k\). We conclude by summing the identities.
The case of streams is similar to that of bags.
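As a quick check, on the example given after definition 2.1, with \(q=x\,([\lambda\vec{z}.(x\,\iota)]::\iota)\) and \(\bar{n}=[m_{1},m_{2}]\): we have \(\|q\|=5\), \(|\bar{n}|=2=|q|_{x}\), and each summand \(q^{\prime}\) of \(q[\bar{n}/x]\) satisfies \(\|q^{\prime}\|=3+\|m_{1}\|+\|m_{2}\|=\|q\|+\|\bar{n}\|-|\bar{n}|\), as prescribed by lemma 2.2.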
Resource substitution is extended to sums by linearity:
\[\bigg{(}\sum_{i=1}^{k}q_{i}\bigg{)}\bigg{[}\sum_{j=1}^{l}\bar{m}_{j}/x\bigg{]} \coloneqq\sum_{i=1}^{k}\sum_{j=1}^{l}q_{i}[\bar{m}_{j}/x]\;.\]
One can easily check that, with that extension, all the identities defining resource substitution in definition 2.1 also hold if we replace terms with sums.
The usual result on the commutation of substitution holds:
**Lemma 2.3**.: _We have \(q[\bar{m}/x][\bar{n}/y]=\sum_{\bar{n}\lhd\bar{n}_{1}*\bar{n}_{2}}q[\bar{n}_{1}/y ][\bar{m}[\bar{n}_{2}/y]/x]\) whenever \(x\not\in\mathcal{V}(\bar{n})\cup\{y\}\)._
Proof.: The proof is straightforward by induction on \(q\), using the associativity of sums over partitionings.
Shifting and erasing sequence variables. If \(q\) is a resource expression and \(\vec{x}\in\mathcal{V}_{\mathrm{s}}\), we write \(q[\vec{x}\,\mathord{\uparrow}]\) for the term obtained by replacing each \(\vec{x}[i]\) occurring free in \(q\) with \(\vec{x}[i+1]\). Similarly, if \(\vec{x}[0]\) does not occur in \(q\), we write \(q[\vec{x}\,\mathord{\downarrow}]\) for the expression obtained by replacing each \(\vec{x}[i+1]\) in \(q\) with \(\vec{x}[i]\), so that \(q=q[\vec{x}\,\mathord{\downarrow}][\vec{x}\,\mathord{\uparrow}]\) in this case -- and \(r=r[\vec{x}\,\mathord{\uparrow}][\vec{x}\,\mathord{\downarrow}]\) for any expression \(r\).
Given a variable \(x\) and a value term \(m=\lambda\vec{x}.a\), with \(\vec{x}\) chosen so that \(x\not\in\vec{x}\), we define \(\lambda x.m:=\lambda\vec{x}.a[\vec{x}\,\mathord{\uparrow}]\{\vec{x}[0]/x\}\). Thinking of \(\lambda\vec{x}.a\) as an infinite sequence of abstractions \(\lambda\vec{x}[0].\lambda\vec{x}[1]\dots\) over value variables of \(a\), \(\lambda x.m\) intuitively corresponds to adding a single ordinary abstraction at the start of the sequence.
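For instance, \(\lambda x.(\lambda\vec{x}.(x\,\iota))=\lambda\vec{x}.(\vec{x}[0]\,\iota)\); and, for a variable \(y\) occurring neither in \(\vec{x}\) nor in the term, \(\lambda y.(\lambda\vec{x}.(\vec{x}[0]\,\iota))=\lambda\vec{x}.(\vec{x}[1]\,\iota)\): abstracting a variable that does not occur simply shifts the occurrences of the already abstracted sequence variable.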
We define the **erasure** of the sequence variable \(\vec{x}\) in an expression \(q\) by:
\[q\,\mathord{\downarrow}\,\vec{x}\coloneqq\begin{cases}q&\text{if }\vec{x}\cap \mathcal{V}(q)=\emptyset\\ 0&\text{otherwise}\end{cases}\;.\]
In other words, \(q\,\mathord{\downarrow}\,\vec{x}=0\) if some \(\vec{x}[i]\) occurs free in \(q\) and \(q\,\mathord{\downarrow}\,\vec{x}=q\) otherwise.
**Lemma 2.4**.: _We have \(q[\vec{x}\,\mathord{\uparrow}]\,\mathord{\downarrow}\,\vec{x}=q\,\mathord{ \downarrow}\,\vec{x}\). Moreover, if \(\vec{x}[0]\not\in\mathcal{V}(q)\) then \(q\,\mathord{\downarrow}\,\vec{x}=q[\vec{x}\,\mathord{\downarrow}]\,\mathord{ \downarrow}\,\vec{x}\)._
Proof.: Direct from the definitions, since \(\vec{x}[i+1]\in\mathcal{V}(q[\vec{x}\,\mathord{\uparrow}])\) iff \(\vec{x}[i]\in\mathcal{V}(q)\), and \(\vec{x}[i]\in\mathcal{V}(q[\vec{x}\,\mathord{\downarrow}])\) iff \(\vec{x}[i+1]\in\mathcal{V}(q)\).
Both shifts and erasure are linearly extended to resource sums:
\[(\sum_{i=1}^{k}q_{i})[\vec{x}\,\mathord{\uparrow}] \coloneqq\sum_{i=1}^{k}q_{i}[\vec{x}\,\mathord{\uparrow}]\] \[(\sum_{i=1}^{k}q_{i})[\vec{x}\,\mathord{\downarrow}] \coloneqq\sum_{i=1}^{k}q_{i}[\vec{x}\,\mathord{\downarrow}]\] \[(\sum_{i=1}^{k}q_{i})\,\mathord{\downarrow}\,\vec{x} \coloneqq\sum_{i=1}^{k}q_{i}\,\mathord{\downarrow}\,\vec{x}\]
and these operations commute with substitution in the following sense:
**Lemma 2.5**.: _Assume \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(\bar{M})\). Then \(Q[\bar{M}/\vec{x}[i]][\vec{x}\,\mathord{\uparrow}]=Q[\vec{x}\,\mathord{ \uparrow}][\bar{M}/\vec{x}[i+1]]\), \(Q[\bar{M}/\vec{x}[i+1]][\vec{x}\,\mathord{\downarrow}]=Q[\vec{x}\,\mathord{ \downarrow}][\bar{M}/\vec{x}[i]]\) (assuming \(\vec{x}[0]\not\in\mathcal{V}(Q)\) in that case). And if moreover \(x\not\in\vec{x}\), then \(Q[\bar{M}/x][\vec{x}\,\mathord{\uparrow}]=Q[\vec{x}\,\mathord{\uparrow}][\bar{ M}/x]\), \(Q[\bar{M}/x][\vec{x}\,\mathord{\downarrow}]=Q[\vec{x}\,\mathord{\downarrow}][ \bar{M}/x]\) (assuming \(\vec{x}[0]\not\in\mathcal{V}(Q)\) in that case) and \(Q[\bar{M}/x]\,\mathord{\downarrow}\,\vec{x}=(Q\,\mathord{\downarrow}\,\vec{x})[ \bar{M}/x]\)._
Proof.: In case \(Q=q\in\Delta_{\mathrm{e}}\), each result follows by a straightforward induction on \(q\). We deduce the general result by linearity.
Resource reduction. Now, we can define small-step resource reduction by:

\[(\lambda\vec{x}.a)\,(\bar{n}::\vec{p})\mapsto_{\mathrm{r}_{0}}\left(\lambda\vec{x}.a[\bar{n}/\vec{x}[0]][\vec{x}\,\mathord{\downarrow}]\right)\vec{p}\qquad\text{(choosing $\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(\bar{n})$)}\] \[(\lambda\vec{x}.a)\,\iota\mapsto_{\mathrm{r}_{0}}a\,\mathord{\downarrow}\,\vec{x}\;.\]
Note that, using the previously introduced notation for the abstraction of a single variable, the first reduction step might be rephrased as:
\[\left(\lambda x.m\right)\bar{n}::\vec{p}\mapsto_{\mathrm{r}_{0}}m[\bar{n}/x]\, \vec{p}\]
thanks to lemma 2.5.
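For instance, assuming \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(m)\) for some value term \(m\), two base steps give
\[(\lambda\vec{x}.(\vec{x}[0]\,\iota))\,([m]::\iota)\mapsto_{\mathrm{r}_{0}}(\lambda\vec{x}.(m\,\iota))\,\iota\mapsto_{\mathrm{r}_{0}}m\,\iota\]
since \((\vec{x}[0]\,\iota)[[m]/\vec{x}[0]]=m\,\iota\) and \(\vec{x}\cap\mathcal{V}(m\,\iota)=\emptyset\).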
We extend these base reduction steps by applying them under any context, and then apply them in parallel within sums of expressions. Formally:
**Definition 2.6**.: _(Small-step) resource reduction_ _is the relation from resource expressions to sums of resource expressions defined by the following rules:_
\[\overline{(\lambda\vec{x}.a)\,(\bar{n}::\vec{p})\mapsto_{\mathrm{r}}\left(\lambda\vec{x}.a[\bar{n}/\vec{x}[0]][\vec{x}\,\mathord{\downarrow}]\right)\vec{p}}^{\ (\mathrm{r}_{\beta})}\qquad\overline{(\lambda\vec{x}.a)\,\iota\mapsto_{\mathrm{r}}a\,\mathord{\downarrow}\,\vec{x}}^{\ (\mathrm{r}_{\iota})}\]

\[\frac{a\mapsto_{\mathrm{r}}A^{\prime}}{\lambda\vec{x}.a\mapsto_{\mathrm{r}}\lambda\vec{x}.A^{\prime}}\ (\mathrm{r}_{\lambda})\qquad\frac{m\mapsto_{\mathrm{r}}M^{\prime}}{m\,\vec{n}\mapsto_{\mathrm{r}}M^{\prime}\,\vec{n}}\ (\mathrm{r}_{\partial l})\qquad\frac{\vec{n}\mapsto_{\mathrm{r}}\vec{N}^{\prime}}{e\,\vec{n}\mapsto_{\mathrm{r}}e\,\vec{N}^{\prime}}\ (\mathrm{r}_{\partial r})\]

\[\frac{m\mapsto_{\mathrm{r}}M^{\prime}}{[m]*\bar{n}\mapsto_{\mathrm{r}}[M^{\prime}]*\bar{n}}\ (\mathrm{r}_{[\,]})\qquad\frac{\bar{m}\mapsto_{\mathrm{r}}\bar{M}^{\prime}}{\bar{m}::\vec{n}\mapsto_{\mathrm{r}}\bar{M}^{\prime}::\vec{n}}\ (\mathrm{r}_{::l})\qquad\frac{\vec{n}\mapsto_{\mathrm{r}}\vec{N}^{\prime}}{\bar{m}::\vec{n}\mapsto_{\mathrm{r}}\bar{m}::\vec{N}^{\prime}}\ (\mathrm{r}_{::r})\]

_and then extended to a relation on term sums by setting \(U\to_{\mathrm{r}}U^{\prime}\) iff \(U=\sum_{i=1}^{k}u_{i}\) and \(U^{\prime}=\sum_{i=1}^{k}U^{\prime}_{i}\), with \(u_{i}\mapsto_{\mathrm{r}}^{?}U^{\prime}_{i}\) for \(1\leq i\leq k\), writing \(\mapsto_{\mathrm{r}}^{?}\) for the reflexive closure of \(\mapsto_{\mathrm{r}}\)._

**Lemma 2.7**.: _The relation \(\to_{\mathrm{r}}\) is reflexive and:_

1. _if_ \(U_{1}\to_{\mathrm{r}}U^{\prime}_{1}\) _and_ \(U_{2}\to_{\mathrm{r}}U^{\prime}_{2}\) _then_ \(U_{1}+U_{2}\to_{\mathrm{r}}U^{\prime}_{1}+U^{\prime}_{2}\)_;_
2. _if_ \(A\to_{\mathrm{r}}A^{\prime}\) _then_ \(\lambda\vec{x}.A\to_{\mathrm{r}}\lambda\vec{x}.A^{\prime}\)_;_
3. _if_ \(M\to_{\mathrm{r}}M^{\prime}\) _then_ \(M\,\vec{N}\to_{\mathrm{r}}M^{\prime}\,\vec{N}\)_, and if_ \(\vec{N}\to_{\mathrm{r}}\vec{N}^{\prime}\) _then_ \(E\,\vec{N}\to_{\mathrm{r}}E\,\vec{N}^{\prime}\)_;_
4. _if_ \(M\to_{\mathrm{r}}M^{\prime}\) _then_ \([M]\to_{\mathrm{r}}[M^{\prime}]\)_;_
5. _if_ \(\bar{M}\to_{\mathrm{r}}\bar{M}^{\prime}\) _then_ \(\bar{M}*\bar{N}\to_{\mathrm{r}}\bar{M}^{\prime}*\bar{N}\) _and_ \(\bar{M}::\vec{N}\to_{\mathrm{r}}\bar{M}^{\prime}::\vec{N}\)_._
_Moreover, \((\lambda\vec{x}.A)\left(\bar{N}::\vec{P}\right)\to_{\mathrm{r}}\left(\lambda\vec{x}.A[\bar{N}/\vec{x}[0]][\vec{x}\,\downarrow]\right)\vec{P}\) and \((\lambda\vec{x}.A)\,\iota\to_{\mathrm{r}}A\,\mathord{\downarrow}\,\vec{x}\)._
Proof.: Each statement is a direct consequence of the definitions.
We then obtain that \(\to_{\mathrm{r}}\) is compatible with substitution in the following sense:
**Lemma 2.8**.: _If \(U\to_{\mathrm{r}}U^{\prime}\) then for any bag sum \(\bar{N}\) we have \(U[\bar{N}/x]\to_{\mathrm{r}}U^{\prime}[\bar{N}/x]\). And if \(\bar{M}\to_{\mathrm{r}}\bar{M}^{\prime}\) then for any resource sum \(Q\) we have \(Q[\bar{M}/x]\to_{\mathrm{r}}Q[\bar{M}^{\prime}/x]\)._
Proof.: We establish the first statement in the case \(U=u\mapsto_{\mathrm{r}}U^{\prime}\), by induction on that reduction. For the two base cases, we use lemmas 2.3 and 2.5. The other cases follow directly from the induction hypothesis, and the extension to \(\to_{\mathrm{r}}\) is straightforward.
We establish the second statement in the case \(Q=q\in\Delta_{\mathrm{e}}\) and \(\bar{M}=\bar{m}\mapsto_{\mathrm{r}}\bar{M}^{\prime}\), by induction on \(q\). Note that we must have \(\bar{m}=[m]*\bar{n}\) and \(\bar{M}^{\prime}=[M^{\prime}]*\bar{n}\) with \(m\mapsto_{\mathrm{r}}M^{\prime}\): then all cases are straightforward, noting that any sum \(\sum_{\bar{m}\lhd\bar{m}_{1}*\bar{m}_{2}}f(\bar{m}_{1},\bar{m}_{2})\) can be written as \(\sum_{\bar{n}\lhd\bar{m}_{1}*\bar{m}_{2}}\big{(}f([m]*\bar{m}_{1},\bar{m}_{2})+f(\bar{m}_{1},[m]*\bar{m}_{2})\big{)}\). The general result follows by linearity.
Moreover, resource reduction preserves free variables and is compatible with shifts and erasure:
**Lemma 2.9**.: _If \(u\mapsto_{\mathrm{r}}U^{\prime}\) and \(u^{\prime}\in U^{\prime}\) then \(|u|_{x}=|u^{\prime}|_{x}\). In particular, \(\mathcal{V}(u)=\mathcal{V}(u^{\prime})\)._
Proof.: Straightforward by induction on the reduction \(u\mapsto_{\mathrm{r}}U^{\prime}\), using lemma 2.2 in the case of rule (r\({}_{\beta}\)).
**Lemma 2.10**.: _If \(U\to_{\mathrm{r}}U^{\prime}\) then \(U\,\mathord{\downarrow}\,\vec{x}\to_{\mathrm{r}}U^{\prime}\,\mathord{\downarrow}\,\vec{x}\) and \(U[\vec{x}\,\uparrow]\to_{\mathrm{r}}U^{\prime}[\vec{x}\,\uparrow]\). If moreover \(\vec{x}[0]\not\in\mathcal{V}(U)\) then \(U[\vec{x}\,\downarrow]\to_{\mathrm{r}}U^{\prime}[\vec{x}\,\downarrow]\)._
Proof.: Again, the result is proved first for a reduction \(u\mapsto_{\mathrm{r}}U^{\prime}\), then generalized by linearity. Each step is straightforward.
**Theorem 2.11** (Confluence of \(\to_{\mathrm{r}}\)).: _Resource reduction \(\to_{\mathrm{r}}\) has the diamond property: if \(U\to_{\mathrm{r}}U_{1}\) and \(U\to_{\mathrm{r}}U_{2}\) then there exists \(U^{\prime}\) such that \(U_{1}\to_{\mathrm{r}}U^{\prime}\) and \(U_{2}\to_{\mathrm{r}}U^{\prime}\)._
Proof.: We first establish by induction on resource terms that: if \(u\mapsto_{\mathrm{r}}U_{1}\) and \(u\mapsto_{\mathrm{r}}U_{2}\) then there exists \(U^{\prime}\) such that \(U_{1}\to_{\mathrm{r}}U^{\prime}\) and \(U_{2}\to_{\mathrm{r}}U^{\prime}\).
The crucial case is that of head reducible base terms. Assume, e.g., \(U_{1}\) is obtained by (r\({}_{\beta}\)): then \(u=(\lambda\vec{x}.a)\left(\bar{n}::\vec{p}\right)\), and \(U_{1}=\left(\lambda\vec{x}.a[\bar{n}/\vec{x}[0]][\vec{x}\,\downarrow]\right) \vec{p}\).
If \(U_{2}\) is obtained by (r\({}_{\beta}\)) too, then \(U_{1}=U_{2}\) and we conclude by the reflexivity of \(\to_{\mathrm{r}}\), setting \(U^{\prime}\coloneqq U_{1}\).
If \(U_{2}\) is obtained by (r\({}_{\iota}\)) then \(\bar{n}::\vec{p}=\iota\) and: either \(\vec{x}[0]\in\mathcal{V}(a)\) and \(U_{1}=U_{2}=0\), and we conclude again by reflexivity; or \(\vec{x}[0]\not\in\mathcal{V}(a)\) and \(U_{1}=(\lambda\vec{x}.a[\vec{x}\,\downarrow])\,\iota\mapsto_{\mathrm{r}}a[\vec{x}\,\downarrow]\,\mathord{\downarrow}\,\vec{x}=U_{2}\) by lemma 2.4.
If \(U_{2}\) is obtained by \((\mathrm{r}_{\partial l})\) then \(U_{2}=(\lambda\vec{x}.A^{\prime})\,(\bar{n}::\vec{p})\) with \(a\mapsto_{\mathrm{r}}A^{\prime}\), and we can set \(U^{\prime}\coloneqq\left(\lambda\vec{x}.A^{\prime}[\bar{n}/\vec{x}[0]][\vec{x}\,\downarrow]\right)\vec{p}\), to obtain \(U_{1}\to_{\mathrm{r}}U^{\prime}\) by lemmas 2.7, 2.8 and 2.10, and \(U_{2}\to_{\mathrm{r}}U^{\prime}\) by lemma 2.7.

If \(U_{2}\) is obtained by \((\mathrm{r}_{\partial r})\) then either \(U_{2}=(\lambda\vec{x}.a)\,(\bar{N}^{\prime}::\vec{p})\) with \(\bar{n}\mapsto_{\mathrm{r}}\bar{N}^{\prime}\), and we can set \(U^{\prime}\coloneqq\left(\lambda\vec{x}.a[\bar{N}^{\prime}/\vec{x}[0]][\vec{x}\,\downarrow]\right)\vec{p}\), to obtain \(U_{1}\to_{\mathrm{r}}U^{\prime}\) by lemmas 2.7, 2.8 and 2.10, and \(U_{2}\to_{\mathrm{r}}U^{\prime}\) by lemma 2.7; or \(U_{2}=(\lambda\vec{x}.a)\,(\bar{n}::\vec{P}^{\prime})\) with \(\vec{p}\mapsto_{\mathrm{r}}\vec{P}^{\prime}\), and we can set \(U^{\prime}\coloneqq\left(\lambda\vec{x}.a[\bar{n}/\vec{x}[0]][\vec{x}\,\downarrow]\right)\vec{P}^{\prime}\), to obtain both \(U_{1}\to_{\mathrm{r}}U^{\prime}\) and \(U_{2}\to_{\mathrm{r}}U^{\prime}\) by lemma 2.7.
By symmetry, we have treated all the cases where at least one of \(U_{1}\) or \(U_{2}\) is obtained by \((\mathrm{r}_{\beta})\). Now assume, e.g., \(U_{1}\) is obtained by \((\mathrm{r}_{\iota})\): then \(u=(\lambda\vec{x}.a)\,\iota\), and \(U_{1}=a\,\mathord{\downarrow}\,\vec{x}\). Note that \((\mathrm{r}_{\partial r})\) cannot be applied in this case. If \(U_{2}\) is also obtained by \((\mathrm{r}_{\iota})\), we have \(U_{1}=U_{2}\) and we conclude by reflexivity. This leaves only the case of \(U_{2}\) being obtained by \((\mathrm{r}_{\partial l})\): \(U_{2}=(\lambda\vec{x}.A^{\prime})\,\iota\), with \(a\mapsto_{\mathrm{r}}A^{\prime}\). Then we set \(U^{\prime}\coloneqq A^{\prime}\,\mathord{\downarrow}\,\vec{x}\) and obtain \(U_{1}\to_{\mathrm{r}}U^{\prime}\) by lemma 2.10 and \(U_{2}\to_{\mathrm{r}}U^{\prime}\) by lemma 2.7.
We are only left with contextuality rules, all falling in two cases: if both rules reduce the same subterm (_i.e._ the rules are the same and, in the case of \((\mathrm{r}_{[\,]})\), the reduced subterm is the same), we conclude by the induction hypothesis, together with lemma 2.7; otherwise, if the rules are distinct (_i.e._ \((\mathrm{r}_{\partial l})\) _vs_ \((\mathrm{r}_{\partial r})\) for base terms, or \((\mathrm{r}_{::l})\) _vs_ \((\mathrm{r}_{::r})\) for stream terms) or are two instances of \((\mathrm{r}_{[\,]})\) on distinct subterms (_i.e._ \(u=[m_{1}]\,{*}\,[m_{2}]\,{*}\,\bar{n}\), \(U_{1}=[M_{1}^{\prime}]\,{*}\,[m_{2}]\,{*}\,\bar{n}\), and \(U_{2}=[m_{1}]\,{*}\,[M_{2}^{\prime}]\,{*}\,\bar{n}\), with each \(m_{i}\mapsto_{\mathrm{r}}M_{i}^{\prime}\)), then we apply lemma 2.7 directly.
Now we extend the result to the reduction of term sums: assume \(U\to_{\!\!{\tau}}U_{1}\) and \(U\to_{\!\!{\tau}}U_{2}\). We can write
\[U =U_{0}+\sum_{i=1}^{k_{1}}v_{i}^{1}+\sum_{i=1}^{k_{2}}v_{i}^{2}+ \sum_{i=1}^{l}w_{i}\] \[U_{1} =U_{0}+\sum_{i=1}^{k_{1}}{V_{i}^{\prime}}^{1}+\sum_{i=1}^{k_{2}} v_{i}^{2}+\sum_{i=1}^{l}W_{i}^{1}\] \[U_{2} =U_{0}+\sum_{i=1}^{k_{1}}{v_{i}}^{1}+\sum_{i=1}^{k_{2}}{V_{i}^{ \prime}}^{2}+\sum_{i=1}^{l}W_{i}^{2}\]
so that for each \(i\), \(v_{i}^{1}\mapsto_{\!\!{\tau}}{V_{i}^{\prime}}^{1}\), \(v_{i}^{2}\mapsto_{\!\!{\tau}}{V_{i}^{\prime}}^{2}\), \(w_{i}\mapsto_{\!\!{\tau}}W_{i}^{1}\), and \(w_{i}\mapsto_{\!\!{\tau}}W_{i}^{2}\). Then we can set
\[U^{\prime}\coloneqq U_{0}+\sum_{i=1}^{k_{1}}{V_{i}^{\prime}}^{1}+\sum_{i=1}^{k _{2}}{V_{i}^{\prime}}^{2}+\sum_{i=1}^{l}W_{i}^{\prime}\]
where each \(W_{i}^{\prime}\) is obtained by the previous reasoning, from the reductions \(w_{i}\mapsto_{\mathrm{r}}W_{i}^{1}\) and \(w_{i}\mapsto_{\mathrm{r}}W_{i}^{2}\).
### Big-step dynamics
We now present a big-step variant of resource reduction whose main quality is that it enjoys strong normalization, from which we will deduce the weak
normalization of small-step reduction. In passing, we also introduce a big-step variant of substitution, where a stream is substituted for a sequence variable. This will play an important role in the remainder of the paper: as we will see in section 4, it is used crucially in the simulation of \(\beta\)-reduction steps through Taylor expansion.
**Definition 2.12**.: _We define the **resource substitution**\(q[\vec{n}/\vec{x}]\) of a stream \(\vec{n}=\langle\bar{n}_{i}\rangle_{i\in\mathbb{N}}\) for a sequence variable \(\vec{x}\) in a resource expression \(q\) by induction on \(q\) as follows:_
\[y[\vec{n}/\vec{x}] \coloneqq\begin{cases}n&\text{if }y=\vec{x}[i]\text{, }\bar{n}_{i}=[n]\text{ and }\bar{n}_{j}=[\,]\text{ for }j\in\mathbb{N}\setminus\{i\}\\ y&\text{if }y\not\in\vec{x}\text{ and }\vec{n}=\iota\\ 0&\text{otherwise}\end{cases}\] \[(\lambda\vec{y}.a)[\vec{n}/\vec{x}] \coloneqq\lambda\vec{y}.a[\vec{n}/\vec{x}]\] \[(e\,\vec{m})[\vec{n}/\vec{x}] \coloneqq\sum_{\vec{n}\prec\vec{n}_{1}*\vec{n}_{2}}e[\vec{n}_{1}/\vec{x}]\,\vec{m}[\vec{n}_{2}/\vec{x}]\] \[[m_{1},\dots,m_{k}][\vec{n}/\vec{x}] \coloneqq\sum_{\vec{n}\prec\vec{n}_{1}*\cdots*\vec{n}_{k}}[m_{1}[\vec{n}_{1}/\vec{x}],\dots,m_{k}[\vec{n}_{k}/\vec{x}]]\] \[\iota[\vec{n}/\vec{x}] \coloneqq\begin{cases}\iota&\text{if }\vec{n}=\iota\\ 0&\text{otherwise}\end{cases}\] \[(\bar{m}::\vec{p})[\vec{n}/\vec{x}] \coloneqq\sum_{\vec{n}\prec\vec{n}_{1}*\vec{n}_{2}}\bar{m}[\vec{n}_{1}/\vec{x}]::\vec{p}[\vec{n}_{2}/\vec{x}]\qquad\text{if }\bar{m}::\vec{p}\neq\iota\]
_where, in the abstraction case, \(\vec{y}\) is chosen so that \(\vec{x}\neq\vec{y}\) and \(\vec{y}\cap\mathcal{V}(\vec{n})=\emptyset\)._
As for small-step resource substitution, the condition in the last case of the definition can be dropped. Moreover, if \(\vec{m}=\bar{m}_{1}::\dots::\bar{m}_{k}::\iota\), then
\[\vec{m}[\vec{n}/\vec{x}]=\sum_{\vec{n}\prec\vec{n}_{1}\ast\cdots\ast\vec{n}_{ k}}\bar{m}_{1}[\vec{n}_{1}/\vec{x}]::\dots::\bar{m}_{k}[\vec{n}_{k}/\vec{x}]::\iota\]
and, equivalently,
\[\langle\bar{m}_{i}\rangle_{i\in\mathbb{N}}[\vec{n}/\vec{x}]=\sum_{p:\vec{n} \prec\mathbb{N}}\langle\bar{m}_{i}[\vec{n}\restriction_{p}i/\vec{x}]\rangle_{i \in\mathbb{N}}\;.\]
Also, one can check that:
**Lemma 2.13**.: _We have:_
\[q[\iota/\vec{x}]=q\,\mathord{\downarrow}\,\vec{x}\]
_and, assuming \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(\bar{m})\), we have:_
\[q[\bar{m}::\vec{n}/\vec{x}]=q[\bar{m}/\vec{x}[0]][\vec{x}\,\downarrow][\vec{n} /\vec{x}]\;.\]
_More generally, if \(\vec{n}=\bar{n}_{0}::\dots::\bar{n}_{k-1}::\iota\), and \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(\vec{n})\), we have:_
\[q[\vec{n}/\vec{x}] =q[\bar{n}_{0}/\vec{x}[0]][\vec{x}\,\downarrow]\cdots[\bar{n}_{k-1}/\vec{x}[0]][\vec{x}\,\downarrow]\,\mathord{\downarrow}\,\vec{x}\] \[=q[\bar{n}_{0}/\vec{x}[0]]\cdots[\bar{n}_{k-1}/\vec{x}[k-1]]\,\mathord{\downarrow}\,\vec{x}\;.\]
Proof.: The first two statements are established directly from the definitions, by induction on \(q\). If \(\vec{n}=\bar{n}_{0}::\dots::\bar{n}_{k-1}::\iota\) and \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(\vec{n})\), we can iterate \(k\) times the second statement then use the first one to obtain
\[q[\vec{n}/\vec{x}]=q[\bar{n}_{0}/\vec{x}[0]][\vec{x}\,\downarrow]\cdots[\bar{n}_{k-1}/\vec{x}[0]][\vec{x}\,\downarrow]\,\mathord{\downarrow}\,\vec{x}\]
and then we obtain the final identity by iterating lemmas 2.4 and 2.5.
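For instance, with \(q\coloneqq\vec{x}[0]\,([\lambda\vec{z}.(\vec{x}[1]\,\iota)]::\iota)\) and \(\vec{n}\coloneqq[m_{0}]::[m_{1}]::\iota\), for arbitrary value terms \(m_{0},m_{1}\) with \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(m_{0})\cup\mathcal{V}_{\mathrm{s}}(m_{1})\), the lemma gives \(q[\vec{n}/\vec{x}]=q[[m_{0}]/\vec{x}[0]][[m_{1}]/\vec{x}[1]]\,\mathord{\downarrow}\,\vec{x}=m_{0}\,([\lambda\vec{z}.(m_{1}\,\iota)]::\iota)\), while \(q[\vec{n}^{\prime}/\vec{x}]=0\) for any stream term \(\vec{n}^{\prime}\) whose bags do not match the numbers of occurrences of the variables \(\vec{x}[i]\) in \(q\).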
Substitution of streams enjoys the same regularity as substitution of bags, w.r.t. the size of expressions. Namely, setting \(|\vec{n}|\coloneqq\sum_{i\in\mathbb{N}}|\bar{n}_{i}|\) when \(\vec{n}=\langle\bar{n}_{i}\rangle_{i\in\mathbb{N}}\), we obtain:
**Lemma 2.14**.: _If \(q^{\prime}\in q[\vec{n}/\vec{x}]\) with \(\vec{n}=\langle\bar{n}_{i}\rangle_{i\in\mathbb{N}}\), then \(|q|_{\vec{x}[i]}=|\bar{n}_{i}|\) for \(i\in\mathbb{N}\), and \(\|q^{\prime}\|=\|q\|+\|\vec{n}\|-|\vec{n}|\). If moreover \(y\not\in\vec{x}\) then \(|q^{\prime}|_{y}=|q|_{y}+|\vec{n}|_{y}\) (in particular, \(y\in\mathcal{V}(q^{\prime})\) iff \(y\in\mathcal{V}(q)\cup\mathcal{V}(\vec{n})\))._
Proof.: Using lemma 2.13, it is sufficient to iterate lemma 2.2.
**Definition 2.15**.: _Big-step resource reduction is the relation from resource terms to resource sums defined by the following rules:_
\[\overline{(\lambda\vec{x}.a)\,\vec{n}\mapsto_{\mathrm{R}}a[\vec{n}/\vec{x}]}^{ \mathrm{\ (R}_{\beta})}\]
\[\frac{a\mapsto_{\mathrm{R}}A^{\prime}}{\lambda\vec{x}.a\mapsto_{\mathrm{R}}\lambda\vec{x}.A^{\prime}}\ (\mathrm{R}_{\lambda})\qquad\frac{m\mapsto_{\mathrm{R}}M^{\prime}}{m\,\vec{n}\mapsto_{\mathrm{R}}M^{\prime}\,\vec{n}}\ (\mathrm{R}_{\alpha l})\qquad\frac{\vec{n}\mapsto_{\mathrm{R}}\vec{N}^{\prime}}{e\,\vec{n}\mapsto_{\mathrm{R}}e\,\vec{N}^{\prime}}\ (\mathrm{R}_{\alpha r})\]

\[\frac{m\mapsto_{\mathrm{R}}M^{\prime}}{[m]*\bar{n}\mapsto_{\mathrm{R}}[M^{\prime}]*\bar{n}}\ (\mathrm{R}_{[\,]})\qquad\frac{\bar{m}\mapsto_{\mathrm{R}}\bar{M}^{\prime}}{\bar{m}::\vec{n}\mapsto_{\mathrm{R}}\bar{M}^{\prime}::\vec{n}}\ (\mathrm{R}_{::l})\qquad\frac{\vec{n}\mapsto_{\mathrm{R}}\vec{N}^{\prime}}{\bar{m}::\vec{n}\mapsto_{\mathrm{R}}\bar{m}::\vec{N}^{\prime}}\ (\mathrm{R}_{::r})\]
_and then extended to a relation on term sums by setting \(U\to_{\mathrm{R}}U^{\prime}\) iff \(U=\sum_{i=0}^{k}u_{i}\) and \(U^{\prime}=\sum_{i=0}^{k}U^{\prime}_{i}\), with \(u_{0}\mapsto_{\mathrm{R}}U^{\prime}_{0}\) and \(u_{i}\mapsto_{\mathrm{R}}^{?}U^{\prime}_{i}\) for \(1\leq i\leq k\)._
Observe that, here, we require at least one element in a sum to be reduced. This, together with the fact that the reduction of a redex yields a sum of smaller terms ensures that big-step resource reduction is strongly normalizing:
**Lemma 2.16**.: _If \(u\mapsto_{\mathrm{R}}U^{\prime}\) and \(u^{\prime}\in U^{\prime}\) then \(\|u\|>\|u^{\prime}\|\)._
Proof.: The proof is direct by induction on the reduction, using lemma 2.14 in the redex case.
By a standard argument, we obtain:
**Corollary 2.17** (Strong normalization for \(\to_{\mathrm{R}}\)).: _There is no infinite sequence \(\langle U_{i}\rangle_{i\in\mathbb{N}}\) with \(U_{i}\to_{\mathrm{R}}U_{i+1}\) for \(i\in\mathbb{N}\)._
Proof.: To each term sum, we associate the multiset of the sizes of its elements: under big step reduction, this measure is strictly decreasing for the multiset order.
Moreover, big-step reduction is a particular case of iterated small-step reduction:
**Lemma 2.18**.: _If \(Q\to_{\mathrm{R}}Q^{\prime}\) then \(Q\to_{\mathrm{r}}^{*}Q^{\prime}\)._
Proof.: It is sufficient to consider the case of \(Q=q\mapsto_{\mathrm{R}}Q^{\prime}\). The proof is then by induction on this reduction: the case of (R\({}_{\beta}\)) (choosing \(\vec{x}\cap\mathcal{V}(\vec{n})=\emptyset\)) follows from lemma 2.13. All the other cases follow straightforwardly from the induction hypothesis, using lemma 2.7.
**Theorem 2.19** (Weak normalization for \(\to_{\mathrm{r}}\)).: _For every resource sum \(Q\) there exists a sum \(Q^{\prime}\) of \(\mapsto_{\mathrm{r}}\)-irreducible expressions such that \(Q\to_{\mathrm{r}}^{*}Q^{\prime}\), and this sum is uniquely defined._
Proof.: We obtain \(Q^{\prime}\) by the previous lemma, observing that an expression is \(\mapsto_{\mathrm{r}}\)-reducible iff it is \(\mapsto_{\mathrm{R}}\)-reducible. Unicity follows from the confluence of \(\to_{\mathrm{r}}\), together with the fact that if \(Q^{\prime}\) is a sum of \(\mapsto_{\mathrm{r}}\)-irreducible expressions and \(Q^{\prime}\to_{\mathrm{r}}Q^{\prime\prime}\) then \(Q^{\prime}=Q^{\prime\prime}\).
Given any expression sum \(Q\), we denote by \(\mathcal{N}(Q)\) the unique sum of irreducible expressions such that \(Q\to_{\mathrm{r}}^{*}\mathcal{N}(Q)\) and call \(\mathcal{N}(Q)\) the **normal form** of \(Q\). Note that we also have \(Q\to_{\mathrm{R}}^{*}\mathcal{N}(Q)\).
**Theorem 2.20** (Confluence of \(\to_{\mathrm{R}}^{*}\)).: _The reduction \(\to_{\mathrm{R}}\) is confluent._
Proof.: It is sufficient to observe that if \(Q\to_{\mathrm{R}}Q^{\prime}\) then \(\mathcal{N}(Q)=\mathcal{N}(Q^{\prime})\), by the previous two results.
## 3 Resource vectors
We fix a complete commutative semiring \(\mathbb{K}\) [14], _i.e._ a set \(\mathbb{K}\) equipped with: a sum operator \(\sum:\mathbb{K}^{I}\to\mathbb{K}\) on countable families that we denote by \(\sum_{i\in I}\alpha_{i}\coloneqq\sum\langle\alpha_{i}\rangle_{i\in I}\), satisfying \(\sum_{i\in\{j\}}\alpha_{i}=\alpha_{j}\), and \(\sum_{i\in I}\alpha_{i}=\sum_{j\in J}\sum_{i\in I_{j}}\alpha_{i}\) for any partitioning of \(I\) into \(\{I_{j}\mid j\in J\}\); and a monoid structure, denoted multiplicatively, which distributes over \(\sum\). The two basic instances are the semiring of booleans \(\mathbb{B}\), and the extended real half line \(\overline{\mathbb{R}^{+}}\).
A direct consequence of the axioms is that finite sums are associative and commutative. We write \(0\in\mathbb{K}\) for the empty sum and denote binary sums as usual. Equipped with finite sums and products, \(\mathbb{K}\) is then a commutative semiring in the usual sense. Moreover, \(\mathbb{K}\) is automatically **positive**: if \(\alpha_{1}+\alpha_{2}=0\) then \(\alpha_{1}=\alpha_{2}=0\).
We write \(\mathbb{K}^{X}\) for the semimodule of (possibly infinite) linear combinations of elements of \(X\) with coefficients in \(\mathbb{K}\): equivalently, these are the \(X\)-indexed families of elements of \(\mathbb{K}\). We will call **vector** any such \(A\in\mathbb{K}^{X}\), and we write \(A_{@a}\in\mathbb{K}\) for the value of \(A\) at index \(a\). We write \(\operatorname{supp}(A)=\{a\in X\mid A_{@a}\neq 0\}\). We will often abuse notation and denote \(\operatorname{supp}(A)\) simply by \(A\), so that we may write \(a\in A\) for \(A_{@a}\neq 0\). If \(\langle A_{i}\rangle_{i\in I}\in(\mathbb{K}^{X})^{I}\) and \(\langle\alpha_{i}\rangle_{i\in I}\in\mathbb{K}^{I}\), we write \(\sum_{i\in I}\alpha_{i}A_{i}\in\mathbb{K}^{X}\) for the vector \(A\) defined in the obvious way: \(A_{@a}=\sum_{i\in I}\alpha_{i}(A_{i})_{@a}\in\mathbb{K}\) for each \(a\in X\).
Using the additive monoid structure of \(\mathbb{K}\), each finite sum \(A\in\Sigma X\) (and in particular each element of \(X\)) induces a vector with finite support \(\widetilde{A}\in\mathbb{K}^{X}\). In particular, for any vector \(A\), we have \(A=\sum_{a\in A}A_{@a}\widetilde{a}\). Note that this embedding of \(\Sigma X\) in \(\mathbb{K}^{X}\) need not be injective: for instance if \(\mathbb{K}=\mathbb{B}\), \(\widetilde{A}\) is nothing but the support of \(A\). We will however abuse notation and generally write \(A\) instead of \(\widetilde{A}\): the implicit application of the embedding should be clear from the context. E.g., if we write a vector \(\sum_{i\in I}\alpha_{i}A_{i}\in\mathbb{K}^{X}\) where \(\alpha_{i}\in\mathbb{K}\) and \(A_{i}\in\Sigma X\) for every \(i\in I\), this should be read as \(\sum_{i\in I}\alpha_{i}\widetilde{A_{i}}\).
### Vectors of resource terms
We call **value vector** any vector \(M\in\mathbb{K}^{\Delta_{\mathrm{v}}}\) such that \(\mathcal{V}_{\mathrm{s}}(M)\coloneqq\bigcup_{m\in M}\mathcal{V}_{\mathrm{s}}(m)\) is finite -- note that \(\mathcal{V}(M)\coloneqq\bigcup_{m\in M}\mathcal{V}(m)\) might very well be infinite, but the hypothesis on \(\mathcal{V}_{\mathrm{s}}(M)\) is sufficient to ensure that we can always find variables that are not free in \(M\). We use the same typographic conventions for value vectors as for value sums and write \(\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\) for the set of value vectors (thus denoted by \(M,N,P\)). We similarly define **base vectors** (denoted by \(A,B,C\in\mathbb{K}\langle\Delta_{\mathrm{b}}\rangle\)), **bag vectors** (denoted by \(\bar{M},\bar{N},\bar{P}\in\mathbb{K}\langle\Delta_{\mathrm{!}}\rangle\)), and **stream vectors** (denoted by \(\vec{M},\vec{N},\vec{P}\in\mathbb{K}\langle\Delta_{\mathrm{s}}\rangle\)). Note that we do not impose any other bound on the shape of terms in the definition of vectors: e.g., the length of bags in \(\bar{M}\in\mathbb{K}\langle\Delta_{\mathrm{!}}\rangle\) is not bounded in general.
As for sums, we may call **head vector** (denoted \(E,F,G\in\mathbb{K}\langle\Delta_{\mathrm{h}}\rangle\)) any of a value vector or of a value variable. And we call **term vector** (resp. **expression vector**) any value vector (resp. head vector), base vector, bag vector, or stream vector, which we then denote by a letter among \(U,V,W\) (resp. \(Q,R,S\)). And we write \(\mathbb{K}\langle\Delta_{\mathrm{t}}\rangle\) (resp. \(\mathbb{K}\langle\Delta_{\mathrm{e}}\rangle\)) for any of \(\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\), \(\mathbb{K}\langle\Delta_{\mathrm{b}}\rangle\), \(\mathbb{K}\langle\Delta_{\mathrm{!}}\rangle\), or \(\mathbb{K}\langle\Delta_{\mathrm{s}}\rangle\) (resp. \(\mathbb{K}\langle\Delta_{\mathrm{h}}\rangle\), \(\mathbb{K}\langle\Delta_{\mathrm{b}}\rangle\), \(\mathbb{K}\langle\Delta_{\mathrm{!}}\rangle\), or \(\mathbb{K}\langle\Delta_{\mathrm{s}}\rangle\)).
We extend term constructors to term vectors by linearity as we did for sums, and we extend resource substitution to vectors by bilinearity, by setting:
\[Q[\bar{N}/x]\coloneqq\sum_{q\in\Delta_{\mathrm{e}}}\sum_{\bar{n}\in\Delta_{\mathrm{!}}}Q_{@q}\bar{N}_{@\bar{n}}\,q[\bar{n}/x]\;.\]
Again, one can easily check that, with that extension, all the identities defining resource substitution in definition 2.1 also hold if we replace terms with vectors. Similarly, for big-step resource substitution, we set:
\[Q[\vec{N}/\vec{x}]\coloneqq\sum_{q\in\Delta_{\mathrm{e}}}\sum_{\vec{n}\in\Delta_{\mathrm{s}}}Q_{@q}\vec{N}_{@\vec{n}}\,q[\vec{n}/\vec{x}]\;.\]
Having extended term constructors to resource vectors, it is also straightforward to define the ordinary (capture avoiding) substitution \(q\{F/x\}\in\mathbb{K}\langle\Delta_{\mathrm{e}}\rangle\) of a head vector \(F\in\mathbb{K}\langle\Delta_{\mathrm{h}}\rangle\) for a value variable \(x\) in any resource expression \(q\in\Delta_{\mathrm{e}}\):
**Definition 3.1**.: _Let \(q\in\Delta_{\mathrm{e}}\) and \(F\in\mathbb{K}\langle\Delta_{\mathrm{h}}\rangle\). We define \(q\{F/x\}\in\mathbb{K}\langle\Delta_{\mathrm{e}}\rangle\) by induction on \(q\):_
\[y\{F/x\} \coloneqq\begin{cases}F&\text{if }x=y\\ y&\text{otherwise}\end{cases}\] \[(\lambda\vec{y}.a)\{F/x\} \coloneqq\lambda\vec{y}.a\{F/x\}\] \[(e\,\vec{m})\{F/x\} \coloneqq e\{F/x\}\,\vec{m}\{F/x\}\] \[[m_{1},\ldots,m_{k}]\{F/x\} \coloneqq[m_{1}\{F/x\},\ldots,m_{k}\{F/x\}]\] \[\iota\{F/x\} \coloneqq\iota\] \[(\bar{m}::\vec{p})\{F/x\} \coloneqq\bar{m}\{F/x\}::\vec{p}\{F/x\}\qquad\text{if }\bar{m}::\vec{p}\neq\iota\]
_where, in the abstraction case, \(\vec{y}\) is chosen so that \(x\not\in\vec{y}\) and \(\mathcal{V}(F)\cap\vec{y}=\emptyset\).1_
Footnote 1: The assumption that \(\mathcal{V}_{\mathrm{s}}(F)\) is finite ensures that this requirement can be fulfilled.
Note that
\[q\{0/x\}=q[[\,]/x]=\begin{cases}0&\text{if }x\in\mathcal{V}(q)\\ q&\text{otherwise}\end{cases}\;.\]
Then it is easy to check that, if \(|q|_{x}=1\), then \(q\{F/x\}=q[[F]/x]\), which is linear in \(F\). In general, however, \(q\{F/x\}\) is not linear in \(F\): \(q\{\sum_{i\in I}\alpha_{i}F_{i}/x\}\neq\sum_{i\in I}\alpha_{i}q\{F_{i}/x\}\), in general -- for instance, with \(x\not\in\mathcal{V}(q)\) and \(I=\emptyset\), the identity would entail \(q=q\{0/x\}=0\).
On the other hand, we can extend this definition to substitution inside a resource vector, by linearity: we set \(Q\{F/x\}\coloneqq\sum_{q\in\Delta_{\mathrm{e}}}Q_{@q}\,q\{F/x\}\in\mathbb{K}\langle\Delta_{\mathrm{e}}\rangle\) for any \(Q\in\mathbb{K}\langle\Delta_{\mathrm{e}}\rangle\). Then one can check that, with that extension, all the identities in the previous definition also hold if we replace terms with vectors.
Following a similar pattern, one defines the simultaneous substitution \(Q\{\vec{F}/\vec{x}\}\) of the tuple \(\vec{F}=\langle F_{0},\ldots,F_{k-1}\rangle\) (resp. the sequence \(\vec{F}=\langle F_{i}\rangle_{i\in\mathbb{N}}\), assuming \(\mathcal{V}_{\mathrm{s}}(\vec{F})\) is finite) of head vectors (note that, despite the similar notations, these are _not_ stream vectors) for the tuple \(\vec{x}=\langle x_{0},\ldots,x_{k-1}\rangle\) (resp. the sequence \(\vec{x}=\langle x_{i}\rangle_{i\in\mathbb{N}}\)) of variables in \(Q\). In the finite case, assuming \(\vec{x}\cap\mathcal{V}(\vec{F})=\emptyset\), we have
\[Q\{\vec{F}/\vec{x}\}=Q\{F_{0}/x_{0}\}\cdots\{F_{k-1}/x_{k-1}\}\;.\]
And in the infinite case, we intuitively have
\[Q\{\vec{F}/\vec{x}\}=Q\{F_{0}/x_{0}\}\{F_{1}/x_{1}\}\cdots\;.\]
Formally, if we also assume \(\mathcal{V}(Q)\cap\vec{x}\) is finite (which is automatic when \(Q\in\Sigma\Delta_{\mathrm{e}}\)), then we have
\[Q\{\vec{F}/\vec{x}\}=Q\{F_{0}/x_{0}\}\cdots\{F_{k-1}/x_{k-1}\}\]
for any \(k\) such that \(x_{i}\in\mathcal{V}(Q)\) implies \(i<k\). We will most often consider the case where \(\vec{x}\) is in fact a sequence variable and \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(\vec{F})\) -- identifying \(\vec{x}\) with \(\langle\vec{x}[i]\rangle_{i\in\mathbb{N}}\) as we announced. The latter condition is not restrictive: \(Q\) being a resource vector, the additional condition on \(\mathcal{V}_{\mathrm{s}}(\vec{F})\) ensures that we can always find \(\vec{y}\not\in\mathcal{V}_{\mathrm{s}}(Q)\cup\mathcal{V}_{\mathrm{s}}(\vec{F})\) and write \(Q\{\vec{F}/\vec{x}\}=Q\{\vec{y}/\vec{x}\}\{\vec{F}/\vec{y}\}\).
### Promotion and the Taylor expansion formula for substitution
From now on, we assume \(\mathbb{K}\) has fractions, meaning that each \(n\in\mathbb{N}\setminus\{0\}\) has an inverse in \(\mathbb{K}\): note that this assumption holds in both \(\mathbb{B}\) and \(\overline{\mathbb{R}^{+}}\).
Given a value vector \(M\), and \(\bar{m}=[m_{1},\dots,m_{k}]\in\Delta_{!}\), we write \(M^{@\bar{m}}\coloneqq\prod_{i=1}^{k}M_{@m_{i}}\). Then we define the bag vector \(M^{k}\coloneqq[M,\dots,M]\) (with \(k\) copies of \(M\)), and obtain:

\[M^{k}=\sum_{(m_{1},\dots,m_{k})\in\Delta_{\mathrm{v}}^{k}}M^{@[m_{1},\dots,m_{k}]}\,[m_{1},\dots,m_{k}]\;.\]
Then we set
\[M^{!}=\sum_{k\in\mathbb{N}}\frac{1}{k!}M^{k}\in\mathbb{K}\langle\Delta_{!}\rangle\]
which we call the **promotion** of \(M\).
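For instance, if \(M=\alpha m+\beta n\) with \(m\neq n\in\Delta_{\mathrm{v}}\) and \(\alpha,\beta\in\mathbb{K}\), then \(M^{2}=\alpha^{2}[m,m]+2\alpha\beta[m,n]+\beta^{2}[n,n]\), so that \(M^{!}_{@[m,n]}=\alpha\beta=M^{@[m,n]}\) while \(M^{!}_{@[m,m]}=\frac{\alpha^{2}}{2}=\frac{M^{@[m,m]}}{2}\): the coefficient of a bag in \(M^{!}\) is the corresponding monomial divided by the number of permutations fixing an enumeration of the bag.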
It will be useful to compute the coefficient of a bag in the promotion of a value vector:
**Lemma 3.2**.: _If \(\bar{m}\in M^{!}\) and \(|\bar{m}|=k\) then \(M^{!}_{@\bar{m}}=\frac{M^{@\bar{m}}}{\mathsf{d}(\bar{m})}\)._
Proof.: The proof is exactly the same as for the ordinary Taylor expansion [1, Lemma 4.4].
The promotion operator is compatible with substitution:
**Lemma 3.3**.: _For any \(M\) and \(N\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\), \(N^{!}\{M/x\}=N\{M/x\}^{!}\). And for any \(\vec{M}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle^{\mathbb{N}}\) such that \(\mathcal{V}_{\mathrm{s}}(\vec{M})\) is finite, we have \(N^{!}\{\vec{M}/\vec{x}\}=N\{\vec{M}/\vec{x}\}^{!}\)._
Proof.: By definition
\[N^{!}\{M/x\} =\sum_{k\in\mathbb{N}}\frac{1}{k!}N^{k}\{M/x\}\] \[=\sum_{k\in\mathbb{N}}\sum_{\langle n_{1},\dots,n_{k}\rangle\in\Delta_{\mathrm{v}}^{k}}\frac{N^{@[n_{1},\dots,n_{k}]}}{k!}[n_{1},\dots,n_{k}]\{M/x\}\] \[=\sum_{k\in\mathbb{N}}\sum_{\langle n_{1},\dots,n_{k}\rangle\in\Delta_{\mathrm{v}}^{k}}\frac{N^{@[n_{1},\dots,n_{k}]}}{k!}[n_{1}\{M/x\},\dots,n_{k}\{M/x\}]\] \[=\sum_{k\in\mathbb{N}}\frac{1}{k!}\bigg{[}\sum_{n_{1}\in\Delta_{\mathrm{v}}}N_{@n_{1}}n_{1}\{M/x\},\dots,\sum_{n_{k}\in\Delta_{\mathrm{v}}}N_{@n_{k}}n_{k}\{M/x\}\bigg{]}\] \[=\sum_{k\in\mathbb{N}}\frac{1}{k!}(N\{M/x\})^{k}\] \[=N\{M/x\}^{!}\;.\]
The proof of the second statement is the same.
**Lemma 3.4** (Taylor expansion of substitution).: _For any \(Q\in\mathbb{K}\langle\Delta_{\mathrm{e}}\rangle\) and \(M\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\), \(Q\{M/x\}=Q[M^{!}/x]\)._
Proof.: The proof is essentially the same as in the ordinary resource calculus [20, Lemma 4.8]. By linearity, it is sufficient to consider the case of \(Q=q\in\Delta_{\mathrm{e}}\). We first show that the identities defining \(q\mapsto q\{M/x\}\) (as in definition 3.1) are also valid for \(q\mapsto q[M^{\dagger}/x]\) (here the definition of sums over partitionings of bags, as used in definition 2.1, is crucial, in conjunction with fact 1.1 and lemma 3.2). The result follows by induction on \(q\).
Similarly, we associate a **promotion stream vector** \(\vec{M}^{!}\in\mathbb{K}\langle\Delta_{\mathrm{s}}\rangle\) with each sequence \(\vec{M}=\langle M_{i}\rangle_{i\in\mathbb{N}}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle^{\mathbb{N}}\) of value vectors such that \(\mathcal{V}_{\mathrm{s}}(\vec{M})\) is finite (again note that, despite the similar notation, the latter sequence is _not_ a stream vector). First observe that, by construction, \((M^{!}_{i})_{@[\,]}=1\) for each \(i\in\mathbb{N}\). Then we define \(\vec{M}^{!}\) by its coefficients: for every \(\vec{m}=\langle\bar{m}_{i}\rangle_{i\in\mathbb{N}}\in\Delta_{\mathrm{s}}\), we can set \((\vec{M}^{!})_{@\vec{m}}\coloneqq\prod_{i\in\mathbb{N}}(M^{!}_{i})_{@\bar{m}_{i}}\), where only finitely many \((M^{!}_{i})_{@\bar{m}_{i}}\) are distinct from \(1\). In particular, we have \((\vec{M}^{!})_{@\iota}=1\) and \(((M::\vec{N})^{!})_{@\bar{m}::\vec{n}}=(M^{!})_{@\bar{m}}\times(\vec{N}^{!})_{@\vec{n}}\), so that \((M::\vec{N})^{!}=M^{!}::\vec{N}^{!}\).
Then we obtain the analogue of lemma 3.3 for the promotion of sequences of value vectors:
**Lemma 3.5**.: _For any \(M\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\) and \(\vec{N}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle^{\mathbb{N}}\) such that \(\mathcal{V}_{\mathrm{s}}(\vec{N})\) is finite, we have \(\vec{N}^{!}\{M/x\}=\vec{N}\{M/x\}^{!}\). And for any \(\vec{M}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle^{\mathbb{N}}\) with \(\mathcal{V}_{\mathrm{s}}(\vec{M})\) finite, we have \(\vec{N}^{!}\{\vec{M}/\vec{x}\}=\vec{N}\{\vec{M}/\vec{x}\}^{!}\)._
Proof.: Fix \(\vec{p}\in\Delta_{\mathrm{s}}\): we can write \(\vec{p}=\bar{p}_{1}::\cdots::\bar{p}_{k}::\iota\). Then write \(\vec{N}=N_{1}::\cdots::N_{k}::\vec{P}\). We obtain
\[(\vec{N}^{!}\{M/x\})_{@\vec{p}} =(N^{!}_{1}\{M/x\}::\cdots::N^{!}_{k}\{M/x\}::\vec{P}^{!}\{M/x\})_{@\vec{p}}\] \[=(N^{!}_{1}\{M/x\})_{@\bar{p}_{1}}\times\cdots\times(N^{!}_{k}\{M/x\})_{@\bar{p}_{k}}\times(\vec{P}^{!}\{M/x\})_{@\iota}\] \[=(N_{1}\{M/x\}^{!})_{@\bar{p}_{1}}\times\cdots\times(N_{k}\{M/x\}^{!})_{@\bar{p}_{k}}\times(\vec{P}\{M/x\}^{!})_{@\iota}\] \[=(N_{1}\{M/x\}^{!}::\cdots::N_{k}\{M/x\}^{!}::\vec{P}\{M/x\}^{!})_{@\vec{p}}\] \[=(\vec{N}\{M/x\}^{!})_{@\vec{p}}\]

where each identity \((N^{!}_{i}\{M/x\})_{@\bar{p}_{i}}=(N_{i}\{M/x\}^{!})_{@\bar{p}_{i}}\) follows from lemma 3.3, and \((\vec{P}^{!}\{M/x\})_{@\iota}=(\vec{P}^{!})_{@\iota}=1=(\vec{P}\{M/x\}^{!})_{@\iota}\) by definition.
The proof of the second statement follows the same pattern.
We obtain the analogue of lemma 3.4 as well:
**Lemma 3.6**.: _For any \(Q\in\mathbb{K}\langle\Delta_{\mathrm{e}}\rangle\), and any \(\vec{M}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle^{\mathbb{N}}\) such that \(\mathcal{V}_{\mathrm{s}}(\vec{M})\) is finite, we have \(Q\{\vec{M}/\vec{x}\}=Q[\vec{M}^{!}/\vec{x}]\)._
Proof.: Write \(\vec{M}=\langle M_{i}\rangle_{i\in\mathbb{N}}\). By linearity, it is sufficient to consider the case of \(Q=q\in\Delta_{\mathrm{e}}\). It is possible to follow the same pattern as in the proof of
lemma 3.4, but we can also deduce the present result from lemma 3.4 itself. Indeed, \(\mathcal{V}(q)\) is finite, hence we can choose \(k\) such that \(i\geq k\) implies \(\vec{x}[i]\not\in\mathcal{V}(q)\), to obtain
\[q\{\vec{M}/\vec{x}\}=q\{M_{0}/\vec{x}[0]\}\cdots\{M_{k-1}/\vec{x}[k-1]\}\]
as discussed above, and also
\[q[\vec{M}^{!}/\vec{x}]=q[M_{0}^{!}/\vec{x}[0]]\cdots[M_{k-1}^{!}/\vec{x}[k-1]]\]
by lemma 2.13 -- assuming, w.l.o.g., that \(\vec{x}\cap\bigcup_{i<k}\mathcal{V}(M_{i})=\emptyset\). It is then sufficient to iterate lemma 3.4.
Alternatively to the above definition, we could introduce \(\vec{M}^{!}\) similarly to the promotion of value vectors, as follows. First, we call **degree stream** any sequence of integers \(\vec{k}=\langle k_{i}\rangle_{i\in\mathbb{N}}\in\mathbb{N}^{\mathbb{N}}\) with finite support: \(\{i\in\mathbb{N}\mid k_{i}\neq 0\}\) is finite. Note that degree streams are nothing but finite multisets of non-negative integers, but we use different notations to fit the way we use them. We will write \(\mathbb{N}_{\mathrm{s}}\) for the set of degree streams. We denote \(\iota\coloneqq\langle 0\rangle_{i\in\mathbb{N}}\in\mathbb{N}_{\mathrm{s}}\), and if \(k\in\mathbb{N}\) and \(\vec{l}\in\mathbb{N}_{\mathrm{s}}\), we write \(k::\vec{l}\in\mathbb{N}_{\mathrm{s}}\) for the stream obtained by pushing \(k\) at the head of \(\vec{l}\). Given \(\vec{M}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle^{\mathbb{N}}\) and \(\vec{k}\in\mathbb{N}_{\mathrm{s}}\), we define \(\vec{M}^{\vec{k}}\) inductively as follows: \(\vec{M}^{\iota}\coloneqq\iota\) and \((M::\vec{N})^{k::\vec{l}}\coloneqq M^{k}::\vec{N}^{\vec{l}}\) when \(k::\vec{l}\neq\iota\). We moreover define \(\vec{k}!\in\mathbb{N}\) by setting \(\vec{k}!=\prod_{i\in\mathbb{N}}k_{i}!\), which satisfies: \(\iota!=1\) and \((k::\vec{l})!=k!\times\vec{l}!\).
We obtain:
**Lemma 3.7**.: _For any sequence \(\vec{M}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle^{\mathbb{N}}\) of value vectors such that \(\mathcal{V}_{\mathrm{s}}(\vec{M})\) is finite, we have \(\vec{M}^{!}=\sum_{\vec{k}\in\mathbb{N}_{\mathrm{s}}}\frac{1}{\vec{k}!}\vec{M}^{\vec{k}}\)._
Proof.: It is sufficient to check that:
\[\bigg{(}\sum_{\vec{k}\in\mathbb{N}_{\mathrm{s}}}\frac{1}{\vec{k}!}\vec{M}^{\vec{k}}\bigg{)}_{@\iota}=1\]
and that
\[\bigg{(}\sum_{\vec{k}\in\mathbb{N}_{\mathrm{s}}}\frac{1}{\vec{k}!}(M::\vec{N})^{\vec{k}}\bigg{)}_{@\bar{m}::\vec{n}}=(M^{!})_{@\bar{m}}\times\bigg{(}\sum_{\vec{k}\in\mathbb{N}_{\mathrm{s}}}\frac{1}{\vec{k}!}\vec{N}^{\vec{k}}\bigg{)}_{@\vec{n}}\]
which follows directly from the definitions.
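For instance, for \(\vec{k}=2::1::\iota\) and \(\vec{M}=\langle M_{i}\rangle_{i\in\mathbb{N}}\), we get \(\vec{k}!=2\) and \(\vec{M}^{\vec{k}}=M_{0}^{2}::[M_{1}]::\iota\), so the summand \(\frac{1}{2}\,M_{0}^{2}::[M_{1}]::\iota\) occurs in the expression of \(\vec{M}^{!}\) given by lemma 3.7.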
### Reduction of resource vectors
We define the **resource reduction on term vectors** by setting \(U\rightsquigarrow U^{\prime}\) if \(U=\sum_{i\in I}\alpha_{i}u_{i}\) and \(U^{\prime}=\sum_{i\in I}\alpha_{i}U^{\prime}_{i}\) with \(u_{i}\in\Delta_{\mathrm{t}}\), \(U^{\prime}_{i}\in\Sigma\Delta_{\mathrm{t}}\) and \(u_{i}\to_{\mathrm{r}}^{*}U^{\prime}_{i}\) for \(i\in I\) -- note in particular that we do not impose the terms \(u_{i}\) to be pairwise distinct, and that the number of \(\to_{\mathrm{r}}\) reductions is not bounded. We also define the **normal form of a term vector**, point-wise:
\[\mathcal{N}(U)\coloneqq\sum_{u\in U}U_{@u}\mathcal{N}(u)\;.\]
**Lemma 3.8**.: _For any term vector \(U\), we have \(U\rightsquigarrow\mathcal{N}(U)\). If moreover \(U=\sum_{i\in I}\alpha_{i}U_{i}\) with \(U_{i}\in\mathbb{K}\langle\Delta_{\mathfrak{t}}\rangle\) for \(i\in I\), then \(\mathcal{N}(U)=\sum_{i\in I}\alpha_{i}\mathcal{N}(U_{i})\). Finally, if \(U\rightsquigarrow U^{\prime}\) then \(\mathcal{N}(U)=\mathcal{N}(U^{\prime})\)._
Proof.: The first statement follows from the definitions, observing that \(u\to_{\mathrm{r}}^{*}\mathcal{N}(u)\). The second one follows from the linearity of \(\mathcal{N}(-)\). And if \(U\rightsquigarrow U^{\prime}\) then we can write \(U=\sum_{i\in I}\alpha_{i}u_{i}\) and \(U^{\prime}=\sum_{i\in I}\alpha_{i}U^{\prime}_{i}\) with \(u_{i}\to_{\mathrm{r}}^{*}U^{\prime}_{i}\) for \(i\in I\): then, by confluence of \(\to_{\mathrm{r}}^{*}\), \(\mathcal{N}(u_{i})=\mathcal{N}(U^{\prime}_{i})\) for each \(i\in I\), and we conclude by the previous point.
The confluence of \(\rightsquigarrow\) follows directly.
**Lemma 3.9**.: _The relation \(\rightsquigarrow\) is reflexive and:_
1. _if_ \(U_{i}\rightsquigarrow U^{\prime}_{i}\) _with_ \(U_{i},U^{\prime}_{i}\in\mathbb{K}\langle\Delta_{\mathfrak{t}}\rangle\) _for_ \(i\in I\)_, and_ \(\bigcup_{i\in I}\mathcal{V}_{\mathrm{s}}(U_{i})\) _is finite, then_ \(\sum_{i\in I}\alpha_{i}U_{i}\rightsquigarrow\sum_{i\in I}\alpha_{i}U^{\prime}_{i}\)_;_
2. _if_ \(A\rightsquigarrow A^{\prime}\) _then_ \(\lambda\vec{x}.A\rightsquigarrow\lambda\vec{x}.A^{\prime}\)_; and if_ \(M\rightsquigarrow M^{\prime}\) _then_ \(\lambda x.M\rightsquigarrow\lambda x.M^{\prime}\)_;_
3. _if_ \(\vec{N}\rightsquigarrow\vec{N}^{\prime}\) _then_ \(x\,\vec{N}\rightsquigarrow x\,\vec{N}^{\prime}\)_; if moreover_ \(M\rightsquigarrow M^{\prime}\)_, then_ \(M\,\vec{N}\rightsquigarrow M^{\prime}\,\vec{N}^{\prime}\)_;_
4. _if_ \(M\rightsquigarrow M^{\prime}\) _then_ \([M]\rightsquigarrow[M^{\prime}]\)_; and if_ \(\bar{M}\rightsquigarrow\bar{M}^{\prime}\) _and_ \(\bar{N}\rightsquigarrow\bar{N}^{\prime}\) _then_ \(\bar{M}*\bar{N}\rightsquigarrow\bar{M}^{\prime}*\bar{N}^{\prime}\)_;_
5. _if_ \(\bar{M}\rightsquigarrow\bar{M}^{\prime}\) _and_ \(\vec{N}\rightsquigarrow\vec{N}^{\prime}\) _then_ \(\bar{M}::\vec{N}\rightsquigarrow\bar{M}^{\prime}::\vec{N}^{\prime}\)_._
_Moreover, \((\lambda\vec{x}.A)\,(\bar{N}::\vec{P})\rightsquigarrow\left(\lambda\vec{x}.A[ \bar{N}/\vec{x}[0]][\vec{x}\,\downarrow]\right)\vec{P}\) and \((\lambda\vec{x}.A)\,\vec{M}\rightsquigarrow A[\vec{M}/\vec{x}]\)._
Proof.: Each result follows from the definitions, also using lemma 2.7 for items 2 to 5, and lemma 2.18 for the big-step redex case.
The reduction of value vectors is moreover compatible with promotion. We first establish:
**Lemma 3.10**.: _If \(M_{i}\rightsquigarrow M^{\prime}_{i}\) for \(i\in\mathbb{N}\), then for all \(\vec{k}\in\mathbb{N}_{\mathrm{s}}\), \(\langle M_{i}\rangle_{i\in\mathbb{N}}^{\vec{k}}\rightsquigarrow\langle M^{ \prime}_{i}\rangle_{i\in\mathbb{N}}^{\vec{k}}\)._
Proof.: We reason by induction on \(\vec{k}\). If \(\vec{k}=\iota\), the result holds by reflexivity. Otherwise, we write \(\vec{k}=k::\vec{l}\), and \(N_{i}=M_{i+1}\) and \(N^{\prime}_{i}=M^{\prime}_{i+1}\) for \(i\in\mathbb{N}\), so that \(\langle M_{i}\rangle_{i\in\mathbb{N}}^{\vec{k}}=M^{k}_{0}::\langle N_{i}\rangle _{i\in\mathbb{N}}^{\vec{l}}\) and similarly for \(\langle M^{\prime}_{i}\rangle_{i\in\mathbb{N}}^{\vec{k}}\). We have \(M^{k}_{0}\rightsquigarrow M^{\prime}_{0}{}^{k}\) by item 4 of lemma 3.9, and \(\langle N_{i}\rangle_{i\in\mathbb{N}}^{\vec{l}}\rightsquigarrow\langle N^{ \prime}_{i}\rangle_{i\in\mathbb{N}}^{\vec{l}}\) by induction hypothesis. We conclude by items 1 and 5 of lemma 3.9.
We obtain:
**Lemma 3.11**.: _If \(M\rightsquigarrow M^{\prime}\) then \(M^{!}\rightsquigarrow M^{\prime}{}^{!}\). And if \(M_{i}\rightsquigarrow M^{\prime}_{i}\) with \(\bigcup_{i\in\mathbb{N}}\mathcal{V}_{\mathrm{s}}(M_{i})\) finite, then \((\langle M_{i}\rangle_{i\in\mathbb{N}})^{!}\rightsquigarrow(\langle M^{\prime}_{i }\rangle_{i\in\mathbb{N}})^{!}\)._
Proof.: We have \(M^{!}=\sum_{k\in\mathbb{N}}\frac{1}{k!}M^{k}\) and similarly for \(M^{\prime}{}^{!}\), with \(M^{k}\rightsquigarrow M^{\prime}{}^{k}\) for each \(k\) by item 4 of lemma 3.9, and we conclude by item 1 of lemma 3.9 again. The second statement is established similarly, thanks to lemmas 3.7 and 3.10.
Note that we do not establish the transitivity of \(\leadsto\). Consider two reductions \(U=\sum_{i\in I}\alpha_{i}u_{i}\leadsto\sum_{i\in I}\alpha_{i}U^{\prime}_{i}=U^{\prime}\) and \(U^{\prime}=\sum_{j\in J}\beta_{j}u^{\prime}_{j}\leadsto\sum_{j\in J}\beta_{j}U^{\prime\prime}_{j}=U^{\prime\prime}\). Intuitively, to deduce a reduction \(U\leadsto U^{\prime\prime}\) using the transitivity of \(\to^{*}_{\mathrm{r}}\), we would need to "synchronize" the two writings of \(U^{\prime}\) in a way that is compatible with the families of reductions \(u_{i}\to^{*}_{\mathrm{r}}U^{\prime}_{i}\) _and_ \(u^{\prime}_{j}\to^{*}_{\mathrm{r}}U^{\prime\prime}_{j}\): there is no obvious way to perform this synchronization. Fortunately, we do not have to rely on the transitivity of \(\leadsto\): we rather use \(\leadsto^{*}\), or resort to reasoning component-wise and use the transitivity of \(\to^{*}_{\mathrm{r}}\) instead.
## 4 Extensional Taylor Expansion
A first possible definition of extensional Taylor expansion amounts to perform infinite \(\eta\)-expansion on the fly, in order to produce vectors of values:
\[\mathcal{T}(x) =\lambda\vec{y}.x\,\vec{\mathcal{T}}(\vec{y})^{!}\] \[\mathcal{T}(\lambda x.M) =\lambda x.\mathcal{T}(M)\] \[\mathcal{T}(M\,N) =\lambda\vec{y}.\mathcal{T}(M)\,(\mathcal{T}(N)^{!}::\vec{\mathcal{T}}(\vec{y})^{!})\]
where \(\vec{\mathcal{T}}(\vec{y})\) denotes the sequence \(\langle\mathcal{T}(\vec{y}[i])\rangle_{i\in\mathbb{N}}\) of value vectors. Note that, although this definition seems to be circular, it can be done by defining the coefficient of a resource term in the Taylor expansion of an ordinary \(\lambda\)-term, by induction on the resource term.
Nonetheless, this recursive definition yields an infinite value vector even in the variable case. This was expected, as our resource terms have a finite semantics, whereas the identity defines an infinitary behaviour -- this is similar to the copycat strategy in game semantics.
A more annoying issue is the fact that this definition does not preserve normal forms: the application case always introduces redexes. This can be fixed by defining Taylor expansion based on the head structure of terms -- this is similar to the translation of \(\lambda\)-calculus in linear logic proof nets, where the naive definition generates cuts even when starting from \(\beta\)-normal terms.
In the following we thus define Taylor expansion in several steps:
* first the expansion of variables;
* then the straightforward expansion of pure \(\lambda\)-terms;
* then the expansion of terms based on their head structure.
Then we show that the former reduces to the latter, and that both allow us to simulate \(\beta\)-reduction and \(\eta\)-reduction steps as a form of resource reduction on vectors.
### Infinitely \(\eta\)-expanded variables
We define simultaneously the **value expansion** \(x^{\eta}\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\) of a variable \(x\) and the **stream expansion** \(\vec{x}^{\,!}\in\mathbb{K}\langle\Delta_{\mathrm{s}}\rangle\) of a sequence variable \(\vec{x}\) so that:

\[x^{\eta} =\lambda\vec{y}.x\,\vec{y}^{\,!}\quad\text{(choosing $\vec{y}\not\ni x$)}\] \[\text{and}\quad\vec{x}^{\,!} =(\vec{x}^{\,\eta})^{!}\quad\text{where $\vec{x}^{\,\eta}\coloneqq\langle\vec{x}[i]^{\eta}\rangle_{i\in\mathbb{N}}$}\;.\]
To be formal, we can define the coefficients of these vectors by mutual induction on resource values and on resource streams:
\[x^{\eta}_{@u} \coloneqq\begin{cases}\vec{y}^{\,!}_{@\vec{m}}&\text{if $u=\lambda\vec{y}.x\,\vec{m}$ with $\vec{y}\not\ni x$},\\ 0&\text{otherwise}\end{cases}\] \[\vec{x}^{\,!}_{@\vec{m}} \coloneqq\prod_{i\in\mathbb{N}}(\vec{x}[i]^{\eta})^{!}_{@\bar{m}_{i}}\qquad\text{if $\vec{m}=\langle\bar{m}_{i}\rangle_{i\in\mathbb{N}}$}\]

which ensures that the previous two identities hold. We moreover write \(x^{!}\coloneqq(x^{\eta})^{!}\).
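For instance, unfolding these coefficients, both \(\lambda\vec{y}.(x\,\iota)\) and \(\lambda\vec{y}.(x\,([\lambda\vec{z}.(\vec{y}[0]\,\iota)]::\iota))\) belong to the support of \(x^{\eta}\), each with coefficient \(1\): the value expansion of a variable thus has infinite support, containing \(\eta\)-expansions of \(x\) of arbitrary depth.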
The stream expansion of a variable is subject to the following recursive characterization, where \(Q[\vec{x}\!\uparrow]^{k}\) denotes \(Q[\vec{x}\!\uparrow]\cdots[\vec{x}\!\uparrow]\) (with \(k\) applications of \(-[\vec{x}\!\uparrow]\)):
**Lemma 4.1**.: _We have_
\[\vec{x}^{\,!}=\vec{x}[0]^{!}::\vec{x}^{\,!}[\vec{x}\,\uparrow]\qquad\text{and, more generally,}\qquad\vec{x}^{\,!}=\vec{x}[0]^{!}::\cdots::\vec{x}[k-1]^{!}::\vec{x}^{\,!}[\vec{x}\,\uparrow]^{k}\]
for every \(k\in\mathbb{N}\).
**Lemma 4.2**.: _If \(m\in x^{\eta}\) (resp. \(\bar{m}\in x^{\,{}^{\dagger}}\); \(\vec{m}\in\vec{x}^{\,{}^{\dagger}}\)) then \(x^{\eta}_{\bar{\alpha}m}=\frac{1}{\mathsf{m}(m)}\) (resp. \(x^{\,{}^{\dagger}}_{\bar{\alpha}\bar{m}}=\frac{1}{\mathsf{m}(\bar{m})}\); \(\vec{x}^{\,{}^{\dagger}}_{\bar{\alpha}\bar{m}}=\frac{1}{\mathsf{m}(\bar{m})}\))._
Proof.: The proof is straightforward by induction on resource terms, also using lemma 3.2 in the case of a bag.
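For instance (a sketch, under the usual convention that \(\mathsf{m}(m)\) counts the repetitions inside the bags of \(m\), so that \(\mathsf{m}(m)=2\) below): taking \(p\coloneqq\lambda\vec{z}.\vec{y}[0]\,\iota\) and \(m\coloneqq\lambda\vec{y}.x\,([p,p]::\iota)\), lemma 4.2 gives that the coefficient of \(m\) in \(x^{\eta}\) is \(\frac{1}{2}\), reflecting the two indistinguishable ways of forming the bag \([p,p]\).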
The value expansion of a variable is meant to behave like an identity morphism on resource terms: e.g., we will rely on the fact that \(x^{\eta}\{m/x\}\rightsquigarrow m\) and \(m\{x^{\eta}/x\}\rightsquigarrow m\) for any \(m\in\Delta_{\mathrm{v}}\). To establish these properties, we will actually show that, for every \(m\in\Delta_{\mathrm{v}}\), there is exactly one element of \(x^{\eta}\) contributing to each of those reductions:
**Lemma 4.3**.: _For any resource term \(u\), any value variable \(x\), and any sequence variable \(\vec{x}\), the following holds:_
1. _there exists_ \(\mathsf{c}^{-}\langle x,u\rangle\in x^{\,{}^{\dagger}}\) _such that_ \(u[\mathsf{c}^{-}\langle x,u\rangle/x]\to_{\mathrm{r}}^{*}\mathsf{m}(\mathsf{c} ^{-}\langle x,u\rangle)u\)_, and_ \(u[\bar{p}/x]\to_{\mathrm{r}}^{*}0\) _for any other_ \(\bar{p}\in x^{\,{}^{\dagger}}\)_;_
2. _there exists_ \(\mathsf{c}^{-}\langle\vec{x},u\rangle\in\vec{x}^{\,\dagger}\) _such that_ \(u[\mathsf{c}^{-}\langle\vec{x},u\rangle/\vec{x}]\to_{\mathrm{r}}^{*}\mathsf{m}(\mathsf{c}^{-}\langle\vec{x},u\rangle)u\)_, and_ \(u[\vec{p}/\vec{x}]\to_{\mathrm{r}}^{*}0\) _for any other_ \(\vec{p}\in\vec{x}^{\,\dagger}\)_;_
3. _if_ \(u=m\in\Delta_{\mathrm{v}}\) _then there exists_ \(\mathsf{c}^{+}\langle x,m\rangle\in x^{\eta}\) _such that_ \(\mathsf{c}^{+}\langle x,m\rangle\{m/x\}\to_{\mathrm{r}}^{*}\mathsf{m}(\mathsf{ c}^{+}\langle x,m\rangle)m\)_, and_ \(p\{m/x\}\to_{\mathrm{r}}^{*}0\) _for any other_ \(p\in x^{\eta}\)_;_
4. _if_ \(u=\bar{m}\in\Delta_{\mathrm{l}}\) _then there exists_ \(\mathsf{c}^{+}\langle x,\bar{m}\rangle\in x^{\,{}^{\dagger}}\) _such that_ \(\mathsf{c}^{+}\langle x,\bar{m}\rangle[\bar{m}/x]\to_{\mathrm{r}}^{*}\mathsf{m}( \mathsf{c}^{+}\langle x,\bar{m}\rangle)\bar{m}\)_, and_ \(\bar{p}[\bar{m}/x]\to_{\mathrm{r}}^{*}0\) _for any other_ \(\bar{p}\in x^{\,{}^{\dagger}}\)_;_
5. _if_ \(u=\vec{m}\in\Delta_{\mathrm{s}}\) _then:_ * _there exists_ \(\mathsf{c}^{+}\langle\vec{x},\vec{m}\rangle\in\vec{x}^{\,\dagger}\) _such that_ \(\mathsf{c}^{+}\langle\vec{x},\vec{m}\rangle[\vec{m}/\vec{x}]\to_{\mathrm{r}}^{*}\mathsf{m}(\mathsf{c}^{+}\langle\vec{x},\vec{m}\rangle)\vec{m}\)_, and_ \(\vec{p}[\vec{m}/\vec{x}]\to_{\mathrm{r}}^{*}0\) _for any other_ \(\vec{p}\in\vec{x}^{\,\dagger}\)_;_ * _and there exists_ \(\mathsf{c}\langle x,\vec{m}\rangle\in x^{\eta}\) _such that_ \(\mathsf{c}\langle x,\vec{m}\rangle\,\vec{m}\to_{\mathrm{r}}^{*}\mathsf{m}(\mathsf{c}\langle x,\vec{m}\rangle)(x\,\vec{m})\)_, and_ \(p\,\vec{m}\to_{\mathrm{r}}^{*}0\) _for any other_ \(p\in x^{\eta}\)_._
Proof.: We prove all results simultaneously by induction on \(u\).
Note that item 2 is obtained by iterating item 1. Indeed, if \(\vec{p}=\langle\bar{p}_{i}\rangle_{i\in\mathbb{N}}\in\vec{x}^{\,\dagger}\) and \(k\in\mathbb{N}\) is such that \(\vec{x}[i]\not\in\mathcal{V}(u)\) when \(i\geq k\), then \(u[\vec{p}/\vec{x}]\to_{\mathrm{r}}^{*}0\) as soon as \(\bar{p}_{i}\neq[\,]\) for some \(i\geq k\). And, otherwise, \(u[\vec{p}/\vec{x}]=u[\bar{p}_{0}/\vec{x}[0]]\cdots[\bar{p}_{k-1}/\vec{x}[k-1]]\) -- observing that no \(\vec{x}[i]\) is free in \(\bar{p}_{j}\) when \(i\neq j\). We then write
\[u^{\prime}\coloneqq u[\mathsf{c}^{-}\langle\vec{x}[0],u\rangle/\vec{x}[0]] \cdots[\mathsf{c}^{-}\langle\vec{x}[k-1],u\rangle/\vec{x}[k-1]]\]
and
\[\mathsf{c}^{-}\langle\vec{x},u\rangle\coloneqq\mathsf{c}^{-}\langle\vec{x}[0],u\rangle::\cdots::\mathsf{c}^{-}\langle\vec{x}[k-1],u\rangle::\iota\]
so that \(u^{\prime}=u[\mathsf{c}^{-}\langle\vec{x},u\rangle/\vec{x}]\). Applying \(k\) times item 1, together with lemma 2.8, we obtain \(u^{\prime}\to_{\mathrm{r}}^{*}(\prod_{i=0}^{k-1}\mathsf{m}(\mathsf{c}^{-}\langle\vec{x}[i],u\rangle))u\), and \(\prod_{i=0}^{k-1}\mathsf{m}(\mathsf{c}^{-}\langle\vec{x}[i],u\rangle)=\mathsf{m}(\mathsf{c}^{-}\langle\vec{x},u\rangle)\) by construction. Finally, we have \(u[\bar{p}/\vec{x}[i]]\to_{\mathrm{r}}^{*}0\) for any \(\bar{p}\in\vec{x}[i]^{\,\dagger}\) other than \(\mathsf{c}^{-}\langle\vec{x}[i],u\rangle\), for \(0\leq i<k\), by item 1 again; and recall that we have \(u[\bar{p}/\vec{x}[i]]\to_{\mathrm{r}}^{*}0\) for any \(\bar{p}\neq[\,]\) for \(i\geq k\). Hence \(\mathsf{c}^{-}\langle\vec{x},u\rangle\) is the only \(\vec{p}\in\vec{x}^{\,\dagger}\) with \(\mathcal{N}(u[\vec{p}/\vec{x}])\neq 0\).
So in the following we only establish item 1, and possibly one of items 3 to 5, depending on the case.
If \(u=m\in\Delta_{\mathrm{v}}\), we can write \(m=\lambda\vec{y}.b\), assuming w.l.o.g. that \(x\not\in\vec{y}\). Applying the induction hypothesis (item 1) to \(b\) entails \(b[\mathsf{c}^{-}\langle x,b\rangle/x]\to_{\mathrm{r}}^{*}\mathsf{m}(\mathsf{c} ^{-}\langle x,b\rangle)b\) and \(b[\bar{p}/x]\to_{\mathrm{r}}^{*}0\) for any other \(\bar{p}\in x^{\uparrow}\): we set \(\mathsf{c}^{-}\langle x,m\rangle\coloneqq\mathsf{c}^{-}\langle x,b\rangle\), and deduce item 1 for \(m\) by lemma 2.7, observing that \(m[\bar{p}/x]=\lambda\vec{y}.b[\bar{p}/x]\) for any \(\bar{p}\in x^{\uparrow}\). Moreover, we can write \(x^{\eta}=\lambda\vec{y}.x\,\vec{y}^{\uparrow}\): for each \(p\in x^{\eta}\), we have \(p=\lambda\vec{y}.x\,\vec{p}\) with \(\vec{p}\in\vec{y}^{\uparrow}\). The induction hypothesis (item 2) yields \(b[\mathsf{c}^{-}\langle\vec{y},b\rangle/\vec{y}]\to_{\mathrm{r}}^{*}\mathsf{m }(\mathsf{c}^{-}\langle\vec{y},b\rangle)b\), and \(b[\bar{p}/\vec{y}]\to_{\mathrm{r}}^{*}0\) for any other \(\vec{p}\in\vec{y}^{\uparrow}\): we set \(\mathsf{c}^{+}\langle x,m\rangle\coloneqq\lambda\vec{y}.x\,\mathsf{c}^{-} \langle\vec{y},b\rangle\in x^{\eta}\), and deduce item 3 for \(m\) by lemma 2.7, observing that \((\lambda\vec{y}.x\,\vec{p})\{m/x\}=\lambda\vec{y}.m\,\vec{p}\to_{\mathrm{r}} \lambda\vec{y}.b[\bar{p}/\vec{y}]\) for any \(\vec{p}\in\vec{y}^{\uparrow}\).
If \(u=a\in\Delta_{\mathrm{b}}\), we only have to prove item 1. There are three possible cases: either \(a=y\,\vec{n}\) with \(y\neq x\), or \(a=x\,\vec{n}\), or \(a=m\,\vec{n}\). If \(a=y\,\vec{n}\) with \(y\neq x\), we apply the induction hypothesis (item 1) to \(\vec{n}\), set \(\mathsf{c}^{-}\langle x,a\rangle\coloneqq\mathsf{c}^{-}\langle x,\vec{n}\rangle\), and conclude as in the abstraction case.
If \(a=x\,\vec{n}\), then for each \(\bar{p}=[p_{1},\ldots,p_{k}]\in x^{\uparrow}\), we have \(a[\bar{p}/x]=0\) if \(k=0\) and, otherwise, \(a[\bar{p}/x]=\sum_{i=1}^{k}p_{i}\,\vec{n}[\bar{p}_{i}^{\prime}/x]\) where each \(\bar{p}_{i}^{\prime}\) is such that \(\bar{p}=[p_{i}]*\bar{p}_{i}^{\prime}\). The induction hypothesis (items 1 and 5) applied to \(\vec{n}\) yields \(\mathsf{c}^{-}\langle x,\vec{n}\rangle\in x^{\uparrow}\) and \(\mathsf{c}\langle x,\vec{n}\rangle\in x^{\eta}\), and we set \(\mathsf{c}^{-}\langle x,a\rangle\coloneqq[\mathsf{c}\langle x,\vec{n}\rangle]*\mathsf{c}^{-}\langle x,\vec{n}\rangle\in x^{\uparrow}\). If \(\bar{p}\neq\mathsf{c}^{-}\langle x,a\rangle\), then each element in the previous sum normalizes to \(0\). And, if \(\bar{p}=\mathsf{c}^{-}\langle x,a\rangle\), we obtain \(a[\bar{p}/x]\to_{\mathrm{r}}^{*}(l\times\mathsf{m}(\mathsf{c}\langle x,\vec{n}\rangle)\times\mathsf{m}(\mathsf{c}^{-}\langle x,\vec{n}\rangle))a\) where \(l\) is the number of indices \(i\in\{1,\ldots,k\}\) such that \(p_{i}=\mathsf{c}\langle x,\vec{n}\rangle\). Again, we apply fact 1.1 to check that \(\mathsf{m}(\mathsf{c}^{-}\langle x,a\rangle)=l\times\mathsf{m}(\mathsf{c}\langle x,\vec{n}\rangle)\times\mathsf{m}(\mathsf{c}^{-}\langle x,\vec{n}\rangle)\), which yields item 1 for \(a\).
If \(u=\bar{m}\in\Delta_{\mathrm{i}}\) then we can write \(\bar{m}=[m_{1},\ldots,m_{k}]\), and \(\bar{m}[\bar{p}/x]=\sum_{\bar{p}\lhd\bar{p}_{1}*\ldots*\bar{p}_{k}}[m_{1}[\bar{p}_{1}/x],\ldots,m_{k}[\bar{p}_{k}/x]]\). The induction hypothesis (item 1) applied to each \(m_{i}\) yields \(\mathsf{c}^{-}\langle x,m_{i}\rangle\in x^{\uparrow}\) and we set \(\mathsf{c}^{-}\langle x,\bar{m}\rangle\coloneqq\mathsf{c}^{-}\langle x,m_{1}\rangle*\cdots*\mathsf{c}^{-}\langle x,m_{k}\rangle\). If \(\bar{p}\neq\mathsf{c}^{-}\langle x,\bar{m}\rangle\), then each element in the previous sum normalizes to \(0\). And, if \(\bar{p}=\mathsf{c}^{-}\langle x,\bar{m}\rangle\), we obtain \(\bar{m}[\bar{p}/x]\to_{\mathrm{r}}^{*}(l\times\prod_{i=1}^{k}\mathsf{m}(\mathsf{c}^{-}\langle x,m_{i}\rangle))\bar{m}\) where \(l\) is the number of decompositions \(\bar{p}\lhd\bar{p}_{1}*\cdots*\bar{p}_{k}\) such that \(\bar{p}_{i}=\mathsf{c}^{-}\langle x,m_{i}\rangle\) for \(1\leq i\leq k\). Item 1 for \(\bar{m}\) follows, again applying fact 1.1. The induction hypothesis (item 3) applied to each \(m_{i}\) also yields \(\mathsf{c}^{+}\langle x,m_{i}\rangle\in x^{\eta}\) and we set \(\mathsf{c}^{+}\langle x,\bar{m}\rangle\coloneqq[\mathsf{c}^{+}\langle x,m_{1}\rangle,\ldots,\mathsf{c}^{+}\langle x,m_{k}\rangle]\in x^{\uparrow}\). Assume \(\bar{p}=[p_{1},\ldots,p_{l}]\in x^{\uparrow}\). Since \(x\) occurs exactly once in each \(p_{i}\), we have \(\bar{p}[\bar{m}/x]\to_{\mathrm{r}}^{*}0\) when \(k\neq l\). And if \(k=l\), we have
\[\bar{p}[\bar{m}/x]=\sum_{\sigma\in\mathbb{S}_{k}}[p_{1}\{m_{\sigma(1)}/x\},\ldots,p_{k}\{m_{\sigma(k)}/x\}]\;.\]
Hence, if \(\bar{p}\neq\mathsf{c}^{+}\langle x,\bar{m}\rangle\), then each element in the previous sum normalizes to \(0\). And, if \(\bar{p}=\mathsf{c}^{+}\langle x,\bar{m}\rangle\), we obtain \(\bar{p}[\bar{m}/x]\to_{\mathrm{r}}^{*}(l\times\prod_{i=1}^{k}\mathsf{m}(\mathsf{c}^{+}\langle x,m_{i}\rangle))\bar{m}\) where
\(l=\#\{\sigma\in\mathbb{S}_{k}\mid p_{i}=\mathsf{c}^{+}\langle x,m_{\sigma(i)}\rangle \text{ for }1\leq i\leq k\}\). Item 4 for \(\bar{m}\) follows since \(l\times\prod_{i=1}^{k}\mathsf{m}(\mathsf{c}^{+}\langle x,m_{i}\rangle)= \mathsf{m}(\mathsf{c}^{+}\langle x,\bar{m}\rangle)\) by definition.
Finally, if \(u=\vec{m}\in\Delta_{\mathrm{s}}\) then note that the first statement of item 5 entails the second one. Indeed, if \(p\in x^{\eta}\) then we can write \(p=\lambda\vec{y}.x\,\vec{p}\) with \(\vec{p}\in\vec{y}^{\,!}\). Then \(p\,\vec{m}\to_{\mathrm{r}}x\,\vec{p}\,[\!\vec{m}/\vec{y}]\), and we apply the first statement of item 5 to conclude: in particular, we set \(\mathsf{c}\langle x,\vec{m}\rangle\coloneqq\lambda\vec{y}.x\,\mathsf{c}^{+} \langle\vec{y},\vec{m}\rangle\).
In case \(u=\iota\), items 1 and 5 are straightforward, with \(\mathsf{c}^{-}\langle x,\iota\rangle\coloneqq[\,]\), and \(\mathsf{c}^{+}\langle\vec{x},\iota\rangle\coloneqq\iota\).
It remains only to establish item 1 and the first statement of item 5 for \(u=\vec{m}\in\Delta_{\mathrm{s}}\) of the form \(\vec{m}=\bar{m}::\vec{m}^{\prime}\): both follow by applying the induction hypothesis to \(\bar{m}\) and \(\vec{m}^{\prime}\), combining the witnesses as in the previous cases.

**Lemma 4.4**.: _For any resource term \(u\), value term \(m\), bag \(\bar{m}\), stream \(\vec{m}\) and base term \(a\), the following holds:_

1. \(u[x^{\,!}/x]\rightsquigarrow u\)_;_
2. \(u[\vec{x}^{\,!}/\vec{x}]\rightsquigarrow u\)_;_
3. \(x^{\eta}\{m/x\}\rightsquigarrow m\)_;_
4. \(x^{\,!}[\bar{m}/x]\rightsquigarrow\bar{m}\)_;_
5. \(\vec{x}^{\,!}[\vec{m}/\vec{x}]\rightsquigarrow\vec{m}\) _and_ \(x^{\eta}\,\vec{m}\rightsquigarrow x\,\vec{m}\)_;_
6. \((\lambda\vec{x}.a)\,\vec{x}^{\,!}\rightsquigarrow a\)_._

Proof.: Immediate from lemmas 4.2 and 4.3: in each case, exactly one element of the expansion contributes a non-zero term, and its coefficient cancels the corresponding multiplicity factor.

**Lemma 4.5**.: _For any vector \(U\) of resource expressions, the following holds:_

1. \(U[x^{\,!}/x]\rightsquigarrow U\)_;_
2. \(U[\vec{x}^{\,!}/\vec{x}]\rightsquigarrow U\)_;_
3. _if_ \(U=M\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\) _then_ \(x^{\eta}\{M/x\}\rightsquigarrow M\) _and, if moreover_ \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(M)\)_,_ \(\lambda\vec{x}.M\,\vec{x}^{\,!}\rightsquigarrow M\)_;_
4. _if_ \(U=\bar{M}\in\mathbb{K}\langle\Delta_{\mathrm{i}}\rangle\) _then_ \((x^{\eta})^{!}[\bar{M}/x]\rightsquigarrow\bar{M}\)_;_
5. _if_ \(U=\vec{M}\in\mathbb{K}\langle\Delta_{\mathrm{s}}\rangle\) _then_ \(\vec{x}^{\,\mathrm{!}}[\vec{M}/\vec{x}]\rightsquigarrow\vec{M}\) _and_ \(x^{\eta}\,\vec{M}\rightsquigarrow x\,\vec{M}\)_._
6. _if_ \(U=A\in\mathbb{K}\langle\Delta_{\mathrm{b}}\rangle\) _then_ \((\lambda\vec{x}.A)\,\vec{x}^{\,\mathrm{!}}\rightsquigarrow A\)_._
_Finally, for any sequence \(\vec{M}\) of value vectors such that \(\mathcal{V}_{\mathrm{s}}(\vec{M})\) is finite, we have \(\vec{x}^{\,\mathrm{!}}\{\vec{M}/\vec{x}\}\rightsquigarrow\vec{M}^{\,\mathrm{!}}\)._
Proof.: Except for the second part of item 3, Items 1 to 6 follow directly from the corresponding items of lemma 4.4 by linearity, using lemma 3.9.
If \(M\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\) and \(\vec{x}\not\in\mathcal{V}_{\mathrm{s}}(M)\), we can write \(M=\lambda\vec{x}.A\), and we obtain \(\lambda\vec{x}.M\,\vec{x}^{\,\mathrm{!}}\rightsquigarrow M\) by item 6 and lemma 3.9.
Finally, for any sequence \(\vec{M}\) of value vectors such that \(\mathcal{V}_{\mathrm{s}}(\vec{M})\) is finite, lemma 3.5 gives \(\vec{x}^{\,\mathrm{!}}\{\vec{M}/\vec{x}\}=\vec{x}^{\,\mathrm{!}}[\vec{M}^{ \,\mathrm{!}}/\vec{x}]\) and we conclude by item 5.
### Taylor expansion
Write \(\Lambda\) for the set of pure, ordinary \(\lambda\)-terms, that we denote by letters \(M,N,P\). We define the **structural Taylor expansion**\(\mathcal{T}_{\eta}(M)\) of an ordinary \(\lambda\)-term \(M\) by induction on \(M\) as follows:
\[\mathcal{T}_{\eta}(x) \coloneqq x^{\eta}\] \[\mathcal{T}_{\eta}(\lambda x.M) \coloneqq\lambda x.\mathcal{T}_{\eta}(M)\] \[\mathcal{T}_{\eta}(M\,N) \coloneqq\lambda\vec{y}.\mathcal{T}_{\eta}(M)\,\mathcal{T}_{\eta}(N)^{!}::\vec{y}^{\,!}\]
where \(\vec{y}\) is chosen fresh in the application case.
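For instance, unfolding the definition (with \(\vec{z}\) fresh):
\[\mathcal{T}_{\eta}(\lambda x.x)=\lambda x.x^{\eta}\qquad\text{and}\qquad\mathcal{T}_{\eta}(x\,y)=\lambda\vec{z}.x^{\eta}\,(y^{\eta})^{!}::\vec{z}^{\,!}\;,\]
and the second expansion is not normal: every element of \(x^{\eta}\) is an abstraction, so applying it to a bag of elements of \(y^{\eta}\) and a stream creates a redex, even though \(x\,y\) is a \(\beta\)-normal \(\lambda\)-term. This is the defect that the head expansion defined next avoids.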
The **head Taylor expansion**\(\mathcal{T}_{h}(M)\) is defined inductively on the head structure of \(M\in\Lambda\), for which we first need to introduce some notations. Given a sequence \(\vec{N}=\langle N_{1},\ldots,N_{k}\rangle\) of \(\lambda\)-terms, we write the **iterated application**\(M\,\vec{N}\coloneqq M\,N_{1}\cdots N_{k}\). Similarly, if \(\vec{N}=\langle\bar{N}_{1},\ldots,\bar{N}_{k}\rangle\in\mathbb{K}\langle\Delta_{\mathrm{i}}\rangle^{k}\) is a sequence of bag vectors and \(\vec{M}\in\mathbb{K}\langle\Delta_{\mathrm{s}}\rangle\) is a stream vector, we write the **concatenation**\(\vec{N}\,\vec{M}\coloneqq\bar{N}_{1}::\cdots::\bar{N}_{k}::\vec{M}\).
Recall that if \(M\) is a \(\lambda\)-term, then:
* either \(M\) is an abstraction;
* or we can write \(M=x\,\vec{N}\);
* or we can write \(M=P\,\vec{N}\) where \(P\) is an abstraction and \(\vec{N}\neq\varepsilon\).
Then we define:
\[\mathcal{T}_{h}(\lambda x.M) \coloneqq\lambda x.\mathcal{T}_{h}(M)\] \[\mathcal{T}_{h}(x\,\vec{N}) \coloneqq\lambda\vec{y}.x\,\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{\,!}\] \[\mathcal{T}_{h}(M\,\vec{N}) \coloneqq\lambda\vec{y}.\mathcal{T}_{h}(M)\,\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{\,!}\quad\text{if }M\text{ is an abstraction and }\vec{N}\neq\varepsilon\]
where \(\mathcal{T}_{h}^{!}(\langle M_{1},\ldots,M_{k}\rangle)\coloneqq\langle\mathcal{T}_{ h}(M_{1})^{!},\ldots,\mathcal{T}_{h}(M_{k})^{!}\rangle\) -- we choose \(\vec{y}\) fresh in the last two cases. We may also write \(\mathcal{T}_{h}(\langle M_{1},\ldots,M_{k}\rangle)\coloneqq\langle\mathcal{T}_ {h}(M_{1}),\ldots,\mathcal{T}_{h}(M_{k})\rangle\). Observe that \(\mathcal{T}_{h}(x)=\mathcal{T}_{h}(x\,\varepsilon)=\lambda\vec{y}.x\, \varepsilon\,\vec{y}^{!}=x^{\eta}=\mathcal{T}_{\eta}(x)\).
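By contrast, on the same example the head expansion produces no redex:
\[\mathcal{T}_{h}(x\,y)=\lambda\vec{z}.x\,\big(\mathcal{T}_{h}(y)^{!}::\vec{z}^{\,!}\big)=\lambda\vec{z}.x\,\big((y^{\eta})^{!}::\vec{z}^{\,!}\big)\;,\]
whose elements are all normal resource terms, in accordance with the discussion opening this section.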
Finally, if \(\vec{x}=\langle x_{1},\ldots,x_{k}\rangle\) we write \(\lambda\vec{x}.M\coloneqq\lambda x_{1}.\cdots\lambda x_{k}.M\) for \(M\) a \(\lambda\)-term or a value vector, so that: \(\mathcal{T}_{\eta}(\lambda\vec{x}.M)=\lambda\vec{x}.\mathcal{T}_{\eta}(M)\) and \(\mathcal{T}_{h}(\lambda\vec{x}.M)=\lambda\vec{x}.\mathcal{T}_{h}(M)\).
**Lemma 4.6**.: _For every \(\lambda\)-term \(M\), and every \(\vec{N}\in\Lambda^{k}\) such that \(\vec{y}\not\in\mathcal{V}_{\mathrm{s}}(M)\cup\mathcal{V}_{\mathrm{s}}(\vec{N})\), \(\lambda\vec{y}.\mathcal{T}_{h}(M)\,\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{!} \rightsquigarrow^{\star}\mathcal{T}_{h}(M\,\vec{N})\)._
Proof.: If \(k=0\), then we conclude directly by lemma 4.5. If \(M\) is an abstraction and \(k>0\), then we apply the reflexivity of \(\rightsquigarrow^{\ast}\).
If \(M=z\,\vec{P}\) then
\[\mathcal{T}_{h}(M)=\lambda\vec{y}.z\,\mathcal{T}_{h}^{!}(\vec{P})\,\vec{y}^{!}\]
hence
\[\lambda\vec{y}.\mathcal{T}_{h}(M)\,\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{\,!} \rightsquigarrow\lambda\vec{y}.(z\,\mathcal{T}_{h}^{!}(\vec{P})\,\vec{y}^{\,!})[\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{\,!}/\vec{y}] \qquad\text{by lemma 3.9}\] \[=\lambda\vec{y}.(z\,\mathcal{T}_{h}^{!}(\vec{P})\,\vec{y}^{\,!})\{\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{\,!}/\vec{y}\} \qquad\text{by lemma 3.4}\] \[=\lambda\vec{y}.z\,\mathcal{T}_{h}^{!}(\vec{P})\,\vec{y}^{\,!}\{\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{\,!}/\vec{y}\}\] \[\rightsquigarrow\lambda\vec{y}.z\,\mathcal{T}_{h}^{!}(\vec{P})\,\mathcal{T}_{h}^{!}(\vec{N})\,\vec{y}^{\,!} \qquad\text{by lemma 4.5}\] \[=\mathcal{T}_{h}(z\,\vec{P}\,\vec{N}) \qquad\text{by definition.}\]
If \(M=M^{\prime}\,\vec{P}\) where \(M^{\prime}\) is an abstraction and \(|\vec{P}|>0\), then
\[\mathcal{T}_{h}(M)=\lambda\vec{y}.\mathcal{T}_{h}(M^{\prime})\,\mathcal{T}_{h}^{!}(\vec{P})\,\vec{y}^{\,!}\]
and we reason as in the previous case.
**Theorem 4.7**.: _For every \(\lambda\)-term \(M\), \(\mathcal{T}_{\eta}(M)\rightsquigarrow^{\ast}\mathcal{T}_{h}(M)\)._
Proof.: We reason by induction on \(M\).
If \(M=x\in\mathcal{V}\), we have already observed that \(\mathcal{T}_{h}(M)=x^{\eta}=\mathcal{T}_{\eta}(M)\).
If \(M\) is an abstraction, we apply the induction hypothesis and conclude by lemma 3.9.
Otherwise, \(M=N\,P\) so that:
\[\mathcal{T}_{\eta}(M)=\lambda\vec{y}.\mathcal{T}_{\eta}(N)\,\mathcal{T}_{\eta}(P)^{!}::\vec{y}^{\,!}\rightsquigarrow^{*}\lambda\vec{y}.\mathcal{T}_{h}(N)\,\mathcal{T}_{h}(P)^{!}::\vec{y}^{\,!}\rightsquigarrow^{*}\mathcal{T}_{h}(N\,P)=\mathcal{T}_{h}(M)\;,\]
the first reduction by the induction hypothesis and lemmas 3.9 and 3.11, and the second by lemma 4.6.

A similar argument, relying on lemma 4.5, shows that extensional Taylor expansion also absorbs \(\eta\)-reduction steps:

**Theorem 4.8**.: _If \(M\to_{\eta}M^{\prime}\) then \(\mathcal{N}(\mathcal{T}_{\eta}(M))=\mathcal{N}(\mathcal{T}_{\eta}(M^{\prime}))\)._
**Lemma 4.9**.: _For all \(\lambda\)-terms \(M\) and \(N\), \(\mathcal{T}_{\eta}(M)\{\mathcal{T}_{\eta}(N)/x\}\rightsquigarrow^{*}\mathcal{T}_{ \eta}(M\{N/x\})\)._
Proof.: The proof is by induction on \(M\).
If \(M=x\) then
\[\mathcal{T}_{\eta}(M)\{\mathcal{T}_{\eta}(N)/x\}=x^{\eta}\{\mathcal{T}_{\eta}( N)/x\}\rightsquigarrow^{*}\mathcal{T}_{\eta}(N)\]
by lemma 4.5 and we conclude since \(M\{N/x\}=N\).
If \(M=y\neq x\) then
\[\mathcal{T}_{\eta}(M)\{\mathcal{T}_{\eta}(N)/x\}=y^{\eta}\{\mathcal{T}_{\eta}( N)/x\}=y^{\eta}=\mathcal{T}_{\eta}(y)\]
and we conclude since \(M\{N/x\}=y\).
If \(M=\lambda z.M^{\prime}\) then
\[\mathcal{T}_{\eta}(M)\{\mathcal{T}_{\eta}(N)/x\} =(\lambda z.\mathcal{T}_{\eta}(M^{\prime}))\{\mathcal{T}_{\eta}(N)/x\}\] \[=\lambda z.\mathcal{T}_{\eta}(M^{\prime})\{\mathcal{T}_{\eta}(N)/x\}\] \[\rightsquigarrow^{*}\lambda z.\mathcal{T}_{\eta}(M^{\prime}\{N/x\}) \qquad\text{ by ind. hyp. and lemma 3.9}\] \[=\mathcal{T}_{\eta}(M\{N/x\})\;.\]
If \(M=M^{\prime}\,M^{\prime\prime}\) then
\[\mathcal{T}_{\eta}(M)\{\mathcal{T}_{\eta}(N)/x\} =(\lambda\vec{y}.\mathcal{T}_{\eta}(M^{\prime})\,(\mathcal{T}_{\eta}(M^{\prime\prime}))^{!}::\vec{y}\,^{!})\{\mathcal{T}_{\eta}(N)/x\}\] \[=\lambda\vec{y}.(\mathcal{T}_{\eta}(M^{\prime})\{\mathcal{T}_{\eta}(N)/x\})\,(\mathcal{T}_{\eta}(M^{\prime\prime})\{\mathcal{T}_{\eta}(N)/x\})^{!}::\vec{y}\,^{!}\qquad\text{by lemma 3.5}\] \[\rightsquigarrow^{*}\lambda\vec{y}.(\mathcal{T}_{\eta}(M^{\prime}\{N/x\}))\,\mathcal{T}_{\eta}(M^{\prime\prime}\{N/x\})^{!}::\vec{y}\,^{!}\qquad\text{by ind. hyp. and lemmas 3.9 and 3.11}\] \[=\mathcal{T}_{\eta}(M^{\prime}\{N/x\}\,M^{\prime\prime}\{N/x\})=\mathcal{T}_{\eta}(M\{N/x\})\;.\]
**Theorem 4.10**.: _If \(M\to_{\beta}M^{\prime}\) then \(\mathcal{T}_{\eta}(M)\rightsquigarrow^{*}\mathcal{T}_{\eta}(M^{\prime})\)._
Proof.: In the case of a redex, \(M=(\lambda x.N)\,P\), we have
\[\mathcal{T}_{\eta}(M) =\lambda\vec{y}.(\lambda x.\mathcal{T}_{\eta}(N))\,\mathcal{T}_{\eta}(P)^{!}::\vec{y}\,^{!}\] \[\rightsquigarrow^{*}\lambda\vec{y}.\mathcal{T}_{\eta}(N)[\mathcal{T}_{\eta}(P)^{!}/x]\,\vec{y}\,^{!}\qquad\text{by lemma 3.9}\] \[=\lambda\vec{y}.\mathcal{T}_{\eta}(N)\{\mathcal{T}_{\eta}(P)/x\}\,\vec{y}\,^{!}\qquad\text{by lemma 3.3}\] \[\rightsquigarrow^{*}\lambda\vec{y}.\mathcal{T}_{\eta}(N\{P/x\})\,\vec{y}\,^{!}\qquad\text{by lemma 4.9}\] \[\rightsquigarrow^{*}\mathcal{T}_{\eta}(N\{P/x\})\qquad\text{by lemma 4.5.}\]
The general case follows by induction on \(M\to_{\beta}M^{\prime}\), using lemma 3.9 in each contextuality case, plus lemma 3.11 for a reduction in argument position.
**Corollary 4.11**.: _For any \(\lambda\)-term \(M\), \(\mathcal{N}(\mathcal{T}_{\eta}(M))=\mathcal{N}(\mathcal{T}_{h}(M))\). If moreover \(M\to_{\beta}M^{\prime}\) or \(M\to_{\eta}M^{\prime}\) then \(\mathcal{N}(\mathcal{T}_{h}(M))=\mathcal{N}(\mathcal{T}_{h}(M^{\prime}))\). In particular if \(M\) is normalizable then \(\mathcal{N}(\mathcal{T}_{h}(M))=\mathcal{T}_{h}(\mathcal{N}(M))\)._
## 5 Characterization of \(\mathcal{H}^{*}\)
For each \(M\in\Lambda\), we write \(\mathcal{NT}(M)\) for \(\mathcal{N}(\mathcal{T}_{\eta}(M))\). By corollary 4.11, we also have \(\mathcal{NT}(M)=\mathcal{N}(\mathcal{T}_{h}(M))\). Write \(M=_{\mathcal{T}}M^{\prime}\) if \(\mathcal{NT}(M)=\mathcal{NT}(M^{\prime})\).
**Lemma 5.1**.: _The equivalence relation \(=_{\mathcal{T}}\) on \(\Lambda\) is an extensional \(\lambda\)-theory._
Proof.: By corollary 4.11, \(=_{\mathcal{T}}\) contains \(\to_{\beta}\) and \(\to_{\eta}\).
For each \(M\in\mathbb{K}\langle\Delta_{\mathrm{v}}\rangle\), we have \(\mathcal{N}(\lambda x.M)=\lambda x.\mathcal{N}(M)\): by lemma 3.8, it suffices to observe that \(\lambda x.M\rightsquigarrow\lambda x.\mathcal{N}(M)\), which follows from lemma 3.9. Hence, for each \(M\in\Lambda\), we have \(\mathcal{NT}(\lambda x.M)=\lambda x.\mathcal{NT}(M)\). It follows that \(\lambda x.M=_{\mathcal{T}}\lambda x.M^{\prime}\) as soon as \(M=_{\mathcal{T}}M^{\prime}\).
To show that \(=_{\mathcal{T}}\) is a congruence, it remains to check that if moreover \(N=_{\mathcal{T}}N^{\prime}\) then \(M\,N=_{\mathcal{T}}M^{\prime}\,N^{\prime}\). We have:
\[\mathcal{T}_{\eta}(M\,N) =\lambda\vec{y}.\mathcal{T}_{\eta}(M)\,\mathcal{T}_{\eta}(N)^{!}::\vec{y}\,^{!}\] \[\rightsquigarrow\lambda\vec{y}.\mathcal{NT}(M)\,\mathcal{T}_{\eta}(N)^{!}::\vec{y}\,^{!} \qquad\text{by lemmas 3.9 and 3.11}\] \[\rightsquigarrow\lambda\vec{y}.\mathcal{NT}(M)\,\mathcal{NT}(N)^{!}::\vec{y}\,^{!} \qquad\text{by lemmas 3.9 and 3.11}\]
and similarly for \(M^{\prime}\) and \(N^{\prime}\). We obtain:
\[\mathcal{NT}(M\,N)=\mathcal{N}\big{(}\lambda\vec{y}.\mathcal{NT}(M)\,\mathcal{ NT}(N)^{!}\mathrel{\mathop{:}}\vec{y}\,^{!}\big{)}=\mathcal{NT}(M^{\prime}\,N^{ \prime})\;.\]
In the remainder of this section, we show that \(=_{\mathcal{T}}\) is nothing but the greatest consistent sensible \(\lambda\)-theory \(\mathcal{H}^{*}\), that we may also denote by \(=_{\mathcal{H}^{*}}\).
### Sensibility
We first show that, like ordinary Taylor expansion, extensional Taylor expansion allows to characterize the head normalizability of \(\lambda\)-terms.
Let us write \(M\to_{\beta\mathrm{h}}M^{\prime}\) if \(M\) has a head redex and \(M^{\prime}\) is obtained by reducing it: namely, either \(M=(\lambda x.P)\,N_{0}\mathrel{\mathop{:}}\vec{N}\) and \(M^{\prime}=P\{N_{0}/x\}\,\vec{N}\); or \(M=\lambda y.P\) and \(M^{\prime}=\lambda y.P^{\prime}\) with \(P\to_{\beta\mathrm{h}}P^{\prime}\) inductively. Note that there is at most one \(M^{\prime}\) such that \(M\to_{\beta\mathrm{h}}M^{\prime}\): write
\[\mathcal{H}_{\beta}(M)\mathrel{\mathop{:}}=\begin{cases}M^{\prime}\text{ s.t. }M\to_{\beta\mathrm{h}}M^{\prime}&\text{if $M$ is head reducible}\\ M&\text{if $M$ is in head normal form}\end{cases}\;.\]
We obtain that \(M\) is **head normalizable** (the sequence of \(\to_{\beta\mathrm{h}}\) reductions starting from \(M\) is finite) iff there exists some \(k\in\mathbb{N}\) such that \(\mathcal{H}_{\beta}^{k}(M)\) is in head normal form.
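As a concrete reading of \(\mathcal{H}_{\beta}\), here is a minimal Haskell sketch (assumed code, not part of the paper) of one step of head reduction on ordinary \(\lambda\)-terms; the substitution is deliberately naive and is only adequate when no variable capture can occur.

```haskell
-- Minimal sketch of head reduction on ordinary lambda-terms (assumed code).
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution: adequate only when the argument's free variables
-- cannot be captured by binders of the term being substituted into.
subst :: String -> Term -> Term -> Term
subst x n (Var y)   | x == y    = n
                    | otherwise = Var y
subst x n (Lam y m) | x == y    = Lam y m
                    | otherwise = Lam y (subst x n m)
subst x n (App m p) = App (subst x n m) (subst x n p)

-- headStep returns Just the reduct of the head redex, or Nothing when the
-- term is already in head normal form (\x1...\xk. y N1 ... Nn).
headStep :: Term -> Maybe Term
headStep (Lam x m)         = Lam x <$> headStep m
headStep (App (Lam x m) n) = Just (subst x n m)
headStep (App m n)         = (`App` n) <$> headStep m
headStep (Var _)           = Nothing

-- hBeta is the function H_beta of the text: reduce the head redex, if any.
hBeta :: Term -> Term
hBeta m = maybe m id (headStep m)
```

Iterating `hBeta` until `headStep` returns `Nothing` terminates exactly on head normalizable terms, mirroring the characterization above.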
**Theorem 5.2**.: _For every \(\lambda\)-term \(M\), the following are equivalent:_
1. \(M\) _is_ \(\beta\eta\)_-equivalent to a head normal form;_
2. \(\mathcal{NT}(M)\neq 0\)_;_
3. \(M\) _is head normalizable._
The equivalence between item 1 and item 3 is well known but we keep them separate to fit the structure of the proof: the implication from item 3 to item 1 is trivial, and we prove separately the implications from item 1 to item 2 then from item 2 to item 3.
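For instance, writing \(\Omega\coloneqq(\lambda x.x\,x)\,(\lambda x.x\,x)\), the head reduction sequence from \(\Omega\) never terminates, so theorem 5.2 gives \(\mathcal{NT}(\Omega)=0=\mathcal{NT}(\lambda x.\Omega)\): as expected of a sensible theory, \(=_{\mathcal{T}}\) identifies all such unsolvable terms.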
The first one is easy:
**Lemma 5.3**.: _If \(M\) is \(\beta\eta\)-equivalent to a head normal form then \(\mathcal{NT}(M)\neq 0\)._
Proof.: If \(M\) is \(\beta\eta\)-equivalent to a head normal form \(M^{\prime}\), we can write \(M^{\prime}=\lambda\vec{x}[0]\dotsm\lambda\vec{x}[k-1].y\,\vec{P}\), then observe that \(m\coloneqq\lambda\vec{x}.y\,\iota\in\mathcal{T}_{h}(M^{\prime})\). By corollary 4.11, \(\mathcal{NT}(M)=\mathcal{NT}(M^{\prime})\ni m\).
For the other implication, we need some results on head reduction in the resource calculus.
Similarly to the ordinary \(\lambda\)-calculus, we write \(a\mapsto_{\mathrm{Rh}}a^{\prime}\) if \(a\) is a **head redex** and \(a^{\prime}\) is obtained by reducing it: namely, \(a=(\lambda\vec{x}.c)\,\vec{n}\) and \(a^{\prime}=c\{\vec{n}/\vec{x}\}\). We then write
\[\mathcal{H}_{\mathrm{R}}(a)\coloneqq\begin{cases}a^{\prime}\text{ s.t. }a\mapsto_{\mathrm{Rh}}a^{\prime}&\text{if $a$ is a head redex}\\ a&\text{otherwise}\end{cases}\;.\]
On value terms, we set \(m\mapsto_{\mathrm{Rh}}m^{\prime}\) if \(m=\lambda\vec{y}.c\) and \(m^{\prime}=\lambda\vec{y}.c^{\prime}\) with \(c\mapsto_{\mathrm{Rh}}c^{\prime}\), and \(\mathcal{H}_{\mathrm{R}}(\lambda\vec{y}.c)\coloneqq\lambda\vec{y}.\mathcal{H} _{\mathrm{R}}(c)\): in this case, we say \(m\) is **head reducible** and \(c\) is the **head redex of \(m\)**; otherwise we say \(m\) is in **head normal form**. By lemma 2.14, if \(m\mapsto_{\mathrm{Rh}}M^{\prime}\ni m^{\prime}\) then \(\|m^{\prime}\|<\|m\|\): it follows that \(m\) is in head normal form iff \(\mathcal{H}_{\mathrm{R}}(m)=m\) (by contrast with the ordinary \(\lambda\)-calculus, where only the forward implication holds). It should be clear that if \(m\in\mathcal{T}_{h}(M)\), then \(m\) is in head normal form iff \(M\) is in head normal form (notice that this is no longer the case if we assume \(m\in\mathcal{T}_{\eta}(M)\) instead). Finally, we write
\[\mathcal{H}_{\mathrm{R}}(M)\coloneqq\sum_{m\in M}M_{\mathbb{R}m}\mathcal{H}_{ \mathrm{R}}(m)\]
for every \(M\in\mathbb{K}\langle\Delta_{\mathrm{t}}\rangle\).
**Lemma 5.4**.: _If \(A=(\lambda x_{1}.\dotsm\lambda x_{k}.\lambda\vec{y}.B)\,\bar{N}_{1}\,\,\,::\, \,\dotsm::\,\bar{N}_{k}\,::\,\vec{P}\) then \(\mathcal{H}_{\mathrm{R}}(A)=B[\bar{N}_{1}/x_{1}]\dotsm[\bar{N}_{k}/x_{k}][\vec{ P}/\vec{y}]\)._
Proof.: By linearity, it is sufficient to consider the case of a base term:
\[A=a=(\lambda x_{1}.\dotsm\lambda x_{k}.\lambda\vec{y}.b)\,\bar{n}_{1}\,\,::\, \dotsm::\,\bar{n}_{k}\,::\,\vec{p}\,\in\Delta_{\mathrm{b}}\;.\]
By definition,
\[a=\left(\lambda\vec{y}.b[\vec{y}\,\uparrow]\{\vec{y}[0]/x_{k}\}\dotsm[\vec{y} \,\uparrow]\{\vec{y}[0]/x_{0}\}\right)\bar{n}_{1}\,\,::\,\dotsm::\,\bar{n}_{k} \,::\,\vec{p}\]
hence
\[\mathcal{H}_{\mathrm{R}}(a)=\big{(}b[\vec{y}\,\uparrow]\{\vec{y}[0]/x_{k}\}\cdots[ \vec{y}\,\uparrow]\{\vec{y}[0]/x_{0}\}\big{)}[\bar{n}_{1}:=\cdots:\bar{n}_{k}: \bar{n}_{\bar{p}}/\vec{y}]\]
and we conclude by iterating lemma 2.13.
We also establish a variant of lemma 4.9 for \(\mathcal{T}_{h}(-)\):
**Lemma 5.5**.: _For all \(\lambda\)-terms \(M\) and \(N\), \(\mathcal{T}_{h}(M)\{\mathcal{T}_{h}(N)/x\}\rightsquigarrow^{*}\mathcal{T}_{h}(M \{N/x\})\)._
Proof.: The proof is by induction on \(M\). The case of \(\lambda x.N\) is settled by applying the induction hypothesis to \(N\) as in the proof of lemma 4.9.
If \(M=M^{\prime}\,\vec{P}\) where \(M^{\prime}\) is an abstraction and \(|\vec{P}|>0\) then
\[\mathcal{T}_{h}(M)\{\mathcal{T}_{h}(N)/x\}=\lambda\vec{y}.\big{(}\mathcal{T} _{h}(M^{\prime})\{\mathcal{T}_{h}(N)/x\}\big{)}\,\big{(}\mathcal{T}_{h}^{!}( \vec{P})\{\mathcal{T}_{h}(N)/x\}\,\vec{y}\,^{!}\big{)}\]
where, by lemma 3.5,
\[\mathcal{T}_{h}^{!}(\vec{P})\{\mathcal{T}_{h}(N)/x\}\,\vec{y}\,^{!}=(\mathcal{ T}_{h}(P_{0})\{\mathcal{T}_{h}(N)/x\})^{!}:\cdots:=(\mathcal{T}_{h}(P_{k})\{ \mathcal{T}_{h}(N)/x\})^{!}:\bar{y}\,^{!}\]
if \(\vec{P}=\langle P_{0},\ldots,P_{k}\rangle\). We obtain
\[\mathcal{T}_{h}(M)\{\mathcal{T}_{h}(N)/x\}\rightsquigarrow^{*}\lambda\vec{y}. \big{(}\mathcal{T}_{h}(M^{\prime}\{N/x\})\big{)}\,\big{(}\mathcal{T}_{h}^{!}( \vec{P}\{N/x\})\,\vec{y}\,^{!}\big{)}\]
by applying the induction hypothesis to \(M^{\prime}\) and to each \(P_{i}\), and then lemmas 3.9 and 3.11. We conclude, observing that \(M^{\prime}\{N/x\}\) is an abstraction and \(|\vec{P}\{N/x\}|>0\).
If \(M=z\,\vec{P}\) with \(z\neq x\) then
\[\mathcal{T}_{h}(M)\{\mathcal{T}_{h}(N)/x\}=\lambda\vec{y}.z\,\big{(}\mathcal{ T}_{h}^{!}(\vec{P})\{\mathcal{T}_{h}(N)/x\}\,\vec{y}\,^{!}\big{)}\]
and we obtain
\[\mathcal{T}_{h}(M)\{\mathcal{T}_{h}(N)/x\}\rightsquigarrow^{*}\lambda\vec{y}. z\,\big{(}\mathcal{T}_{h}^{!}(\vec{P}\{N/x\})\,\vec{y}\,^{!}\big{)}\]
as in the previous case, and then conclude, by definition of \(\mathcal{T}_{h}(-)\).
If \(M=x\,\vec{P}\) then
\[\mathcal{T}_{h}(M)\{\mathcal{T}_{h}(N)/x\}=\lambda\vec{y}.\mathcal{T}_{h}(N) \,\big{(}\mathcal{T}_{h}^{!}(\vec{P})\{\mathcal{T}_{h}(N)/x\}\,\vec{y}\,^{!} \big{)}\]
we obtain
\[\mathcal{T}_{h}(M)\{\mathcal{T}_{h}(N)/x\}\rightsquigarrow^{*}\lambda\vec{y}. \mathcal{T}_{h}(N)\,\big{(}\mathcal{T}_{h}^{!}(\vec{P}\{N/x\})\,\vec{y}\,^{!} \big{)}\]
as in the previous case, then conclude by lemma 4.6.
**Lemma 5.6**.: _For every \(M\in\Lambda\), there exists \(k\in\mathbb{N}\) such that \(\mathcal{H}_{\mathrm{R}}(\mathcal{T}_{h}(M))\rightsquigarrow^{*}\mathcal{T}_{h} (\mathcal{H}_{\beta}^{k}(M))\)_
Proof.: We reason by induction on \(M\). If \(M=\lambda y.N\) then we apply the induction hypothesis to \(N\), and conclude by lemma 3.9 and the definition of each operator. If \(M\) is in head normal form, then each \(m\in{\mathcal{T}}_{h}(M)\) is in head normal form too, so that \({\mathcal{H}}_{\rm R}({\mathcal{T}}_{h}(M))={\mathcal{T}}_{h}(M)\), and we conclude directly.
The only remaining case is that of \(M=P\,\vec{N}\) where \(P\) is an abstraction and \(\vec{N}=\langle N_{0},\ldots,N_{l}\rangle\) is a non empty sequence of \(\lambda\)-terms. We write \(P=\lambda\vec{x}.P^{\prime}\) where \(\vec{x}=\langle x_{0},\ldots x_{k}\rangle\) is a non empty tuple of variables and \(P^{\prime}\) is not an abstraction: either \(P^{\prime}=P^{\prime\prime}\,\vec{N}^{\prime}\) with \(P^{\prime\prime}\) an abstraction and \(|\vec{N}^{\prime}|>0\); or \(P^{\prime}=z\,\vec{N}^{\prime}\). By the definition of \({\mathcal{T}}_{h}(-)\), we can write
\[{\mathcal{T}}_{h}(P^{\prime})=\lambda\vec{z}.E\,{\mathcal{T}}_{h}^{!}(\vec{N} ^{\prime})\vec{z}^{!}\]
where \(E=z\) or \(E={\mathcal{T}}_{h}(P^{\prime\prime})\), and \(\vec{z}\not\in{\mathcal{V}}_{\rm s}(E)\cup{\mathcal{V}}_{\rm s}(\vec{N}^{ \prime})\). Then (assuming moreover that \(\vec{y}\not\in{\mathcal{V}}_{\rm s}(M)\) and \(\vec{y}\not=\vec{z}\)):
\[{\mathcal{T}}_{h}(M) =\lambda\vec{y}.{\mathcal{T}}_{h}(P)\,{\mathcal{T}}_{h}^{!}(\vec{ N})\,\vec{y}^{!}\] \[=\lambda\vec{y}.(\lambda\vec{x}.\lambda\vec{z}.E\,{\mathcal{T}}_{ h}^{!}(\vec{N}^{\prime})\vec{z}^{!})\,{\mathcal{T}}_{h}^{!}(\vec{N})\,\vec{y}^{!}\;.\]
If \(k\leq l\) then we write \(\vec{N}^{\prime\prime}\coloneqq\langle N_{1},\ldots,N_{k}\rangle\) and \(\vec{N}^{\prime\prime\prime}\coloneqq\langle N_{k+1},\ldots,N_{l}\rangle\), and we obtain \({\mathcal{H}}_{\beta}^{k}(M)=(P^{\prime}\{\vec{N}^{\prime\prime}/\vec{x}\})\, \vec{N}^{\prime\prime\prime}\). In this case, we obtain
\[{\mathcal{H}}_{\rm R}({\mathcal{T}}_{h}(M)) =\lambda\vec{y}.(E\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime})\,\vec{ z}^{!})[{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime\prime})/\vec{x}][{\mathcal{T}}_{h}^ {!}(\vec{N}^{\prime\prime\prime})\,\vec{y}^{!}/\vec{z}]\] \[\quad\quad\text{by lemma \ref{lem:2.1}}\] \[=\lambda\vec{y}.(E\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime})\,\vec{ z}^{!})\{{\mathcal{T}}_{h}(\vec{N}^{\prime\prime})/\vec{x}\}[{\mathcal{T}}_{h}^{!}( \vec{N}^{\prime\prime\prime})\,\vec{y}^{!}/\vec{z}]\] \[\quad\quad\text{by iterating lemma \ref{lem:2.1}}\] \[=\lambda\vec{y}.E\{{\mathcal{T}}_{h}(\vec{N}^{\prime\prime})/\vec{ x}\}\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime})\{{\mathcal{T}}_{h}(\vec{N}^{ \prime\prime})/\vec{x}\}\,\vec{z}^{!}[{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime \prime\prime})\,\vec{y}^{!}/\vec{z}]\] \[\quad\quad\text{since $\vec{z}$ is fresh}\] \[\leadsto\lambda\vec{y}.E\{{\mathcal{T}}_{h}(\vec{N}^{\prime\prime})/ \vec{x}\}\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime})\{{\mathcal{T}}_{h}(\vec{N}^ {\prime\prime})/\vec{x}\}\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime\prime\prime} )\,\vec{y}^{!}\] \[\quad\quad\text{by lemmas \ref{lem:2.1} and \ref{lem:2.1}}\] \[\leadsto^{*}\lambda\vec{y}.E\{{\mathcal{T}}_{h}(\vec{N}^{\prime \prime})/\vec{x}\}\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime}\{\vec{N}^{\prime \prime}/\vec{x}\})\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime\prime\prime})\,\vec{ y}^{!}\] \[\quad\quad\text{by iterating lemmas \ref{lem:2.1}, \ref{lem:2.1}, \ref{lem:2.1} and \ref{lem:2.1}}\] \[=\lambda\vec{y}.E\{{\mathcal{T}}_{h}(\vec{N}^{\prime\prime})/\vec{ x}\}\,{\mathcal{T}}_{h}^{!}(\vec{N}^{\prime}\{\vec{N}^{\prime\prime}/\vec{x}\} \,\vec{N}^{\prime\prime\prime})\,\vec{y}^{!}\;.\]
If \(P^{\prime}=z\,\vec{N}^{\prime}\) with \(z\not\in\vec{x}\) then \(E\{{\mathcal{T}}_{h}(\vec{N}^{\prime\prime})/\vec{x}\}=z\) and we conclude since \({\mathcal{H}}_{\beta}^{k}(M)=z\,\vec{N}^{\prime}\{\vec{N}^{\prime\prime}/\vec{x} \}\,\vec{N}^{\prime\prime\prime}\). If \(P^{\prime}=x_{i}\,\vec{N}^{\prime}\) then \(E\{{\mathcal{T}}_{h}(\vec{N}^{\prime\prime})/\vec{x}\}={\mathcal{T}}_{h}(N_{i})\) and we conclude by lemma 4.6 since \({\mathcal{H}}_{\beta}^{k}(M)=N_{i}\,\vec{N}^{\prime}\{\vec{N}^{\prime\prime}/\vec{ x}\}\,\vec{N}^{\prime\prime\prime}\). And if \(P^{\prime}=P^{\prime\prime}\,\vec{N}^{\prime}\) with \(P^{\prime\prime}\) an abstraction then \(E={\mathcal{T}}_{h}(P^{\prime\prime})\) hence \(E\{{\mathcal{T}}_{h}(\vec{N}^{\prime\prime})/\vec{x}\}\leadsto^{*}{\mathcal{T}}_{ h}(P^{\prime\prime}\{\vec{N}^{\prime\prime}/\vec{x}\})\) by lemma 5.5, hence
\[{\mathcal{H}}_{\rm R}({\mathcal{T}}_{h}(M))\leadsto^{*}\lambda\vec{y}.{\mathcal{T}}_{ h}(P^{\prime\prime}\{\vec{N}^{\prime\prime}/\vec{x}\})\,{\mathcal{T}}_{h}^{!}(\vec{N}^{ \prime}\{\vec{N}^{\prime\prime}/\vec{x}\}\,\vec{N}^{\prime\prime\prime})\,\vec{y}^{!}\]
by lemma 3.9, and we conclude again by lemma 4.6 since
\[{\mathcal{H}}_{\beta}^{k}(M)=\vec{P}^{\prime\prime}\{\vec{N}^{\prime\prime}/\vec{x} \}\,\vec{N}^{\prime}\{\vec{N}^{\prime\prime}/\vec{x}\}\,\vec{N}^{\prime\prime\prime }\;.\]
Now if \(k>l\), then we write \(\vec{x}^{\prime}\coloneqq\langle x_{1},\ldots,x_{l}\rangle\) and \(\vec{x}^{\prime\prime}\coloneqq\langle x_{l+1},\ldots,x_{k}\rangle\), so that \(\mathcal{H}^{l}_{\beta}(M)=\lambda\vec{x}^{\prime\prime}.P^{\prime}\{\vec{N}/\vec{x}^{\prime}\}\). In this case, up to \(\alpha\)-equivalence, we can assume \(\vec{y}[i]=x_{i+l+1}\) for \(0\leq i\leq k-l-1\). Then we obtain:
\[\mathcal{H}_{\mathrm{R}}(\mathcal{T}_{h}(M)) =\lambda\vec{y}.(E\,\mathcal{T}^{l}_{h}(\vec{N}^{\prime})\,\vec{ z}^{!})[\mathcal{T}^{l}_{h}(\vec{N})/\vec{x}^{\prime}][x^{!}_{l+1}/x_{l+1}] \cdots[x^{!}_{k}/x_{k}][\vec{y}^{!}[\vec{y}^{\prime}\!]^{k-l}/\vec{z}]\] by lemmas 4.1 and 5.4 \[=\lambda\vec{y}.(E\,\mathcal{T}^{l}_{h}(\vec{N}^{\prime})\,\vec{ z}^{!}[\vec{y}^{!}[\vec{y}\!\uparrow]^{k-l}/\vec{z}])[\mathcal{T}^{l}_{h}( \vec{N})/\vec{x}^{\prime}][x^{!}_{l+1}/x_{l+1}]\cdots[x^{!}_{k}/x_{k}]\] since \(\vec{z}\) is fresh \[\leadsto^{*}\lambda\vec{y}.(E\,\mathcal{T}^{l}_{h}(\vec{N}^{ \prime})\,\vec{z}^{!}[\vec{y}^{!}[\vec{y}\!\uparrow]^{k-l}/\vec{z}])[\mathcal{T }^{l}_{h}(\vec{N})/\vec{x}^{\prime}]\] by iterating lemma 4.5 \[=\lambda\vec{y}.(E\,\mathcal{T}^{l}_{h}(\vec{N}^{\prime})\,\vec{ z}^{!}[\vec{y}^{!}[\vec{y}^{!}[\vec{y}\!\uparrow]^{k-l}/\vec{z}])\{\mathcal{T} _{h}(\vec{N})/\vec{x}^{\prime}\}\] by iterating lemma 3.4 \[=\lambda\vec{y}.E\{\mathcal{T}_{h}(\vec{N})/\vec{x}^{\prime}\} \,\mathcal{T}^{l}_{h}(\vec{N}^{\prime})\{\mathcal{T}_{h}(\vec{N})/\vec{x}^{ \prime}\}\,\vec{z}^{!}[\vec{y}^{!}[\vec{y}^{!}[\vec{y}\!\uparrow]^{k-l}/\vec{ z}]\] since \(\vec{y}\) is fresh \[\leadsto\lambda\vec{y}.E\{\mathcal{T}_{h}(\vec{N})/\vec{x}^{ \prime}\}\,\mathcal{T}^{l}_{h}(\vec{N}^{\prime})\{\mathcal{T}_{h}(\vec{N})/ \vec{x}^{\prime}\}\,\vec{y}^{!}[\vec{y}\!\uparrow]^{k-l}\] by lemma 4.5 \[=\lambda\vec{y}.(E\{\mathcal{T}_{h}(\vec{N})/\vec{x}^{\prime}\} \,\mathcal{T}^{l}_{h}(\vec{N}^{\prime})\{\mathcal{T}_{h}(\vec{N})/\vec{x}^{ \prime}\}\,\vec{y}^{!})[\vec{y}\!\uparrow]^{k-l}\] since \(\vec{y}\) is fresh \[=\lambda\vec{x}^{\prime\prime}.\lambda\vec{y}.E\{\mathcal{T}_{h}( \vec{N})/\vec{x}^{\prime}\}\,\mathcal{T}^{l}_{h}(\vec{N}^{\prime})\{\mathcal{T }_{h}(\vec{N})/\vec{x}^{\prime}\}\,\vec{y}^{!}\] since \(\vec{y}[i]=x_{i+l+1}\) for \(0\leq i\leq k-l-1\) \[\leadsto^{*}\lambda\vec{x}^{\prime\prime}.\lambda\vec{y}.E\{ \mathcal{T}_{h}(\vec{N})^{!}/\vec{x}^{\prime}\}\,\mathcal{T}^{l}_{h}(\vec{N}^ {\prime}\{\vec{N}/\vec{x}^{\prime}\})\,\vec{y}^{!}\] by iterating lemmas 3.3, 3.9, 3.11 and 5.5.
It will thus be sufficient to establish that
\[\lambda\vec{y}.E\{\mathcal{T}_{h}(\vec{N})^{!}/\vec{x}^{\prime}\}\,\mathcal{T }^{t}_{h}(\vec{N}^{\prime})\{\mathcal{T}_{h}(\vec{N})^{!}/\vec{x}^{\prime}\} \,\vec{y}^{!}\leadsto^{*}\mathcal{T}_{h}(P^{\prime}\{\vec{N}/\vec{x}^{\prime}\})\]
which is done by inspecting the shape of \(P^{\prime}\), similarly to the case of \(k\leq l\).
**Lemma 5.7**.: _If \(\mathcal{NT}(M)\neq 0\) then \(M\) is head normalizable._
Proof.: If \(\mathcal{NT}(M)\neq 0\) then there exists \(m\in\mathcal{T}_{h}(M)\) such that \(\mathcal{N}(m)\neq 0\). The proof is by induction on \(\|m\|\). If \(m\) is in head normal form, then \(M\) is in head normal form too and we conclude directly. Otherwise, \(\mathcal{N}(\mathcal{H}_{\mathrm{R}}(m))\neq 0\) and we pick \(m^{\prime}\in\mathcal{H}_{\mathrm{R}}(m)\) such that \(\mathcal{N}(m^{\prime})\neq 0\). Observe that \(m^{\prime}\in\mathcal{H}_{\mathrm{R}}(\mathcal{T}_{h}(M))\): by lemma 5.6, we obtain \(m^{\prime}\to_{\mathrm{r}}^{*}M^{\prime\prime}\subseteq\mathcal{T}_{h}(\mathcal{H}^{k}_{\beta}(M))\). Again, \(\mathcal{N}(M^{\prime\prime})=\mathcal{N}(m^{\prime})\neq 0\) and we can pick \(m^{\prime\prime}\in M^{\prime\prime}\) such that \(\mathcal{N}(m^{\prime\prime})\neq 0\). We also have \(m^{\prime\prime}\in\mathcal{T}_{h}(\mathcal{H}^{k}_{\beta}(M))\) and, moreover, \(\|m^{\prime\prime}\|\leq\|m^{\prime}\|<\|m\|\) so the induction hypothesis applies: \(\mathcal{H}^{k}_{\beta}(M)\) is head normalizable, hence \(M\) is head normalizable too.
### Bohm-out _via_ Taylor expansion
We have established that \(=_{\mathcal{T}}\) is a sensible extensional \(\lambda\)-theory. It is obviously consistent since, e.g., \(x\neq_{\mathcal{T}}y\) when \(x\neq y\).
To establish that \(=_{\mathcal{T}}\) is indeed maximum among sensible consistent \(\lambda\)-theories, we use the characterization of \(\mathcal{H}^{*}\) as the observational equivalence induced by head normal forms: writing \(\Lambda_{\mathrm{hn}}\) for the set of head normalizable \(\lambda\)-terms, we say \(M\) and \(N\) are **observationally equivalent**, and write \(M=_{\mathrm{hn}}N\), if, for every \(\lambda\)-term context \(C[\,]\), \(C[M]\in\Lambda_{\mathrm{hn}}\) iff \(C[N]\in\Lambda_{\mathrm{hn}}\). It is obvious that a sensible \(\lambda\)-theory identifying two \(=_{\mathrm{hn}}\)-distinct terms is inconsistent: it follows that all sensible consistent \(\lambda\)-theories are included in \(=_{\mathrm{hn}}\), so it remains only to prove that \(=_{\mathcal{T}}\) contains \(=_{\mathrm{hn}}\). Equivalently, we have to show that if \(M\neq_{\mathcal{T}}N\) then there is a context \(C[\,]\) such that one of \(C[M]\) and \(C[N]\) is head normalizable, and the other one is not -- in this case we say \(C[\,]\)**separates**\(M\) from \(N\).
Assuming \(M\neq_{\mathcal{T}}N\), we obtain a normal value term \(m\) such that \(m\in\mathcal{NT}(M)\setminus\mathcal{NT}(N)\) -- or _vice versa_. We show that the standard Bohm-out technique to separate \(\beta\eta\)-distinct normal \(\lambda\)-terms can be adapted to this setting, by reasoning on normal value terms instead. In particular, most of what follows is standard material about the \(\lambda\)-calculus, and only the final result relies on the properties of extensional Taylor expansion.
Following the Bohm-out technique, we will only use separating contexts corresponding to **Bohm transformations**, which are generated by composing the following basic transformations: \(N:M\mapsto M\,N\) for \(N\in\Lambda\) (corresponding to the context \([\,]\,N\)); and \(\sigma_{N}^{x}:M\mapsto M\{N/x\}\) for \(N\in\Lambda\) and \(x\in\mathcal{V}\) (corresponding to the context \((\lambda x.[\,])\,N\)). We use a postfix notation for the application of Bohm transformations and use the sequential order for their composition so that \(M\tau\rho=(M\tau)\rho\) for any Bohm transformations \(\tau\) and \(\rho\). In case a Bohm transformation \(\sigma\) is a composition of substitutions only, we may also apply it to a tuple of terms: \(\langle M_{1},\ldots,M_{k}\rangle\sigma\coloneqq\langle M_{1}\sigma,\ldots,M_{ k}\sigma\rangle\).
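For instance, composing the substitution \(\sigma^{x}_{\lambda z.z}\) with the transformation \(P\) yields the Bohm transformation \(M\mapsto M\{\lambda z.z/x\}\,P\), which corresponds to the context \(((\lambda x.[\,])\,(\lambda z.z))\,P\).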
We say \(M\) and \(N\) are **strongly separable**, and write \(M\bowtie N\), if there exists a Bohm transformation \(\tau\) such that \(M\tau=_{\beta}\mathbf{1}\) and \(N\tau=_{\beta}\mathbf{0}\). And we say \(M\) is **separable** from \(N\), and write \(M\ltimes N\), if there exists a Bohm transformation \(\tau\) such that \(M\tau\in\Lambda_{\mathrm{hn}}\) and \(N\tau\not\in\Lambda_{\mathrm{hn}}\). Note that strong separability implies separability and is symmetric.
The following are direct consequences of the definitions or basic exercises in \(\lambda\)-calculus:
**Fact 5.8**.: _We have \(M\bowtie N\) as soon as one of the following holds:_
* \(M=x\,\vec{M}\) _and_ \(N=y\,\vec{N}\) _with_ \(x\neq y\) _or_ \(|\vec{M}|\neq|\vec{N}|\)_;_
* \(M=x\,\vec{M}\) _and_ \(N=\lambda\vec{y}.y\,\vec{N}\) _with_ \(y\in\vec{y}\)_;_
* \(M=\lambda\vec{y}.\lambda x.x\,\vec{M}\) _and_ \(N=\lambda\vec{z}.\lambda x.x\,\vec{N}\) _with_ \(|\vec{y}|\neq|\vec{z}|\) _or_ \(|\vec{M}|\neq|\vec{N}|\)_._
_And we have \(M\ltimes N\) as soon as one of the following holds:_
* \(M\in\Lambda_{\mathrm{hn}}\) _and_ \(N\not\in\Lambda_{\mathrm{hn}}\)_;_
* \(M\,P\ltimes N\,P\) _for some_ \(P\in\Lambda\)_;_
* \(M\{P/x\}\ltimes N\{P/x\}\) _for some_ \(x\in\mathcal{V}\) _and_ \(P\in\Lambda\)_;_
* \(M=_{\beta\eta}M^{\prime}\ltimes N^{\prime}=_{\beta\eta}N\)_;_
* \(M=\lambda x.M^{\prime}\) _and_ \(N=\lambda x.N^{\prime}\) _with_ \(M^{\prime}\ltimes N^{\prime}\)_._
For each \(k\in\mathbb{N}\), we write \(\rho_{k}\coloneqq\lambda x_{1}.\ldots\lambda x_{k}.\lambda y.y\,x_{1}\cdots x_ {k}\in\Lambda\). For \(l\in\mathbb{N}\), \(\vec{x}=\langle x_{1},\ldots,x_{l}\rangle\in\mathcal{V}^{l}\) and \(\vec{k}=\langle k_{1},\ldots,k_{l}\rangle\in\mathbb{N}^{l}\), we define the Bohm transformation \(\sigma_{\vec{k}}^{\vec{x}}\coloneqq\sigma_{\rho_{k_{1}}}^{x_{1}}\cdots\sigma_{ \rho_{k_{l}}}^{x_{l}}\), which is a sequence of substitutions. We say \(M\) is **Bohm-separable** from \(N\), and write \(M\ltimes_{\mathcal{B}}N\), if, for each \(l\in\mathbb{N}\), each tuple \(\vec{x}\in\mathcal{V}^{l}\) of pairwise distinct variables and each tuple \(\vec{k}\in\mathbb{N}^{l}\) of pairwise distinct and sufficiently large integers, we have \(M\sigma_{\vec{k}}^{\vec{x}}\ltimes N\sigma_{\vec{k}}^{\vec{x}}\). Taking \(l=0\), Bohm-separability implies separability.
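For instance, \(\rho_{2}=\lambda x_{1}.\lambda x_{2}.\lambda y.y\,x_{1}\,x_{2}\), so that \(\rho_{2}\,N_{1}\,N_{2}\,P\to_{\beta}^{*}P\,N_{1}\,N_{2}\) for all terms \(N_{1},N_{2},P\): substituting \(\rho_{k}\) for a head variable lets a Bohm transformation collect \(k\) arguments and hand them over to a term applied afterwards, which is how the substitutions \(\sigma_{\vec{k}}^{\vec{x}}\) are exploited below.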
The results of the following lemma are, again, standard material. They adapt the setup used by Krivine for the strong separation of \(\eta\)-distinct \(\beta\)-normal forms [10, Chapter 5].
**Lemma 5.9**.: _We have \(M\ltimes_{\mathcal{B}}N\) as soon as one of the following holds:_
1. \(M\in\Lambda_{\mathrm{hn}}\) _and_ \(N\not\in\Lambda_{\mathrm{hn}}\)_;_
2. \(M=_{\beta\eta}M^{\prime}\ltimes_{\mathcal{B}}N^{\prime}=_{\beta\eta}N\)_;_
3. \(M=\lambda x.M^{\prime}\) _and_ \(N=\lambda x.N^{\prime}\) _with_ \(M^{\prime}\ltimes_{\mathcal{B}}N^{\prime}\)_;_
4. \(M=x\,\vec{M}\) _and_ \(N=y\,\vec{N}\) _with_ \(x\neq y\) _or_ \(|\vec{M}|\neq|\vec{N}|\)_;_
5. \(M=x\,M_{0}\cdots M_{k}\) _and_ \(N=x\,N_{0}\cdots N_{k}\) _with_ \(M_{i}\ltimes_{\mathcal{B}}N_{i}\) _for some_ \(i\leq k\)_._
Proof.: First observe that head reductions are preserved by Bohm transformations: if \(M\to_{\beta\mathrm{h}}M^{\prime}\) then \(M\tau\to_{\beta\mathrm{h}}M^{\prime}\tau\) for any Bohm transformation \(\tau\). It follows that \(P\tau\not\in\Lambda_{\mathrm{hn}}\) as soon as \(P\not\in\Lambda_{\mathrm{hn}}\).
We consider each hypothesis in turn, fix a tuple \(\vec{x}=\langle x_{1},\ldots,x_{l}\rangle\in\mathcal{V}^{l}\) of pairwise distinct variables and a tuple \(\vec{k}=\langle k_{1},\ldots,k_{l}\rangle\in\mathbb{N}^{l}\) of pairwise distinct integers, write \(\sigma\coloneqq\sigma_{\vec{k}}^{\vec{x}}\), and prove \(M\sigma\ltimes N\sigma\) provided the \(k_{i}\)'s are large enough.
1. Assume \(M\in\Lambda_{\mathrm{hn}}\) and \(N\not\in\Lambda_{\mathrm{hn}}\). Then we can write \(M=\lambda\vec{y}.y\,\vec{M}\) (choosing \(\vec{y}\cap\vec{x}=\emptyset\)), and we have already observed that \(N\sigma\not\in\Lambda_{\mathrm{hn}}\). If \(y\not\in\vec{x}\), then \(M\sigma=\lambda\vec{y}.y\,\vec{M}\sigma\). And if \(y=x_{i}\), we assume \(k_{i}\geq|\vec{M}|\), whence: \(M\sigma=\lambda\vec{y}.\rho_{k_{i}}\,\vec{M}\sigma\to_{\beta}^{*}\lambda\vec{y}.\lambda\vec{z}.\lambda z.z\,(\vec{M}\sigma)\vec{z}\) with \(|\vec{z}|=k_{i}-|\vec{M}|\). In both cases, we obtain \(M\sigma\in\Lambda_{\mathrm{hn}}\), hence \(M\sigma\ltimes N\sigma\).
2. Assume \(M=_{\beta\eta}M^{\prime}\ltimes_{\mathcal{B}}N^{\prime}=_{\beta\eta}N\): if the \(k_{i}\)'s are large enough then \(M^{\prime}\sigma\ltimes N^{\prime}\sigma\) and we conclude.
3. If \(M=\lambda x.M^{\prime}\), \(N=\lambda x.N^{\prime}\), we can assume \(x\not\in\vec{x}\). Then if \(M^{\prime}\ltimes_{\mathcal{B}}N^{\prime}\) and the \(k_{i}\)'s are large enough we obtain \(M^{\prime}\sigma\ltimes N^{\prime}\sigma\), whence \(M\sigma=\lambda x.M^{\prime}\sigma\ltimes\lambda x.N^{\prime}\sigma=N\sigma\).
4. If \(M=x\,\vec{M}\) and \(N=y\,\vec{N}\) with \(x\neq y\) or \(|\vec{M}|\neq|\vec{N}|\). Let us first consider the case of \(x\neq y\): * If \(x,y\not\in\vec{x}\), then \(M\sigma=x\,(\vec{M}\sigma)\) and \(N\sigma=y\,(\vec{N}\sigma)\) with \(x\neq y\), hence \(M\sigma\bowtie N\sigma\). * If, e.g., \(x=x_{i}\) and \(y\not\in\vec{x}\), provided \(k_{i}\geq|\vec{M}|\), we obtain \(M\sigma=\rho_{k_{i}}\,(\vec{M}\sigma)\to_{\beta}^{*}\lambda\vec{z}.\lambda z.z \,(\vec{M}\sigma)\vec{z}\) and \(N\sigma=y\,(\vec{N}\sigma)\), hence \(M\sigma\bowtie N\sigma\). * If \(x=x_{i}\) and \(y=x_{j}\) then \(i\neq j\) and, provided \(k_{i}\geq|\vec{M}|\) and \(k_{j}\geq|\vec{N}|\), we obtain \(M\sigma=\rho_{k_{i}}\,(\vec{M}\sigma)\to_{\beta}^{*}\lambda\vec{z}.\lambda z.z \,(\vec{M}\sigma)\vec{z}\) and \(N\sigma=\rho_{k_{j}}\,(\vec{N}\sigma)\to_{\beta}^{*}\lambda\vec{z}^{\prime}. \lambda z.z\,(\vec{N}\sigma)\vec{z}^{\prime}\), with \(|(\vec{M}\sigma)\vec{z}|=k_{i}\) and \(|(\vec{N}\sigma)\vec{z}^{\prime}|=k_{j}\). Hence \(M\sigma\bowtie N\sigma\). And if \(x=y\): * If \(x\not\in\vec{x}\) then \(M\sigma=x\,(\vec{M}\sigma)\) and \(N\sigma=x\,(\vec{N}\sigma)\) with \(|\vec{M}|\neq|\vec{N}|\), hence \(M\sigma\bowtie N\sigma\). * If \(x=x_{i}\) then \(M\sigma=\rho_{k_{i}}\,(\vec{M}\sigma)\to_{\beta}^{*}\lambda\vec{z}.\lambda z.z \,(\vec{M}\sigma)\vec{z}\) and \(N\sigma=\rho_{k_{i}}\,(\vec{N}\sigma)\to_{\beta}^{*}\lambda\vec{z}^{\prime}. \lambda z.z\,(\vec{N}\sigma)\vec{z}^{\prime}\) with \(|\vec{z}|=k_{i}-|\vec{M}|\) and \(|\vec{z}^{\prime}|=k_{i}-|\vec{N}|\). Since \(|\vec{M}|\neq|\vec{N}|\) we have \(|\vec{z}|\neq|\vec{z}^{\prime}|\), hence \(M\sigma\bowtie N\sigma\).
5. If \(M=x\,\vec{M}\) and \(N=x\,\vec{N}\) with \(\vec{M}=\langle M_{1},\ldots,M_{k}\rangle\) and \(\vec{N}=\langle N_{1},\ldots,N_{k}\rangle\), and moreover \(M_{i}\ltimes N_{i}\), then: * If \(x\not\in\vec{x}\) then \(M\sigma=x\,(\vec{M}\sigma)\) and \(N\sigma=x\,(\vec{N}\sigma)\). Let \(k^{\prime}\geq k\) and write \(\sigma^{\prime}\coloneqq\sigma_{\vec{k},k^{\prime}}^{\vec{x},x}=\sigma\sigma_{ \rho_{k^{\prime}}}^{x}\) and \(\tau\coloneqq\sigma^{\prime}(P)^{k^{\prime}-|\vec{M}|}\pi_{k^{\prime},i}\) where \(P\) is an arbitrary term. Then \(M\tau=\rho_{k^{\prime}}\,(\vec{M}\sigma^{\prime})\,\vec{P}\,\pi_{k^{\prime},i}\) with \(|(\vec{M}\sigma^{\prime})\vec{P}|=k^{\prime}\), hence \(M\tau\to_{\beta}^{*}\pi_{k^{\prime},i}\,(\vec{M}\sigma^{\prime})\,\vec{P}\to_ {\beta}^{*}M_{i}\sigma^{\prime}\). Similarly, \(N\tau\to_{\beta}^{*}N_{i}\sigma^{\prime}\). Since \(M_{i}\ltimes_{\mathcal{B}}N_{i}\), if the \(k_{i}\)'s and \(k^{\prime}\) are sufficiently large, we obtain \(M_{i}\sigma^{\prime}\ltimes N_{i}\sigma^{\prime}\), hence \(M\tau\ltimes N\tau\). Since \(M\tau=(M\sigma)\{\rho_{k^{\prime}}/x\}\,\vec{P}\,\pi_{k^{\prime},i}\) and \(N\tau=(N\sigma)\{\rho_{k^{\prime}}/x\}\,\vec{P}\,\pi_{k^{\prime},i}\), we deduce \(M\sigma\ltimes N\sigma\) * If \(x=x_{h}\) then \(M\sigma=\rho_{k_{h}}\,(\vec{M}\sigma)\) and \(N\sigma=\rho_{k_{h}}\,(\vec{N}\sigma)\). If \(k_{h}\geq k\), let \(\tau\coloneqq\sigma(P)^{k_{h}-|\vec{M}|}\pi_{k_{h},i}\) where \(P\) is an arbitrary term. Then \(M\tau\to_{\beta}^{*}\pi_{k_{h},i}\,(\vec{M}\sigma)\,\vec{P}\to_{\beta}^{*}M_{i}\sigma\). Similarly, \(N\tau\to_{\beta}^{*}N_{i}\sigma\). Since \(M_{i}\ltimes_{\mathcal{B}}N_{i}\), if the \(k_{i}\)'s are sufficiently large, we obtain \(M_{i}\sigma\ltimes N_{i}\sigma\), hence \(M\tau\ltimes N\tau\). Since \(M\tau=(M\sigma)\,\vec{P}\,\pi_{k_{h},i}\) and \(N\tau=(N\sigma)\,\vec{P}\,\pi_{k_{h},i}\), we deduce \(M\sigma\ltimes N\sigma\)
We are now ready to establish the separation theorem. The canonical structure of normal extensional resource terms allows us to proceed by a simple induction, very similar to the proof of strong separation on \(\beta\)-normal \(\lambda\)-terms (compare with [13, Lemma 5.10]).
**Theorem 5.10** (Separation).: _If \(M\neq_{\mathcal{T}}N\) then \(M\ltimes N\) or \(N\ltimes M\)._
Proof.: We prove by induction on \(m\) that, if \(m\in\mathcal{NT}(M)\setminus\mathcal{NT}(N)\), then \(M\ltimes_{\mathcal{B}}N\). In this case \(\mathcal{NT}(M)\neq 0\), hence \(M\in\Lambda_{\mathrm{hn}}\) by lemma 5.7. If \(N\not\in\Lambda_{\mathrm{hn}}\), we conclude directly by item 1 of lemma 5.9.
Otherwise, since \(=_{\mathcal{T}}\) is a \(\lambda\)-theory, item 2 allows us to \(\beta\)-reduce both \(M\) and \(N\) to bring them into head normal form: \(M=\lambda\vec{x}.x\,\vec{M}\) and \(N=\lambda\vec{x}^{\prime}.x^{\prime}\,\vec{N}\), with \(\vec{M}=\langle M_{1},\ldots,M_{k}\rangle\) and \(\vec{N}=\langle N_{1},\ldots,N_{k^{\prime}}\rangle\). Then we can write \(m=\lambda\vec{x}.\lambda\vec{y}.x\,\vec{m}\). And since \(=_{\mathcal{T}}\) is extensional, item 2 again allows us to \(\eta\)-expand inside \(M\) and \(N\). We can thus ensure that \(|\vec{x}|=|\vec{x}^{\prime}|\), and that \(k\) is large enough to write \(\vec{m}=\vec{m}_{1}::\cdots::\vec{m}_{k}::\vec{m}^{\prime}\) for some stream \(\vec{m}^{\prime}\). By \(\alpha\)-equivalence, we can further assume \(\vec{x}=\vec{x}^{\prime}\). By iterating item 3, and observing that \(\lambda z.p\in\mathcal{NT}(\lambda z.P)\) iff \(p\in\mathcal{NT}(P)\), we can assume that \(M=x\,\vec{M}\), \(N=x^{\prime}\,\vec{N}\) and \(m=\lambda\vec{y}.x\,\vec{m}\).
If \(x\neq x^{\prime}\) or \(|\vec{M}|\neq|\vec{N}|\), we conclude by item 4. Otherwise, observe that \(\mathcal{NT}(M)=\lambda\vec{y}.x\,\mathcal{NT}(M_{1})^{!}::\cdots::\mathcal{NT }(M_{k})^{!}::\vec{y}^{!}\) and \(\mathcal{NT}(N)=\lambda\vec{y}.x\,\mathcal{NT}(N_{1})^{!}::\cdots::\mathcal{NT }(N_{k})^{!}::\vec{y}^{!}\). Since \(\iota\in\vec{y}^{!}\) and \(m\not\in\mathcal{NT}(N)\), there must be \(i\) such that \(\vec{m}_{i}\not\in\mathcal{NT}(N_{i})^{!}\). Hence there must be \(m_{i}\in\vec{m}_{i}\) such that \(m_{i}\not\in\mathcal{NT}(N_{i})\). Since moreover \(\vec{m}_{i}\in\mathcal{NT}(M_{i})^{!}\), we have \(m_{i}\in\mathcal{NT}(M_{i})\). By applying the induction hypothesis to \(m_{i}\), we obtain \(M_{i}\ltimes_{\mathcal{B}}N_{i}\), and conclude using item 5.
We have thus established that, if \(M\neq_{\mathcal{T}}N\) then \(M\neq_{\mathrm{hn}}N\), which is sufficient to ensure that \(=_{\mathcal{T}}\) contains every consistent sensible \(\lambda\)-theory, hence \(=_{\mathcal{T}}\) is \(\mathcal{H}^{*}\). To construct a model of \(\mathcal{H}^{*}\), it is thus sufficient to give a model of the extensional resource calculus! We exploit this approach to give a new proof of the fact that \(\mathcal{H}^{*}\) is the \(\lambda\)-theory induced by a particular extensional reflexive object in the relational model of the \(\lambda\)-calculus, first studied in detail by Bucciarelli, Ehrhard and Manzonetto [1]. The fact that this \(\lambda\)-theory characterizes \(\mathcal{H}^{*}\) was later proved by Manzonetto [14].
## 6 A relational model
We define the set \(\mathcal{D}\) of **relational types** as \(\mathcal{D}\coloneqq\bigcup_{k\in\mathbb{N}}\mathcal{D}_{k}\) with \(\mathcal{D}_{0}\coloneqq\emptyset\) and \(\mathcal{D}_{k+1}\coloneqq\mathcal{S}(\mathcal{D}_{k})\). In other words, \(\mathcal{D}\) is inductively defined as follows:
* \(\iota\in\mathcal{D}\),
* if \(\bar{\alpha}\in\mathfrak{M}_{\mathrm{f}}(\mathcal{D})\) and \(\beta\in\mathcal{D}\) then \(\bar{\alpha}::\beta\in\mathcal{D}\),
subject to \([]::\iota=\iota\). Note that \(\mathcal{D}\) is nothing but the extensional reflexive object of the cartesian closed category **MRel** put forward by Bucciarelli, Ehrhard and Manzonetto [1] -- as an example of the construction of an extensional \(\lambda\)-theory based on a reflexive object in a cartesian closed category having "not enough points".
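For instance (our own small examples, using only the two clauses above and the identification \([]::\iota=\iota\)):

\[\iota\in\mathcal{D}_{1},\qquad[\iota]::\iota\in\mathcal{D}_{2},\qquad[\iota,\iota]::[\iota]::\iota\in\mathcal{D}_{2},\qquad[[\iota]::\iota]::\iota\in\mathcal{D}_{3},\]

where \(\iota\) is the stream all of whose bags are empty.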
### Relational semantics of \(\lambda\)-terms as a type system
Let us recall that the interpretation of \(\lambda\)-terms in this model can be described by a kind of non-idempotent intersection type system: the "\(::\)" constructor acts
as an arrow type, while the monoid structure of multisets induces the non-idempotent intersection.
More explicitly, we first define **relational typing contexts** (denoted by \(\Gamma,\Delta,\Phi\)) as functions \(\mathcal{V}\to\mathfrak{M}_{\mathfrak{f}}(\mathcal{D})\) with finite support, _i.e._\(\Gamma:\mathcal{V}\to\mathfrak{M}_{\mathfrak{f}}(\mathcal{D})\) is a relational typing context if \(\{x\in\mathcal{V}\mid\Gamma(x)\neq[\,]\}\) is finite. We write \(\star\) for the empty context: \(\star(x)=[\,]\) for each \(x\in\mathcal{V}\). We write \(x:\bar{m}\) for the context \(\Gamma\) such that \(\Gamma(x)=\bar{m}\) and \(\Gamma(y)=[\,]\) when \(x\neq y\). And we define the concatenation of contexts point-wise: \((\Gamma*\Delta)(x)\coloneqq\Gamma(x)*\Delta(x)\). We also write \(\Gamma-x\) for the context such that \((\Gamma-x)(x)=[\,]\) and \((\Gamma-x)(y)=\Gamma(y)\) when \(x\neq y\).
Similarly, if \(\vec{\alpha}=\langle\bar{\alpha}_{i}\rangle_{i\in\mathbb{N}}\in\mathcal{S}( \mathcal{D})\), we write \(\vec{x}:\vec{\alpha}\) for the context \(\Gamma\) such that \(\Gamma(\vec{x}[i])=\bar{\alpha}_{i}\) for \(i\in\mathbb{N}\), and \(\Gamma(y)=[\,]\) if \(y\not\in\vec{x}\). We also write \(\Gamma-\vec{x}\) for the context such that \((\Gamma-\vec{x})(\vec{x}[i])=[\,]\) and \((\Gamma-\vec{x})(y)=\Gamma(y)\) when \(y\not\in\vec{x}\). Finally, \(\Gamma(\vec{x})\) denotes the sequence \(\langle\Gamma(\vec{x}[i])\rangle_{i\in\mathbb{N}}\), which is a stream because \(\Gamma\) has finite support.
The relational semantics \([\![M]\!]_{\vec{x}}\) can then be computed as the set
\[\{\langle\Gamma,\alpha\rangle\mid\Gamma\vdash M:\alpha\}\]
where the type system is described by the following rules:
\[\frac{}{x:[\alpha]\vdash x:\alpha}\qquad\frac{\Gamma\vdash M:\beta}{\Gamma-x\vdash\lambda x.M:\Gamma(x)::\beta}\]

\[\frac{\Gamma\vdash M:\bar{\alpha}::\beta\qquad\Delta\vdash_{!}N:\bar{\alpha}}{\Gamma*\Delta\vdash M\,N:\beta}\qquad\frac{\Gamma_{1}\vdash N:\alpha_{1}\quad\cdots\quad\Gamma_{k}\vdash N:\alpha_{k}}{\Gamma_{1}*\cdots*\Gamma_{k}\vdash_{!}N:[\alpha_{1},\ldots,\alpha_{k}]}\]

A similar type system describes the semantics of resource expressions, in which:
* value expressions will be typed with elements of \(\mathcal{D}\);
* bag terms with elements of \(\mathfrak{M}_{\mathrm{f}}(\mathcal{D})\);
* stream terms with elements of \(\mathcal{S}(\mathcal{D})\);
* base terms with a single, newly introduced base type \(o\).
The underlying idea is that a value expression of type \(\alpha=\langle\bar{\alpha}_{i}\rangle_{i\in\mathbb{N}}\) expects a sequence of bags \(\vec{m}=\langle\bar{m}_{i}\rangle_{i\in\mathbb{N}}\) with \(\bar{m}_{i}\) of type \(\bar{\alpha}_{i}\) to produce a successful interaction (of type \(o\)). Although the types are incidentally taken from the same set \(\mathcal{D}=\mathcal{S}(\mathcal{D})\), the intuitive meaning of typing for value expressions and for stream terms is thus quite different. To fit this intuition, we redefine \(\mathcal{D}\) simply to be able to distinguish between \(\mathcal{D}\) and \(\mathcal{S}(\mathcal{D})\): we believe this will clarify the presentation, although the rest of the paper might be carried out identically without this notational trick.
We define the sets \(\mathcal{D}_{\mathrm{v}}\) of **value types** and \(\mathcal{D}_{\mathrm{s}}\) of **stream types**, simultaneously by mutual induction as follows:
* \(\alpha\in\mathcal{D}_{\mathrm{v}}\) if \(\alpha=\vec{\alpha}\multimap o\) with \(\vec{\alpha}\in\mathcal{D}_{\mathrm{s}}\);
* \(\vec{\alpha}\in\mathcal{D}_{\mathrm{s}}\) if \(\vec{\alpha}\in\mathcal{S}(\mathcal{D}_{\mathrm{v}})\);
so that \(\vec{\alpha}\mapsto\vec{\alpha}\multimap o\) defines a bijection from \(\mathcal{D}_{\mathrm{s}}=\mathcal{S}(\mathcal{D}_{\mathrm{v}})\) to \(\mathcal{D}_{\mathrm{v}}\). We will also write \(\mathcal{D}_{\mathrm{!}}\coloneqq\mathfrak{M}_{\mathrm{f}}(\mathcal{D}_{ \mathrm{v}})\) for the set of **bag types** and \(\mathcal{D}_{\mathrm{b}}\coloneqq\{o\}\) for the singleton containing the **base type**. We call **type term** (denoted by \(\rho,\sigma,\tau\)) any of a relational type, bag type, stream type or the base type.
A type context is now a function \(\Gamma:\mathcal{V}\to\mathcal{D}_{\mathrm{!}}\), whose value is almost always the empty bag. The type system involves four kinds of judgements:
\[\Gamma\vdash_{\mathrm{v}}m:\alpha\qquad\Gamma\vdash_{!}\bar{m}:\bar{\alpha}\qquad\Gamma\vdash_{\mathrm{s}}\vec{m}:\vec{\alpha}\qquad\Gamma\vdash_{\mathrm{b}}a:o\]
and we denote by \(\Gamma\vdash u:\rho\) any judgement as above. The rules are as follows:
\[\frac{\Gamma\vdash_{\mathrm{v}}m:\vec{\alpha}\multimap o\qquad\Delta\vdash_{\mathrm{s}}\vec{n}:\vec{\alpha}}{\Gamma*\Delta\vdash_{\mathrm{b}}m\,\vec{n}:o}\qquad\frac{\Gamma\vdash_{\mathrm{s}}\vec{m}:\vec{\alpha}}{\Gamma*x:[\vec{\alpha}\multimap o]\vdash_{\mathrm{b}}x\,\vec{m}:o}\]

\[\frac{}{\star\vdash_{\mathrm{s}}\iota:\iota}\qquad\frac{\Gamma\vdash_{!}\bar{m}:\bar{\alpha}\qquad\Delta\vdash_{\mathrm{s}}\vec{n}:\vec{\beta}}{\Gamma*\Delta\vdash_{\mathrm{s}}\bar{m}::\vec{n}:\bar{\alpha}::\vec{\beta}}\]

\[\frac{\Gamma\vdash_{\mathrm{b}}a:o}{\Gamma-\vec{x}\vdash_{\mathrm{v}}\lambda\vec{x}.a:\Gamma(\vec{x})\multimap o}\qquad\frac{\Gamma_{1}\vdash_{\mathrm{v}}m_{1}:\alpha_{1}\qquad\cdots\qquad\Gamma_{k}\vdash_{\mathrm{v}}m_{k}:\alpha_{k}}{\Gamma_{1}*\cdots*\Gamma_{k}\vdash_{!}[m_{1},\ldots,m_{k}]:[\alpha_{1},\ldots,\alpha_{k}]}\]
where, in particular, the last rule allows us to derive \(\star\vdash_{!}[\;]:[\;]\).
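As a simple illustration (a worked example of ours, using only the rules above), the normal value term \(\lambda\vec{y}.x\,\iota\), with \(x\not\in\vec{y}\), is typed as follows:

\[\dfrac{\dfrac{}{\star\vdash_{\mathrm{s}}\iota:\iota}}{x:[\iota\multimap o]\vdash_{\mathrm{b}}x\,\iota:o}\qquad\text{and then}\qquad\dfrac{x:[\iota\multimap o]\vdash_{\mathrm{b}}x\,\iota:o}{x:[\iota\multimap o]\vdash_{\mathrm{v}}\lambda\vec{y}.x\,\iota:\iota\multimap o}\]

so this normal term is typable, with context \(x:[\iota\multimap o]\) and type \(\iota\multimap o\) (the abstraction rule applies with \(\Gamma(\vec{y})=\iota\) because the variables of \(\vec{y}\) do not occur in \(x\,\iota\)).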
**Lemma 6.1**.: _Each resource term \(u\) admits at most one typing derivation. If moreover \(u\) is normal, then it is typable._
Proof.: The proof is straightforward, by induction on \(u\). Note that \(u\) being normal forbids the first rule for base terms, which is the only one in which a constraint on the premises is imposed.
We write \(\vdash u\) when \(u\) is typable, and in this case we write \(\mathsf{cont}(u)\) and \(\mathsf{type}(u)\) respectively for the unique context and type term such that \(\mathsf{cont}(u)\vdash u:\mathsf{type}(u)\) is derivable.
**Lemma 6.2**.: _If \(\vdash u\) then \(|\mathsf{cont}(u)(x)|\) is the number of occurrences of \(x\) in \(u\)._
Proof.: By a straightforward induction on \(u\).
Note that the functions \(\mathsf{type}(-)\) and \(\mathsf{cont}(-)\) are not injective, even jointly:
_Example 6.3_.: Consider
\[\vec{m}_{1}\coloneqq[\lambda\vec{y}.x\,\iota]::[\lambda\vec{y}.x\,[m]::\iota]::\iota\]
and
\[\vec{m}_{2}\coloneqq[\lambda\vec{y}.x\,[m]::\iota]::[\lambda\vec{y}.x\,\iota]::\iota\]
where \(m\) is a typeable closed value and \(x\not\in\vec{y}\), so that \(\vec{m}_{1}\) and \(\vec{m}_{2}\) differ only by the order of their first two bags. Then
\[\mathsf{type}(\vec{m}_{1})=\mathsf{type}(\vec{m}_{2})=[\iota\multimap o]::[\iota\multimap o]::\iota\]
because the variables of \(\vec{y}\) have no occurrences in subterms, while
\[\mathsf{cont}(\vec{m}_{1})=\mathsf{cont}(\vec{m}_{2})=x:[\iota\multimap o,([\mathsf{type}(m)]::\iota)\multimap o]\]
because \(x\) occurs twice in each of \(\vec{m}_{1}\) and \(\vec{m}_{2}\), once applied to \(\iota\), and once applied to \([m]::\iota\). The relational model cannot _"see"_ that these occurrences are swapped.
Moreover, not all types are inhabited by a closed term. This is easily observed on normal terms:
**Proposition 6.4**.: _There is no normal term \(u\) such that_
* \(u=a\) _with_ \(\star\vdash_{\mathrm{b}}a:o\)_, or_
* \(u=m\) _with_ \(\star\vdash_{\mathrm{v}}m:\iota\multimap o\)_, or_
* \(u=\bar{m}\) _with_ \(\star\vdash_{!}\bar{m}:[\iota\multimap o]\)_, or_
* \(u=\vec{m}\) _with_ \(\star\vdash_{\mathrm{s}}\vec{m}:[\iota\multimap o]::\iota\)_._
Proof.: Given the shape of the unique applicable rule for a normal base term, we have \(\mathsf{cont}(a)\neq\star\) when \(\vdash a\) and \(a\) is normal. Each of the other three statements follows directly from the previous one.
The same result holds for not necessarily normal terms, as will follow from lemma 6.8, which ensures the invariance of typing under reduction. We first characterize typing in substitutions, in the next two lemmas.
**Lemma 6.5**.: _If \(u^{\prime}\in u[\bar{n}/x]\) and \(\vdash u^{\prime}\) then \(\vdash u\), \(\vdash\bar{n}\), \(\mathsf{type}(\bar{n})=\mathsf{cont}(u)(x)\), \(\mathsf{type}(u^{\prime})=\mathsf{type}(u)\), and \(\mathsf{cont}(u^{\prime})=(\mathsf{cont}(u)-x)*\mathsf{cont}(\bar{n})\)._
Proof.: The proof is by induction on \(u\).
If \(u=[m_{1},\ldots,m_{k}]\) then we must have \(u^{\prime}=[m^{\prime}_{1},\ldots,m^{\prime}_{k}]\) and \(\bar{n}=\bar{n}_{1}*\cdots*\bar{n}_{k}\) with \(m^{\prime}_{i}\in m_{i}[\bar{n}_{i}/x]\) for \(1\leq i\leq k\). Then \(\vdash m^{\prime}_{i}\) and we apply the induction hypothesis to each \(m_{i}\) for \(1\leq i\leq k\). We obtain that \(\vdash m_{i}\) and \(\vdash\bar{n}_{i}\) for \(1\leq i\leq k\), and it follows directly that \(\vdash\bar{n}\). Moreover, \(\mathsf{type}(\bar{n}_{i})=\mathsf{cont}(m_{i})(x)\) for \(1\leq i\leq k\), so \(\mathsf{type}(\bar{n})=\mathsf{type}(\bar{n}_{1})*\cdots*\mathsf{type}(\bar{n}_{k})=(\mathsf{cont}(m_{1})*\cdots*\mathsf{cont}(m_{k}))(x)=\mathsf{cont}(u)(x)\). We also have \(\mathsf{type}(m^{\prime}_{i})=\mathsf{type}(m_{i})\) for \(1\leq i\leq k\), hence \(\mathsf{type}(u^{\prime})=\mathsf{type}(u)\). Finally, we have \(\mathsf{cont}(m^{\prime}_{i})=(\mathsf{cont}(m_{i})-x)*\mathsf{cont}(\bar{n}_{i})\) for \(1\leq i\leq k\), hence \(\mathsf{cont}(u^{\prime})=\mathsf{cont}(m^{\prime}_{1})*\cdots*\mathsf{cont}(m^{\prime}_{k})=(\mathsf{cont}(u)-x)*\mathsf{cont}(\bar{n})\).
If \(u=\iota\) then \(u^{\prime}=\iota\) and \(\bar{n}=[\,]\), which entails all the desired properties. The cases of \(u=\bar{m}::\vec{p}\neq\iota\) and \(u=m\,\vec{p}\) are similar to that of \(u=[m_{1},m_{2}]\). The case of \(u=\lambda\vec{y}.a\) (choosing \(\vec{y}\not\ni x\) and \(\vec{y}\cap\mathcal{V}(\bar{n})=\emptyset\)) is similar to that of \(u=[m_{1}]\).
The only remaining case is that of \(u=z\,\vec{m}\). If \(z\neq x\), the treatment is, again, similar to that of \(u=[m_{1}]\). Now assume \(z=x\). Then we can write \(u^{\prime}=n\,\vec{m}^{\prime}\) and \(\bar{n}=[n]*\bar{n}_{1}\) with \(\vec{m}^{\prime}\in\vec{m}[\bar{n}_{1}/x]\). We have \(\vdash n\) and \(\vdash\vec{m}^{\prime}\), and moreover \(\mathsf{type}(n)=\mathsf{type}(\vec{m}^{\prime})\multimap o\). We apply the induction hypothesis to \(\vec{m}\). We obtain \(\vdash\vec{m}\) and \(\vdash\bar{n}_{1}\), and it follows that \(\vdash\bar{n}\). Moreover, \(\mathsf{type}(\vec{m}^{\prime})=\mathsf{type}(\vec{m})\) and \(\mathsf{type}(\bar{n}_{1})=\mathsf{cont}(\vec{m})(x)\). So \(\mathsf{type}(\bar{n})=[\mathsf{type}(n)]*\mathsf{type}(\bar{n}_{1})=[\mathsf{ type}(\vec{m}^{\prime})\multimap o]*\mathsf{cont}(\vec{m})(x)=\mathsf{cont}(u)(x)\) and \(\mathsf{cont}(u^{\prime})=\mathsf{cont}(n)*\mathsf{cont}(\vec{m}^{\prime})= \mathsf{cont}(n)*(\mathsf{cont}(\vec{m})-x)*\mathsf{cont}(\bar{n}_{1})=( \mathsf{cont}(u)-x)*\mathsf{cont}(\bar{n})\) since \(\mathsf{cont}(u)-x=\mathsf{cont}(\vec{m})-x\).
**Lemma 6.6**.: _If \(\vdash u\), \(\vdash\bar{n}\), and \(\mathsf{type}(\bar{n})=\mathsf{cont}(u)(x)\), then there exists \(u^{\prime}\in u[\bar{n}/x]\) such that \((\mathsf{cont}(u)-x)*\mathsf{cont}(\bar{n})\vdash u^{\prime}:\mathsf{type}(u)\)._
Proof.: Write \(U^{\prime}\coloneqq u[\bar{n}/x]\). The proof is by induction on \(u\).
If \(u=[m_{1},\ldots,m_{k}]\) then \(\vdash m_{i}\) for \(1\leq i\leq k\). Since \(\mathsf{type}(\bar{n})=\mathsf{cont}(u)(x)=\mathsf{cont}(m_{1})(x)*\cdots* \mathsf{cont}(m_{k})(x)\), we can write \(\bar{n}=\bar{n}_{1}*\cdots*\bar{n}_{k}\) with \(\mathsf{type}(\bar{n}_{i})=\mathsf{cont}(m_{i})(x)\) for \(1\leq i\leq k\). We apply the induction hypothesis to \(m_{i}\) for \(1\leq i\leq k\): we obtain \(m^{\prime}_{i}\in m_{i}[\bar{n}_{i}/x]\) such that \((\mathsf{cont}(m_{i})-x)*\mathsf{cont}(\bar{n}_{i})\vdash_{\mbox{\tiny{v}}}m^{ \prime}_{i}:\mathsf{type}(m_{i})\). We conclude by setting \(u^{\prime}\coloneqq[m^{\prime}_{1},\ldots,m^{\prime}_{k}]\).
As in the previous lemma, the cases of streams, values, and base terms \(p\,\vec{m}\) or \(z\,\vec{m}\) with \(z\neq x\) follow the same pattern as for bags.
Finally, if \(u=x\,\vec{m}\) then \(\vdash\vec{m}\) and \(\mathsf{type}(\bar{n})=[\mathsf{type}(\vec{m})\multimap o]*\mathsf{cont}(\vec{m} )(x)\). Then we can write \(\bar{n}=[n]*\bar{n}_{1}\) with \(\mathsf{type}(n)=\mathsf{type}(\vec{m})\multimap o\) and \(\mathsf{type}(\bar{n}_{1})=\mathsf{cont}(\vec{m})(x)\). We apply the induction hypothesis to \(\vec{m}\), and obtain \(\vec{m}^{\prime}\in\vec{m}[\bar{n}_{1}/x]\) such that \((\mathsf{cont}(\vec{m})-x)*\mathsf{cont}(\bar{n}_{1})\vdash_{\mbox{\tiny{s}}}\vec {m}^{\prime}:\mathsf{type}(\vec{m})\). We conclude by setting \(u^{\prime}\coloneqq n\,\vec{m}^{\prime}\).
**Lemma 6.7**.: _We have \(\vdash m\) iff \(\vdash\lambda x.m\) and then:_
* \(\mathsf{type}(\lambda x.m)=(\mathsf{cont}(m)(x)::\vec{\beta})\multimap o\) _iff_ \(\mathsf{type}(m)=\vec{\beta}\multimap o\)_;_
* \(\mathsf{cont}(\lambda x.m)=\mathsf{cont}(m)-x\)_._
Proof.: Direct application of the definitions.
Now we extend the typing system to sums: if \(U\in\Sigma\Delta_{\mathrm{t}}\), we set \(\Gamma\vdash U:\rho\) when there exists \(u\in\operatorname{supp}(U)\) such that \(\Gamma\vdash u:\rho\). We obtain:
**Lemma 6.8**.: _Assume \(U\to_{\mathrm{r}}U^{\prime}\). We have \(\Gamma\vdash U:\rho\) iff \(\Gamma\vdash U^{\prime}:\rho\)._
Proof.: We first treat the case of redexes. First assume \(U=(\lambda x.m)\,\bar{n}::\vec{p}\) and \(U^{\prime}=(m[\bar{n}/x])\,\vec{p}\).
If \(\Gamma\vdash_{\mathrm{b}}U:o\) then \(\vdash m\), \(\vdash\bar{n}\) and \(\vdash\vec{p}\), and moreover: \(\operatorname{\mathsf{type}}(\bar{n})=\operatorname{\mathsf{cont}}(m)(x)\), \(\operatorname{\mathsf{type}}(m)=\operatorname{\mathsf{type}}(\vec{p})\multimap o\) and \(\Gamma=(\operatorname{\mathsf{cont}}(m)-x)\ast\operatorname{\mathsf{cont}}(\bar{n})\ast\operatorname{\mathsf{cont}}(\vec{p})\). Lemma 6.6 yields \(m^{\prime}\in m[\bar{n}/x]\) such that \((\operatorname{\mathsf{cont}}(m)-x)\ast\operatorname{\mathsf{cont}}(\bar{n})\vdash_{\mathrm{v}}m^{\prime}:\operatorname{\mathsf{type}}(m)\), and we set \(u^{\prime}\coloneqq m^{\prime}\,\vec{p}\) to obtain \(u^{\prime}\in U^{\prime}\) and \(\Gamma\vdash_{\mathrm{b}}u^{\prime}:o\).
Conversely, if \(\Gamma\vdash_{\mathrm{b}}U^{\prime}:o\), there exists \(m^{\prime}\in m[\bar{n}/x]\) such that \(\Gamma\vdash_{\mathrm{b}}m^{\prime}\,\vec{p}:o\). It follows that \(\vdash m^{\prime}\) and \(\vdash\vec{p}\), and moreover \(\Gamma=\operatorname{\mathsf{cont}}(m^{\prime})\ast\operatorname{\mathsf{cont}}(\vec{p})\) and \(\operatorname{\mathsf{type}}(m^{\prime})=\operatorname{\mathsf{type}}(\vec{p})\multimap o\). Lemma 6.5 entails that \(\vdash m\), \(\vdash\bar{n}\), and moreover \(\operatorname{\mathsf{type}}(\bar{n})=\operatorname{\mathsf{cont}}(m)(x)\), \(\operatorname{\mathsf{type}}(m)=\operatorname{\mathsf{type}}(m^{\prime})\), and \(\operatorname{\mathsf{cont}}(m^{\prime})=(\operatorname{\mathsf{cont}}(m)-x)\ast\operatorname{\mathsf{cont}}(\bar{n})\). It follows that \(\operatorname{\mathsf{type}}(\lambda x.m)=(\operatorname{\mathsf{type}}(\bar{n})::\operatorname{\mathsf{type}}(\vec{p}))\multimap o\) and \(\Gamma=\operatorname{\mathsf{cont}}(\lambda x.m)\ast\operatorname{\mathsf{cont}}(\bar{n})\ast\operatorname{\mathsf{cont}}(\vec{p})\), hence \(\Gamma\vdash_{\mathrm{b}}U:o\).
Now assume \(U=(\lambda\vec{x}.a)\,\iota\) and \(U^{\prime}=a\,\mathord{\mathord{\downarrow}}\,\vec{x}\).
If \(\Gamma\vdash_{\mathrm{b}}U:o\) then \(\vdash a\) and \(\operatorname{\mathsf{cont}}(a)(\vec{x})=\iota\). By lemma 6.2, \(\vec{x}\cap\mathcal{V}(a)=\emptyset\) so \(U^{\prime}=a\). Moreover, \(\operatorname{\mathsf{cont}}(a)-\vec{x}=\Gamma\).
Conversely, if \(\Gamma\vdash_{\mathrm{b}}U^{\prime}:o\) then \(U^{\prime}\neq 0\) hence \(U^{\prime}=a\) with \(\vec{x}\cap\mathcal{V}(a)=\emptyset\). Hence, by lemma 6.2, \(\operatorname{\mathsf{cont}}(a)(\vec{x})=\iota\) so \(\operatorname{\mathsf{type}}(\lambda\vec{x}.a)=\iota\rightharpoonup o\) and \(\operatorname{\mathsf{cont}}(\lambda\vec{x}.a)=\Gamma\). We obtain \(\Gamma\vdash_{\mathrm{b}}U:o\).
Next, we treat the case of \(U=u\in\Sigma\Delta_{\mathrm{t}}\), and \(U\mapsto_{\mathrm{r}}U^{\prime}\). We obtain the result by a straightforward induction on the definition of \(\mapsto_{\mathrm{r}}\).
Finally, if \(U=u+V\) and \(U^{\prime}=W+V\), with \(u\mapsto_{\mathrm{r}}W\), we conclude directly from the previous case, observing that \(\Gamma\vdash u+V:\rho\) (resp. \(\Gamma\vdash W+V:\rho\)) iff \(\Gamma\vdash u:\rho\) or \(\Gamma\vdash V:\rho\) (resp. \(\Gamma\vdash W:\rho\) or \(\Gamma\vdash V:\rho\)).
**Lemma 6.9**.: _For any \(u\in\Delta_{\mathrm{t}}\):_
* _either_ \(\not\vdash u\) _and_ \(\mathcal{N}(u)=0\)_;_
* _or_ \(\vdash\) _\(u\)_, and then_ \(\Gamma\vdash\mathcal{N}(u):\rho\) _iff_ \(\Gamma=\operatorname{\mathsf{cont}}(u)\) _and_ \(\rho=\operatorname{\mathsf{type}}(u)\) _-- in particular_ \(\mathcal{N}(u)\neq 0\)_._
Proof.: Since \(u\to_{\mathrm{r}}^{*}\mathcal{N}(u)\), we can apply the previous lemma:
* if \(\not\vdash u\) then \(\not\vdash\mathcal{N}(u)\), hence \(\mathcal{N}(u)=0\) by lemma 6.1;
* if \(\vdash u\) then \(\Gamma\vdash\mathcal{N}(u):\rho\) iff \(\Gamma\vdash u:\rho\) iff \(\Gamma=\operatorname{\mathsf{cont}}(u)\) and \(\rho=\operatorname{\mathsf{type}}(u)\), again by lemma 6.1.
### Taylor expansion of relational semantics
Since the relational semantics is concerned only with the support of resource vectors, we assume \(\mathbb{K}=\mathbb{B}\) for the remainder of this section, and identify any vector \(U\in\mathbb{K}\langle\Delta_{\mathrm{t}}\rangle\) with its support set. We then set \(\Gamma\vdash U:\rho\) iff there exists \(u\in U\) with \(\Gamma\vdash u:\rho\). The results of the previous section entail:
**Lemma 6.10**.: _If \(U,U^{\prime}\in\mathbb{B}\langle\Delta_{\mathrm{t}}\rangle\) and \(U\rightsquigarrow U^{\prime}\) then \(\Gamma\vdash U:\rho\) iff \(\Gamma\vdash U^{\prime}:\rho\)._
We reformulate the relational type system for \(\lambda\)-terms, changing the rules for abstraction and application, to take into account the distinction between \(\mathcal{D}_{\mathrm{v}}\) and \(\mathcal{D}_{\mathrm{s}}\):
\[\frac{}{x:[\alpha]\vdash x:\alpha}\qquad\frac{\Gamma\vdash M:\vec{\beta}\multimap o}{\Gamma-x\vdash\lambda x.M:(\Gamma(x)::\vec{\beta})\multimap o}\]

\[\frac{\Gamma\vdash M:(\bar{\alpha}::\vec{\beta})\multimap o\qquad\Delta\vdash_{!}N:\bar{\alpha}}{\Gamma*\Delta\vdash M\,N:\vec{\beta}\multimap o}\qquad\frac{\Gamma_{1}\vdash M:\alpha_{1}\quad\cdots\quad\Gamma_{k}\vdash M:\alpha_{k}}{\Gamma_{1}*\cdots*\Gamma_{k}\vdash_{!}M:[\alpha_{1},\ldots,\alpha_{k}]}\]
and set out to prove:
**Theorem 6.11**.: _For any \(\lambda\)-term \(M\), the following are equivalent:_
* \(\Gamma\vdash M:\alpha\)_;_
* \(\Gamma\vdash\mathcal{T}_{\eta}(M):\alpha\)_;_
* \(\Gamma\vdash\mathcal{T}_{h}(M):\alpha\)_._
The equivalence between the last two items follows from lemma 6.10 and theorem 4.7. A first step is to consider the expansion of variables.
**Lemma 6.12**.: _Let \(x\in\mathcal{V}\) and \(\vec{x}\in\mathcal{V}_{\mathrm{s}}\). Then:_
* _for each_ \(\alpha\in\mathcal{D}_{\mathrm{v}}\)_, there is a unique_ \(\mathsf{w}_{x}(\alpha)\in x^{\eta}\) _with_ \(\mathsf{type}(\mathsf{w}_{x}(\alpha))=\alpha\)_;_
* _for each_ \(\bar{\alpha}\in\mathcal{D}_{!}\)_, there is a unique_ \(\mathsf{w}_{x}^{!}(\bar{\alpha})\in(x^{\eta})^{!}\) _with_ \(\mathsf{type}(\mathsf{w}_{x}^{!}(\bar{\alpha}))=\bar{\alpha}\)_;_
* _for each_ \(\vec{\alpha}\in\mathcal{D}_{\mathrm{s}}\)_, there is a unique_ \(\mathsf{w}_{\vec{x}}(\vec{\alpha})\in\vec{x}^{!}\) _with_ \(\mathsf{type}(\mathsf{w}_{\vec{x}}(\vec{\alpha}))=\vec{\alpha}\)_._
_Moreover:_
\[x:[\alpha]\vdash_{\mathrm{v}}\mathsf{w}_{x}(\alpha):\alpha,\quad x:\bar{\alpha}\vdash_{!}\mathsf{w}_{x}^{!}(\bar{\alpha}):\bar{\alpha},\quad\vec{x}:\vec{\alpha}\vdash_{\mathrm{s}}\mathsf{w}_{\vec{x}}(\vec{\alpha}):\vec{\alpha},\]
_and:_
* _if_ \(m\in x^{\eta}\) _then_ \(m=\mathsf{w}_{x}(\mathsf{type}(m))\)_;_
* _if_ \(\bar{m}\in(x^{\eta})^{!}\) _then_ \(\bar{m}=\mathsf{w}_{x}^{!}(\mathsf{type}(\bar{m}))\)_;_
* _if_ \(\vec{m}\in\vec{x}^{!}\) _then_ \(\vec{m}=\mathsf{w}_{\vec{x}}(\mathsf{type}(\vec{m}))\)_._
Proof.: We define \(\mathsf{w}_{x}(\alpha)\), \(\mathsf{w}^{!}_{x}(\bar{\alpha})\) and \(\mathsf{w}_{\vec{x}}(\vec{\alpha})\) by mutual induction on \(\alpha\), \(\bar{\alpha}\) and \(\vec{\alpha}\). Given \(\alpha=\vec{\alpha}\multimap o\), we choose \(\vec{y}\not\ni x\) and set \(\mathsf{w}_{x}(\alpha)\coloneqq\lambda\vec{y}.x\,\mathsf{w}_{\vec{y}}(\vec{\alpha})\). If \(\bar{\alpha}=[\alpha_{1},\ldots,\alpha_{k}]\), we set \(\mathsf{w}^{!}_{x}(\bar{\alpha})\coloneqq[\mathsf{w}_{x}(\alpha_{1}),\ldots,\mathsf{w}_{x}(\alpha_{k})]\). Finally, if \(\vec{\alpha}=\langle\bar{\alpha}_{i}\rangle_{i\in\mathbb{N}}\), we set \(\mathsf{w}_{\vec{x}}(\vec{\alpha})\coloneqq\langle\mathsf{w}^{!}_{\vec{x}[i]}(\bar{\alpha}_{i})\rangle_{i\in\mathbb{N}}\).
The fact that these expressions are the only ones satisfying the requirements, together with the induced typing judgements, are easily established by induction on type terms. Finally, the last three items are obtained by mutual induction on resource terms.
Proof of theorem 6.11.: We establish that:
* \(\Gamma\vdash M:\alpha\) iff there exists \(m\in\mathcal{T}_{\eta}(M)\) with \(\Gamma\vdash_{\mathrm{v}}m:\alpha\), and
* \(\Gamma\vdash_{!}M:\bar{\alpha}\) iff there exists \(\bar{m}\in\mathcal{T}_{\eta}(M)^{!}\) with \(\Gamma\vdash_{!}\bar{m}:\bar{\alpha}\);
by induction on \(M\). The second statement follows directly from the first: if \(\bar{\alpha}=[\alpha_{1},\ldots,\alpha_{k}]\) then
\[\begin{array}{lll}\Gamma\vdash_{!}M:\bar{\alpha}&\text{iff}&\Gamma=\Gamma_{1}*\cdots*\Gamma_{k}\text{ with }\Gamma_{i}\vdash M:\alpha_{i}\text{ for }1\leq i\leq k\\ &\text{iff}&\Gamma=\Gamma_{1}*\cdots*\Gamma_{k}\text{ with }\Gamma_{i}\vdash_{\mathrm{v}}\mathcal{T}_{\eta}(M):\alpha_{i}\text{ for }1\leq i\leq k\\ &\text{iff}&\Gamma\vdash_{!}\mathcal{T}_{\eta}(M)^{!}:\bar{\alpha}.\end{array}\]
First assume \(M=x\). If \(\Gamma\vdash M:\alpha\) then \(\Gamma=x:[\alpha]\) and we have \(x:[\alpha]\vdash_{\mathrm{v}}\mathsf{w}_{x}(\alpha):\alpha\) with \(\mathsf{w}_{x}(\alpha)\in x^{\eta}\). Conversely, if \(m\in x^{\eta}\) with \(\Gamma\vdash_{\mathrm{v}}m:\alpha\), we must have \(m=\mathsf{w}_{x}(\alpha)\), hence \(\Gamma=x:[\alpha]\).
Now assume \(M=\lambda x.N\). If \(\Gamma\vdash M:\alpha\) then \(\Gamma=\Delta-x\) and \(\alpha=(\Delta(x)::\vec{\gamma})\multimap o\), with \(\Delta\vdash N:\vec{\gamma}\multimap o\). By induction hypothesis, this holds iff there exists \(n\in\mathcal{T}_{\eta}(N)\) with \(\Delta\vdash_{\mathrm{v}}n:\vec{\gamma}\multimap o\), which is equivalent to \(\Delta-x\vdash_{\mathrm{v}}\lambda x.n:(\Delta(x)::\vec{\gamma})\multimap o\), _i.e._\(\Gamma\vdash_{\mathrm{v}}\lambda x.n:\alpha\).
Finally, assume \(M=N\,P\). If \(\Gamma\vdash M:\vec{\alpha}\multimap o\) then we can write \(\Gamma=\Delta*\Phi\) and there exists \(\bar{\gamma}\in\mathcal{D}_{!}\) such that \(\Delta\vdash N:(\bar{\gamma}::\vec{\alpha})\multimap o\) and \(\Phi\vdash_{!}P:\bar{\gamma}\). By induction hypothesis, we obtain \(n\in\mathcal{T}_{\eta}(N)\) and \(\bar{p}\in\mathcal{T}_{\eta}(P)^{!}\) such that \(\Delta\vdash_{\mathrm{v}}n:(\bar{\gamma}::\vec{\alpha})\multimap o\) and \(\Phi\vdash_{!}\bar{p}:\bar{\gamma}\). Let \(\vec{x}\) be a fresh sequence variable. We have \(\Phi*\vec{x}:\vec{\alpha}\vdash_{\mathrm{s}}\bar{p}::\mathsf{w}_{\vec{x}}(\vec{\alpha}):\bar{\gamma}::\vec{\alpha}\) and then \(\Delta*\Phi*\vec{x}:\vec{\alpha}\vdash_{\mathrm{b}}n\,(\bar{p}::\mathsf{w}_{\vec{x}}(\vec{\alpha})):o\), and finally \(\Gamma\vdash_{\mathrm{v}}\lambda\vec{x}.n\,(\bar{p}::\mathsf{w}_{\vec{x}}(\vec{\alpha})):\vec{\alpha}\multimap o\). We conclude since \(\lambda\vec{x}.n\,(\bar{p}::\mathsf{w}_{\vec{x}}(\vec{\alpha}))\in\mathcal{T}_{\eta}(M)\).
Conversely, if \(m\in\mathcal{T}_{\eta}(M)\) with \(\Gamma\vdash_{\mathrm{v}}m:\vec{\alpha}\multimap o\), then we must have \(m=\lambda\vec{x}.n\,(\bar{p}::\vec{m})\) with \(n\in\mathcal{T}_{\eta}(N)\), \(\bar{p}\in\mathcal{T}_{\eta}(P)^{!}\) and \(\vec{m}\in\vec{x}^{!}\), so that \(\Gamma=\Gamma^{\prime}-\vec{x}\) with \(\Gamma^{\prime}(\vec{x})=\vec{\alpha}\) and \(\Gamma^{\prime}\vdash_{\mathrm{b}}n\,(\bar{p}::\vec{m}):o\). Necessarily, \(\Gamma^{\prime}(\vec{x})=\mathsf{cont}(\vec{m})(\vec{x})\), hence \(\vec{m}=\mathsf{w}_{\vec{x}}(\vec{\alpha})\) and \(\vec{x}:\vec{\alpha}\vdash_{\mathrm{s}}\vec{m}:\vec{\alpha}\). Then there must exist \(\bar{\gamma}\in\mathcal{D}_{!}\) and contexts \(\Delta\) and \(\Phi\) such that \(\Gamma=\Delta*\Phi\), \(\Delta\vdash_{\mathrm{v}}n:(\bar{\gamma}::\vec{\alpha})\multimap o\) and \(\Phi\vdash_{!}\bar{p}:\bar{\gamma}\). By induction hypothesis, we obtain \(\Delta\vdash N:(\bar{\gamma}::\vec{\alpha})\multimap o\) and \(\Phi\vdash_{!}P:\bar{\gamma}\), hence \(\Gamma\vdash M:\vec{\alpha}\multimap o\).
Corollary 4.11, lemma 6.10, and theorems 5.2 and 6.11, together with the inductive definition of the relational semantics entail that:
**Corollary 6.13**.: _Setting \(M=_{\mathcal{D}}N\) when \(\llbracket M\rrbracket=\llbracket N\rrbracket\) defines a sensible extensional \(\lambda\)-theory._
As we have stated above, this property is already well known [1]: the originality of our approach is to relate this semantics with extensional Taylor expansion. In particular, this allows us to give a new proof that \(=_{\mathcal{D}}\) coincides with \(=_{\mathcal{H}^{*}}\): this was first established by Manzonetto [14], who gave sufficient axiomatic conditions on a cartesian closed category to host a reflexive object modelling \(\mathcal{H}^{*}\). Here the result comes directly from the properties of extensional Taylor expansion:
**Corollary 6.14**.: _The relations \(=_{\mathcal{T}}\) and \(=_{\mathcal{D}}\) coincide. Hence \(=_{\mathcal{D}}\) also coincides with \(=_{\mathcal{H}^{*}}\)._
Proof.: We have just established that \(=_{\mathcal{D}}\) is a consistent sensible \(\lambda\)-theory, hence \(=_{\mathcal{D}}\subseteq=_{\mathrm{hn}}\) and we obtain \(=_{\mathcal{D}}\subseteq=_{\mathcal{T}}\) by theorem 5.10. For the reverse inclusion, assume \(M=_{\mathcal{T}}N\): by corollary 4.11 and lemma 6.10 we obtain \([\![\mathcal{T}_{\eta}(M)]\!]=[\![\mathcal{T}_{\eta}(N)]\!]\), and theorem 6.11 yields \([\![M]\!]=[\![N]\!]\).
|
2304.07493 | OliVe: Accelerating Large Language Models via Hardware-friendly
Outlier-Victim Pair Quantization | Transformer-based large language models (LLMs) have achieved great success
with the growing model size. LLMs' size grows by $240\times$ every two years,
which outpaces the hardware progress and makes model inference increasingly
costly. Model quantization is a promising approach to mitigate the widening gap
between LLM size and hardware capacity. However, the existence of outliers,
values with significant magnitudes, in LLMs makes existing quantization methods
less effective. Prior outlier-aware quantization schemes adopt sparsity
encoding techniques to separate outliers from normal values where the process
requires global coordination (e.g., a global sparsity coordination list). This
incurs complex encoding/decoding hardware logics and an extra orchestration
controller for the computation between outlier and normal values. As such, it
is not hardware-efficient and hence only achieves sub-optimal quantization
benefits.
We propose OliVe, an algorithm/architecture co-designed solution that adopts
an outlier-victim pair (OVP) quantization and handles outlier values locally
with low hardware overheads and high performance gains. The key insight of
OliVe is that outliers are important while the normal values next to them are
not. Thus those normal values (called victims) can be sacrificed to accommodate
outliers. This enables a memory-aligned OVP encoding scheme, which can be
efficiently integrated to the existing hardware accelerators like systolic
array and tensor core. As a result, OliVe-based accelerator surpasses the
existing outlier-aware accelerator, GOBO, by 4.5$\times$ speedup and
4.0$\times$ energy reduction, respectively, with a superior model accuracy. | Cong Guo, Jiaming Tang, Weiming Hu, Jingwen Leng, Chen Zhang, Fan Yang, Yunxin Liu, Minyi Guo, Yuhao Zhu | 2023-04-15T07:12:05Z | http://arxiv.org/abs/2304.07493v1 | # OliVe: Accelerating Large Language Models via
###### Abstract
Transformer-based large language models (LLMs) have achieved great success with the growing model size. LLMs' size grows by \(240\)x every two years, which outpaces the hardware progress and makes model inference increasingly costly. Model quantization is a promising approach to mitigate the widening gap between LLM size and hardware capacity. However, the existence of outliers, values with significant magnitudes, in LLMs makes existing quantization methods less effective. Prior outlier-aware quantization schemes adopt sparsity encoding techniques to separate outliers from normal values where the process requires _global_ coordination (e.g., a global sparsity coordination list). This incurs complex encoding/decoding hardware logics and an extra orchestration controller for the computation between outlier and normal values. As such, it is not hardware-efficient and hence only achieves sub-optimal quantization benefits.
Large Language Model, Outlier-Victim Pair, Quantization 2023
## 1. Introduction
Transformer-based large language models (LLMs) (Wang et al., 2017) have demonstrated great success in the past years. Such success is often achieved with the increasingly larger model size: the model size grows by \(240\times\) every two years, significantly outpacing the hardware progress (\(3.1\times\) per two years) (Kumar et al., 2017). As a result, the inference of LLMs becomes challenging and costly. For instance, OPT-175B (Kumar et al., 2017), a recent Transformer-based LLM, has \(175\) billion parameters, which cannot fit in the latest high-end H100 GPU with 80GB memory.
Quantization (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017) is one of the most hardware-efficient ways to reduce inference costs for large models. It uses low-precision data types to compress models and accelerate the computation with practical hardware implementations, e.g., TPU (Kumar et al., 2017) and GPU tensor core (Kumar et al., 2017).
However, existing quantization schemes (Grover et al., 2016; Wang et al., 2017; Wang et al., 2017) are less effective in Transformer-based LLMs. Recent studies show when the model size exceeds a threshold (e.g., 6 billion), the model performance is vulnerable to only a tiny fraction (\(<0.1\%\)) of outliers, whose values are much more significant than normal values (Grover et al., 2016). Indiscriminately clipping both outlier and normal values will lead to significant drops in model accuracy (Grover et al., 2016; Wang et al., 2017). As a result, the common practice is to adopt a larger bit-width, e.g., 8-bit or 16-bit, to quantize Transform-based models, compared to convolutional networks (CNNs).
Researchers have proposed various quantization/architecture co-design works (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017) to deal with the outliers in Transformer models. For example, outlier suppression (Wang et al., 2017) proposes to suppress the outliers. But it still has significant accuracy loss in the lower bit-width (4-bit), suggesting the difficulty in accommodating the effects of outliers. In addition, architecture researchers have designed sophisticated outlier-aware hardware architectures to store outliers with high precision to maintain model accuracy. These outlier-aware quantization frameworks divide the tensor into normal and outlier values, and encode them separately using different ways. For normal values, a dense matrix with low precision (e.g., 4-bit) quantization is adopted. And the sparse and high-precision (e.g., 8-bit and 16-bit) outlier values can be compressed with sparsity-based encoding. Such encoding unfortunately leads to unaligned memory access. For example, GOBOs (Kumar et al., 2017) and OLAccels (Kumar et al., 2017) use the coordinate list to indicate the location of each outlier value in the matrix, as shown in Fig. 1a. BiScaled-DNNs (Kumar et al., 2017) exploits block sparse indices format to store the outlier indices, and DRQ (Kumar et al., 2017) uses the direct bitmap for outliers. These outlier-aware solutions require complex architectural designs with significant hardware overheads to accommodate outliers. Moreover, due to the random and unaligned memory access, the sparsity-based encoding is incompatible with the memory sub-systems of existing accelerators, such as GPU and TPU. Specifically, GOBO (Kumar et al., 2017) can only de/compress weight tensors on the off-chip DRAM, it still relies on the original on-chip memory and computation architecture of GPU with high precision FP16/32.
The aforementioned outlier-aware architectures separate normal values from outliers in a _global_ way. For instance, GOBO (Kumar et al., 2017) involves a global sparse coordinate list in the quantization and computation, leading to a large hardware overhead and low performance benefits. In this work, we aim to design an architecture to handle outliers in a _localized_ way with high hardware efficiency. To achieve that, we group two consecutive fixed-size values in a tensor and analyze their impact to model accuracy. There can be three kinds of pairs: i) a normal pair with two normal values, ii) one-outlier pair with one normal value and one outlier value, iii) two-outlier pair with two outlier values. We observe that the third two-outlier pair almost never shows up in well-trained LLMs. For the second one-outlier pair, we find that _only keeping its outlier value while pruning its normal value_ (i.e., treating it as zero) is sufficient to maintain the model accuracy.
Based on the above observations, we propose a novel outlier-aware quantization architecture, called OliVe, based on the outlier-victim pair (OVP) encoding. The salient feature of OliVe is that its encoding is memory-aligned and therefore hardware-friendly. As illustrated in Fig. 1b, OliVe first prunes normal values that are adjacent to the outliers to zero. These pruned normal values are called **victims**, which sacrifice themselves and make space for outliers. Then, we exploit the extra space provided by victims and embed the outliers into the low-precision matrix.
Olive is able to maintain a high accuracy for large Transformer models with a low hardware overhead due to the following reasons. First, Olive incorporates victims to tackle outliers in LLMs. The effects of victims resemble model pruning (Wang et al., 2017). Although clipping a few (\(0.1\%\)) outliers will lead to a disastrous accuracy drop (Grover et al., 2016; Wang et al., 2017), pruning the same amount of "normal" values will only impact model accuracy slightly (\(<0.1\%\) drop). Therefore, Olive sacrifices ("prunes") those insignificant values as victims for the outliers, allowing a more aggressive encoding scheme to accommodate extremely significant values. Second, the OVP encoding follows a specific outlier-victim (or victim-outlier) pattern to achieve memory alignment with little hardware overheads. Each victim is adjacent to an outlier, and the outlier-victim pair must align the memory access pattern. For example, in Fig. 1b, right outlier \(-98\) in the OV pair needs a left victim, and left outliers \(17.6\) and \(30.7\) require the right victims. That can align 8-bit (1-byte) memory accesses with high efficiency. This design enables a completely localized outlier decoding/encoding process.
Figure 1. Outlier-aware encoding comparison. (a) Prior quantization works adopt sparsity-based encoding that stores normal and outlier values separately. (b) Our proposed outlier-victim pair encoding stores normal and outlier values locally.
To implement OliVe, different data types are employed for outliers and normal values, which have different dynamic ranges and representation formats, including int4 and FP4. As shown in Fig. 1b, we propose a novel encoding method (Sec. 3) for the 4-bit OV pair, which composes a 4-bit outlier and a 4-bit victim into a special 8-bit format that differs from the original int8 or FP8. Due to its hardware-friendly and compatible design, OliVe can be easily integrated into existing quantization frameworks and accelerator architectures such as the systolic array in Google TPUs (Wang et al., 2019) and the tensor core in NVIDIA GPUs (Wang et al., 2019; Wang et al., 2020). OliVe can also inherently support the mixed-precision and mixed-type architecture, showing its flexibility and practicality for larger-scale Transformer models.
To the best of our knowledge, OliVe is the first work pushing the limit of Transformer post-training quantization (PTQ) (Beng et al., 2019), which requires no retraining after quantization, to the 4-bit level for both the weight and activation tensors with an accuracy loss of \(<1\%\). Surprisingly, OliVe's 4-bit PTQ accuracies for BERT (Liu et al., 2020) and BART (Wang et al., 2020) models outperform the 6-bit PTQ results of outlier suppression (Wang et al., 2020), a state-of-the-art Transformer quantization method. The OliVe-based accelerator surpasses the existing outlier-aware accelerators OLAccel (Wang et al., 2019) and GOBO (Wang et al., 2020) by \(3.8\times\) and \(4.5\times\) performance improvement, and \(2.1\times\) and \(4.0\times\) energy reduction, respectively. More importantly, the OliVe-based accelerator has more comprehensive and practical applicability than other outlier-specific architectures.
We make the following contributions in this paper.
* We conduct a pair-wise importance analysis and show that outliers are important while their adjacent normal values are not, revealing the algorithmic opportunity of the outlier-victim pair (OVP), which sacrifices the colocated normal values (called victims) to accommodate the outliers.
* We propose the OVP-based quantization framework, called OliVe, which includes an efficient hardware encoding and a novel outlier representation data type.
* We propose an efficient architectural implementation and integration of OliVe quantization, and show that its efficiency and benefits outperform the existing outlier-aware quantization algorithms and hardware accelerators.
## 2. Motivation: aligned outlier
In this section, we first show that the outlier of the Transformer model is much more significant and important compared to convolution neural networks (CNN). Previous works (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) propose the outlier-aware quantization microarchitecture with adaptive bit length to accomplish the low-bit quantization but necessitate substantial hardware resources to deal with the variable-length data, which cause unaligned memory accesses and are incompatible with the memory sub-system of existing accelerators, e.g., GPU (Wang et al., 2020). In contrast, we propose a memory-aligned and hardware-friendly method, called outlier-victim pair mechanism, which is inspired by DNN pruning and our outlier group location analysis for Transformers. We can prune some "victimes" to make space to embed high-precision outliers into the memory-aligned low-bit tensor with ignorable accuracy loss.
### Outlier Matters
We visually demonstrate how significant the Transformer's outlier is in Fig. 2. We adopt the empirical \(3\sigma\)**rules**(Wang et al., 2020) of the normal distribution to divide the values into outlier and normal values. We employ the ResNet-18 (Wang et al., 2020) as the representative for the CNN model and the BERT\({}_{base}\)(Liu et al., 2020) for the Transformer model. We fit the DNN tensors with normal distribution, i.e., Equation 1, where \(x\) is the value, \(\mu\) is the mean, and \(\sigma\) is the standard deviation. We convert the tensor into a standard normal distribution.
\[f(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} \tag{1}\]
We collect all tensors' maximum values and normalize them by the \(\sigma\) (Max \(\sigma\)). We sort and plot the tensors by their Max \(\sigma\) in Fig. 2.
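As a side note, the statistics behind Fig. 2 are simple to reproduce. The following is a minimal sketch of ours (not the paper's evaluation code); the synthetic tensor and the function name `outlier_stats` are illustrative assumptions:

```python
import numpy as np

def outlier_stats(tensor: np.ndarray) -> dict:
    """Max sigma and >3/6-sigma ratios of a tensor, as plotted in Fig. 2 (sketch)."""
    x = tensor.reshape(-1).astype(np.float64)
    mu, sigma = x.mean(), x.std()
    z = np.abs(x - mu) / sigma                 # distance from the mean, in sigmas
    return {
        "max_sigma": float(z.max()),           # maximum value normalized by sigma
        "gt_3sigma": float(np.mean(z > 3.0)),  # outlier ratio under the 3-sigma rule
        "gt_6sigma": float(np.mean(z > 6.0)),  # ratio of extreme outliers
    }

# Illustrative usage: a mostly-Gaussian weight tensor with a few injected outliers.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(768, 768))
w[0, :64] *= 20.0                              # inject a handful of large-magnitude values
print(outlier_stats(w))
```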
Most tensors fit the \(3\sigma\) rule of the normal distribution, i.e., about 99.7% of the values lie within three standard deviations of the mean. The outlier (\(>3\sigma\)) ratio of most tensors is lower than \(0.5\%\), and values of \(>6\sigma\) are extremely rare in tensors. Therefore, normal values are relatively concentrated, indicating that we can quantize the normal values with a narrow range to enhance the resolution of quantization.
The more obvious observation is that the Max \(\sigma\) of the Transformer is larger than that of CNN by one order of magnitude. Some research (Wang et al., 2019; Wang et al., 2020) shows that although the outliers are clipped for CNN models, the accuracy can still be restored to the original value with the retraining algorithm under ultra-low-bit precision, e.g., 4-bit. However, it is challenging for Transformer models, which have much more significant outliers. The state-of-the-art quantization works (Wang et al., 2020; Wang et al., 2020) also demonstrate a similar observation and only can achieve the original accuracy with higher-precision quantization for large-scale Transformer models due to the outliers. Therefore, keeping the outlier without clipping will significantly benefit quantizing Transformer models.
### Outlier Is Unaligned
The importance of outliers has attracted many research interests, which sparked several outlier-aware architectures, as depicted in Tbl. 1. OLAccel (Wang et al., 2019) and GOBO (Wang et al., 2020) are similar and exploit the coordinate list to indicate the location of outliers, which use high-precision (8-bit or 16-bit) quantization. BiScaled-DNN (Wang et al., 2020) and DRQ (Wang et al., 2020) employ block sparse index and bitmap, respectively.
Figure 2. Outlier Comparison of CNN model and Transformer model. The \(\sigma\) is the standard deviation of the tensor. We normalize the maximum number by \(\sigma\) to plot the Max \(\sigma\) curve (left y-axis). The \(>3\sigma\%\) and \(>6\sigma\) (right y-axis) are the percentage of the values of \(>3\sigma\) and \(>6\sigma\), respectively.
BiScaled-DNN quantizes all values with the same bit-width but different scale factors for normal values and outliers, so its data are aligned. However, the extra index compressed with the block sparsity method is unaligned. On the contrary, DRQ's bitmap index is aligned, but its data are stored as mixed and thus unaligned 4- and 8-bit values.
In summary, prior works design the outlier-aware architecture based on the sparsity of outliers, which leads to unaligned memory storage and accesses. More seriously, the indices of sparsity-based encoding and the outliers are separate. As such, they need the extra outlier controller to parse indices for the outliers and orchestrate the computation between normal values and outlier values. For example, the extra outlier controllers of GOBO and OLAccel count up to \(55\%\) and \(71\%\) overhead to the total area of the processing element (PE) array (Ross et al., 2017; Zhang et al., 2018). The sparsity-based encoding for outliers is also **incompatible** with the memory sub-system of existing accelerators. For the GOBO design (Zhang et al., 2018), it can only compress and decompress the memory at the DRAM level for GPU. This greatly limits the applicability of its proposed outlier-aware architecture.
Therefore, a more hardware-friendly and widely applicable outlier encoding/decoding method is needed for outlier-aware quantization. Our proposed OliVe architecture, based on the OVP mechanism, aligns memory accesses and is also compatible with existing accelerators.
### Outlier and Victim Analysis
Generally, the sparsity-based encoding borrowed from DNN pruning is a straightforward and effective solution for sparse outliers. However, these works ignored that quantization is different from pruning. For pruning, the pruned zero values do not participate in the computation. As such, the pruning method has to compress the sparse values with sparsity-based encoding. For quantization, the quantized normal values are the majority and need computation. Naturally, the outlier values can exploit the normal values to achieve memory alignment instead of sparsity-based encoding.
As depicted in Fig. 1b in Sec. 1, we employ the insight of pruning but with a different perspective from prior works. The new method employs the **outlier-victim pair** (OVP) mechanism. We first prune some quantized low-precision normal values, which we call **victims**. These victims are adjacent to the outliers and make extra space for the high-precision outliers. Therefore, we can embed the outliers in their original location without explicit sparse indexing. That can avoid the complex indexing hardware and make it compatible with GPU. To align the memory, we distinguish the "right outlier" and "left outlier" according to their position in the pair. We assign a right victim for the left outlier (e.g., \(17.6\) in Fig. 1b) and a left victim for the right outlier (e.g., \(-98\) in Fig. 1b).
The OVP mechanism is based on our observation of large Transformer models, including BERT-base (He et al., 2017), BERT-large (He et al., 2017), GPT2-XL (Wang et al., 2018), and OPT-6.7B (Wang et al., 2018). We collect all tensors, calculate their standard variance \(\sigma\), and divide the values into normal values (\(<3\sigma\)) and outlier values (\(>3\sigma\)) by the \(3\sigma\) rule. We then pair every two adjacent values (no overlapping), which leads to three types: normal-normal pair, outlier-normal pair, and outlier-outlier pair, as shown in Tbl. 2. These three types have two normal values, one normal value and one outlier value, and two outlier values, respectively.
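The pair statistics reported in Tbl. 2 can be gathered by a straightforward scan. The sketch below is our own illustration of that counting procedure (the \(3\sigma\) threshold follows the text, while the flattening order of the tensor and the function name are assumptions):

```python
import numpy as np

def pair_type_ratios(tensor: np.ndarray) -> dict:
    """Classify non-overlapping adjacent pairs as normal-normal, outlier-normal,
    or outlier-outlier under the 3-sigma rule (sketch)."""
    x = tensor.reshape(-1).astype(np.float64)
    if x.size % 2:                           # drop a trailing element if the length is odd
        x = x[:-1]
    is_outlier = np.abs(x - x.mean()) > 3.0 * x.std()
    outliers_per_pair = is_outlier.reshape(-1, 2).sum(axis=1)
    return {
        "normal-normal": float(np.mean(outliers_per_pair == 0)),
        "outlier-normal": float(np.mean(outliers_per_pair == 1)),
        "outlier-outlier": float(np.mean(outliers_per_pair == 2)),
    }
```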
Tbl. 2 demonstrates that most (about \(99\%\)) pairs are normal-normal pairs, with only around \(1\%\) of outlier-normal pairs. For outlier-outlier pairs, we have to prune the smaller outlier in the pair. Fortunately, the outlier-outlier pairs only have an extremely low probability of less than \(0.06\%\) in all studied models. Therefore, the outlier distribution is extremely dispersed, and we can retain most outliers.
We also conducted accuracy experiments with the BERT\({}_{base}\) model (Wang et al., 2018) on the GLUE dataset (Zhang et al., 2018), as depicted in Fig. 3. First, we clip the outliers to \(3\sigma\), which is the common method adopted by quantization. Then, we prune the victims and normal values to zero. The victims are adjacent to the outliers, and normal values are randomly pruned in the same amount as the outliers. We keep the remaining values in full precision (FP32). Although only a few outliers (about \(1\%\)) are clipped, as shown by the clipping-outlier results in Fig. 3, the accuracy loss is unacceptable for the BERT model. The results emphasize the importance of outliers in Transformer-based models. For comparison, pruning random normal values causes almost no accuracy loss relative to the source accuracy. The pruning of victim values only shows a negligible accuracy decrease compared to the pruning of normal values because the victims include
\begin{table}
\begin{tabular}{c|c|c|c}
**Accelerator** & **Encoding** & **Aligned Memory?** & **GPU Compatible?** \\ \hline
OLAccel (Ross et al., 2017) & Coordinate list & No & No \\ \hline
BiScaled-DNN (Wang et al., 2018) & Block sparse index & Aligned data, unaligned index & No \\ \hline
DRQ (Zhang et al., 2018) & Binary mask map & Aligned index, unaligned data & No \\ \hline
GOBO (Zhang et al., 2018) & Coordinate list & No & DRAM-only \\ \hline
**OliVe (Ours)** & **Outlier-victim pair** & Yes & Yes \\ \end{tabular}
\end{table}
Table 1. Comparison between existing outlier-aware accelerators and our proposed method OliVe.
Figure 3. Accuracy comparison of multiple pruning methods.
the smaller outlier of each outlier-outlier pair and are constrained to specific locations adjacent to the retained outliers.
In summary, our analysis indicates that outliers are important while the victims are not, so that we can sacrifice victims to accommodate the outliers. This motivates us to design the hardware-friendly OVP mechanism that provides aligned outlier-aware quantization to accelerate the large Transformer models. In the next section, we will introduce the outlier-victim pair encoding design.
## 3. Outlier-victim pair encoding
In this section, we present the details of the outlier-victim pair (OVP) encoding, which is _globally identical but locally distinguishable_ for outlier and normal values. The OVP encoding maintains globally aligned memory accesses and distinguishes the outliers locally with negligible overhead. For normal values, we support multiple data types to enable adaptive data-type selection. For encoding outliers, we design an outlier-specific data type, adaptive bias float (abfloat), which avoids range overlap between normal values and outliers, thus improving the utilization of the numerical representation space of the outlier encoding. Finally, based on the OVP encoding, we propose a framework that automatically selects the outlier threshold for OVP encoding to determine a suitable ratio of outlier-victim pairs.
### OVP Encoding Algorithm
Based on the previous pair-wise tensor value analysis, there are three pair types: normal-normal, outlier-normal, and outlier-outlier. For outlier-normal, the normal value in the pair will be pruned and turned into a victim. For outlier-outlier, we retain the larger one and prune the other. Then, we get the normal-normal pairs and outlier-victim pairs in the DNN tensors.
**Outlier Identifier**. To distinguish from the normal-normal pair, we need a special identifier for the outlier-victim pair. And this distinct identifier cannot appear in the normal-normal pair, which means we need to eliminate one number in the representation of normal values. For example, as shown in Fig. 4, we employ the signed int4 (4-bit integer) for the normal value quantization. The original int4 can represent the integers in the range of \([-8,7]\), where \(1000_{2}\) represents the value of \(-8\). First, we make \(1000_{2}\) the outlier identifier and remove the value of \(1000_{2}\) from int4, whose encoding range becomes \([-7,7]\). Second, we quantize the outlier-victim pairs with 4-bit OVP encoding. We set the victims with the outlier identifier \(1000_{2}\) and quantize the outlier with the outlier-specific data type (Sec. 3.3). Naturally, there are two types of OVP pair, i.e., left outlier (O-V) and right outlier (V-O) pair. Due to the distinct outlier identifier design, we can implicitly distinguish them without using an extra index bit (Sec. 4.2).
Algo. 1 shows the 4-bit OVP encoding algorithm, which needs to read two values simultaneously, a requirement that is easy to meet. For the hardware implementation, we can add a buffer for the encoder; the OVP encoder can also be embedded in the quantization unit with negligible overhead. For the software implementation, we can make a thread handle two values simultaneously. As a result, the encoding algorithm can be implemented efficiently in both hardware and software, as we describe in more detail later.
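To make the pairing logic concrete, a minimal software sketch of the OVP encoder is given below. It only illustrates the control flow of Algo. 1; the two quantizer callables stand in for the normal-value and abfloat quantizers of Sec. 3.2 and Sec. 3.3, and the trivial unit-scale `int4` quantizer used in the example is an assumption for illustration only.

```python
# Minimal sketch of the 4-bit OVP pairing logic (cf. Algo. 1). The quantizers
# are passed in as callables; OUTLIER_ID is the reserved identifier 1000_2.

OUTLIER_ID = 0b1000  # removed from the int4 range, so it never encodes a normal value


def ovp_encode_pair(v1, v2, threshold, quant_normal, quant_outlier):
    """Encode two adjacent values as a normal-normal or an outlier-victim pair."""
    if abs(v1) > threshold and abs(v1) >= abs(v2):
        return quant_outlier(v1), OUTLIER_ID          # left outlier, right victim
    if abs(v2) > threshold:
        return OUTLIER_ID, quant_outlier(v2)          # left victim, right outlier
    return quant_normal(v1), quant_normal(v2)         # normal-normal pair


# Example with a trivial unit-scale int4 quantizer standing in for both quantizers.
int4 = lambda v: max(-7, min(7, round(v)))
print(ovp_encode_pair(0.8, 2.1, 10.0, int4, int4))    # (1, 2): normal-normal pair
print(ovp_encode_pair(35.0, 2.1, 10.0, int4, int4))   # (7, 8): victim slot holds 1000_2
```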
### Data Type for Normal Values
For normal values, we build upon prior work (Zhu et al., 2017), which can support multiple data types, including int4, flint4 (4-bit flint), and int8, as shown in Tbl. 3. The int4 type is one of the most widely used data types for 4-bit quantization with integers in the value range of \([-7,7]\). The flint4 type is proposed by prior work ANT (Zhu et al., 2017), which has shown that selecting the data type according to a tensor's distribution achieves the state-of-the-art performance and accuracy.
Based on the above insights, we also adopt the mixed data types to quantize normal values in our OVP pair encoding. For flint4, we use the same binary value of \(1000_{2}\) as the outlier identifier. Specifically, \(1000_{2}\) of flint4 corresponds to \(-0\), which is not used in the original design. In other words, our OVP encoding seamlessly works for flint4 without wasting any number representations. We use the original flint4 encoding algorithm (Zhu et al., 2017) to quantize normal values.
Moreover, the OVP encoding can be generally extended to higher-precision quantization, such as the 8-bit. Similarly, the 8-bit normal
Figure 4. The 4-bit outlier-victim pair encoding.
```
Input: Values val1, val2; Outlier threshold T.
Output: OVP encoding out1, out2.
def OVPairEncoding(val1, val2, T):
    if val1 > T and val1 > val2 then
        out1 = OutlierQuantization(val1);
        out2 = 1000_2;  // Outlier identifier.
    else if val2 > T then
        out1 = 1000_2;  // Outlier identifier.
        out2 = OutlierQuantization(val2);
    else
        out1 = NormalQuantization(val1);
        out2 = NormalQuantization(val2);
    return out1, out2
```
**Algorithm 1** The 4-bit OVP encoding algorithm.

Table 3. Data types for normal values of OVP encoding.
value also needs to eliminate one number. For instance, int8 can represent \([-128,127]\) integers, and we can make \(10000000_{2}\) the outlier identifier for int8 and narrow its range to \([-127,127]\). Similarly, the encoding algorithm can be easily extended to read two 8-bit elements simultaneously.
### Data Type for Outliers: Abfloat
Next, we quantize outliers using an outlier-specific data type. The large outliers usually span a wide range, for which a float-based encoding is better suited. We propose a data type called **adaptive biased float**, abfloat for short. The key idea is that by adding a proper bias to the exponent, all encoded values skip the interval where normal values lie, providing more range for outliers.
#### Float-to-Fixed Conversion
To accommodate the normal values and avoid fractions, we first convert the floating-point encoding to a fixed-point encoding with an exponent. The fixed point is also friendly to the hardware implementation and has a lower overhead than the floating point. We transform the floating point to fixed point with the following equation,
\[\texttt{sign}\times\left((1\ll\texttt{mb})+\texttt{mantissa}\right)\ll(\texttt{exponent}+\texttt{bias}), \tag{2}\]
where mb is the mantissa bit-width. This fixed-point encoding scheme is therefore friendly and efficient for hardware implementation, as it only involves shift operations. Tbl. 4 shows an example of the fixed-point E2M1 data type.
#### Adaptive Bias
Tbl. 3 and Tbl. 4 show that the range of fixed-point abfloat clearly overlaps with the normal values. For example, int4 and E2M1 contain the same numbers 3, 4, and 6, and flint4 and E2M1 have almost the same number range except for 24. Therefore, we need the adaptive bias to adjust the range of abfloat. For example, we set bias = 2 for E2M1, whose real values are then extended to \(\{12,\cdots,96\}\), which is complementary to the int4 normal values. Similarly, we set bias = 3 and extend the range to \(\{24,\cdots,192\}\) for the flint4 data type. We design a new decoder and instruction to implement the adaptive bias for abfloat in accelerators (Sec. 4.2).
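To make the effect of the bias concrete, the short sketch below enumerates the positive fixed-point magnitudes of E2M1 abfloat according to Eq. (2) for several biases; following the decoder convention of Sec. 4.2, the all-zero magnitude is assumed to encode the value 0.

```python
# Positive fixed-point magnitudes of E2M1 abfloat under Eq. (2):
#   value = ((1 << mb) + mantissa) << (exponent + bias), with mb = 1.
# The all-zero magnitude is assumed to encode 0, as in the decoder of Sec. 4.2.

def e2m1_magnitudes(bias):
    values = {0}
    for exponent in range(4):         # two exponent bits
        for mantissa in range(2):     # one mantissa bit
            if exponent == 0 and mantissa == 0:
                continue              # reserved for the value 0
            values.add(((1 << 1) + mantissa) << (exponent + bias))
    return sorted(values)

print(e2m1_magnitudes(0))  # [0, 3, 4, 6, 8, 12, 16, 24]       -> overlaps int4 / flint4
print(e2m1_magnitudes(2))  # [0, 12, 16, 24, 32, 48, 64, 96]   -> complements int4
print(e2m1_magnitudes(3))  # [0, 24, 32, 48, 64, 96, 128, 192] -> complements flint4
```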
#### E2M1 Abfloat
The 4-bit signed float has four possible configurations of exponent and mantissa: E0M3, E1M2, E2M1, and E3M0. They have different ranges and precisions. We conduct the following experiments to choose the most appropriate configuration as the final outlier-specific data type. To accommodate the broad range of outlier values, we quantize the largest outlier values (i.e., Max \(\sigma\) in Fig. 2) in Transformer models using all abfloat types. Then, we collect the average absolute error, as shown in Fig. 5. We find that E2M1 gives the smallest error in all tests, as it provides both a sufficiently large range and a certain degree of precision; it also yields the best results in our subsequent evaluations. Similarly, we adopt signed E4M3 for the 8-bit abfloat.
```
Input: Element e; Bias b.
Output: Quantized element q.
def AbfloatQuant(e, b):
    // Get exponent and base integer.
    exp = floor(log2(abs(e)) - 1);
    base_int = Round(e / 2^exp);
    if base_int == 4 then
        exp = exp + 1;
        base_int = base_int - 2;
    // Encode as abfloat data type.
    exp = exp - b;
    base_int = base_int & 1;
    unsigned_q = Concat(exp, base_int);
    q = Concat(e < 0, unsigned_q);
    return q
```
**Algorithm 2** The abfloat encoding algorithm.
In our work, we target post-training quantization (PTQ) (Wang et al., 2019), which does not require retraining and hence is best suited for large models, whose training is expensive. However, we still need one batch of data from the **training set** for the scale factor selection. Intuitively, inspired by the \(3\sigma\) rule, we take \(3\sigma\) as the initial scale factor. The algorithm then searches for the scale factor with the smallest MSE within a specific range around this baseline, which shows good results in our evaluations. For quantization-aware training (QAT) (Wang et al., 2019), we can obtain a suitable scale factor by retraining with the straight-through estimator (STE) (Beng et al., 2019).
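A possible sketch of this scale-factor search is shown below: starting from the \(3\sigma\)-based initial guess, a small grid of candidate scales around it is evaluated and the one with the smallest quantization MSE on a calibration batch is kept. The candidate grid, the mapping of \(3\sigma\) onto the int4 grid, and the fake-quantization helper are illustrative assumptions rather than the exact procedure of our framework.

```python
import torch

def fake_quant_int4(x, scale):
    """Quantize-dequantize to the narrowed int4 range [-7, 7] (illustrative)."""
    return torch.clamp(torch.round(x / scale), -7, 7) * scale

def search_scale(x, n_candidates=20, low=0.5, high=1.5):
    """Search the scale with the smallest MSE around the 3-sigma initial guess."""
    init = 3.0 * x.float().std() / 7.0          # map 3-sigma onto the int4 grid (assumption)
    best_scale, best_mse = init, float("inf")
    for ratio in torch.linspace(low, high, n_candidates):
        scale = init * ratio
        mse = torch.mean((x - fake_quant_int4(x, scale)) ** 2)
        if mse < best_mse:
            best_scale, best_mse = scale, mse
    return best_scale

calib = torch.randn(1024, 768)                  # one calibration batch (placeholder data)
print(search_scale(calib))
```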
## 4. OliVe Architecture
This section presents how to integrate OliVe in GPU and output-stationary systolic array architecture. We then present the hardware decoder for the aforementioned outlier-victim pair encoding and outlier data type. On these architectures, our proposed OliVe architecture can directly support the mixed precision (Wang et al., 2019; Wang et al., 2019) and mixed data type (Wang et al., 2019; Wang et al., 2019), which are efficient for quantizing DNN tensors that have different importance and distribution.
### GPU Tensor Core
We first describe how to integrate the OliVe design into the tensor core architecture of GPU in the Fig. 5(a). We employ Turing architecture (Turing et al., 2019) as our baseline GPU, which has 68 streaming multiprocessors (SMs), and each SM has eight tensor cores (544 in total), as shown in Tbl. 5. According to the modeling of prior work (Wang et al., 2019), each tensor core has two octets, which have eight FEDPs (four-element dot product). As such, there are \(68\times 8\times 2\times 8\times 4=34,816\) 16-bit float multipliers. The Turing architecture can originally support mixed-precision computation. For example, the RTX 2080Ti GPU with Turing architecture (Turing et al., 2019) provides 107.6, 215.2, and 430.3 TOPS (tera operations per second) for 16-bit float, 8-bit int, and 4-bit int, respectively. Therefore, we assume that the tensor core can simultaneously support 8-bit 8EDP (eight-element dot product) and 4-bit 16EDP (16-element dot product), as shown in Fig. 5(a).
We can easily embed our proposed OliVe architecture in GPU, which adopts the SIMD architecture. We first put the 4-bit outlier-victim pair decoders (Fig. 5(b)) for each 16EDP. To support the new OliVe data types, we add an adder and a shifter for each 16EDP. Similarly, we also design the 8-bit decoder for the 8EDP units.
### Decoders
**Outlier-Victim Pair Decoder.** To support outlier-victim pair decoding, we design a new decoder that can be easily embedded in existing accelerators. As shown in Fig. 5(b), the decoder reads 1 byte, which is the smallest addressable memory unit in many architectures, and exactly one value pair. Then, the decoder transforms the outlier identifier 1000\({}_{2}\) to 0 and decodes the outlier value with the outlier decoder. To accommodate the computation of the outlier abfloat values, the decoder will generate an exponent-integer pair. Therefore, the decoder needs to append a 0000\({}_{2}\) as the exponent number for the normal int4 data type. For flint4, we exploit its original decoder (Zhu et al., 2019) to get the exponent-integer pair.
**Outlier Decoder.** The above OVP decoder contains an outlier decoder for outlier values with the E2M1 abfloat data type. Fig. 7 shows the details of the 4-bit abfloat decoder design. For a 4-bit E2M1 abfloat number \(x=(b_{2}b_{1}b_{0})_{2}\), following equations decode exponent and integer:
\[\text{exponent}=\text{bias}+(b_{2}b_{1})_{2}\]
\[\text{integer}=\begin{cases}0&if\ x=000_{2}\\ (1b_{0})_{2}&otherwise\end{cases}\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
**Architecture** & **SM** & **TC** & **16-bit Unit** & **8-bit Unit** & **4-bit Unit** \\ \hline
**Turing**(Turing et al., 2019) & 68 & 544 & 34,816 & 69,632 & 139,264 \\ \end{tabular}
\end{table}
Table 5. The Turing GPU architecture.
Figure 6. OliVe integration on GPU tensor cores (a), which only requires a set of lightweight OVP decoder (b).
Figure 7. The 4-bit abfloat decoder for outlier values.
For example, when the bias is \(2\), a number \(0101_{2}\) is \(48_{10}\), since its exponent is \(2_{10}+10_{2}=4_{10}\) and base integer is \(11_{2}=3_{10}\). Therefore, its real value is \(3\ll 4=48\).
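The same decoding logic can be mirrored in a few lines of software; the sketch below follows the two equations above and reproduces the worked example (\(0101_{2}\) with bias 2 decodes to 48).

```python
# Decode a 4-bit E2M1 abfloat code (sign, b2, b1, b0) into its fixed-point value,
# following the exponent/integer equations above.

def decode_abfloat4(code, bias):
    sign = -1 if (code >> 3) & 0b1 else 1
    magnitude = code & 0b111            # (b2 b1 b0)_2
    if magnitude == 0:
        return 0                        # the all-zero magnitude encodes 0
    exponent = bias + (magnitude >> 1)  # bias + (b2 b1)_2
    integer = 0b10 | (magnitude & 0b1)  # (1 b0)_2
    return sign * (integer << exponent)

print(decode_abfloat4(0b0101, bias=2))  # 3 << 4 = 48, as in the example above
```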
Similarly, we also design and implement the 8-bit outlier-victim pair decoder and the E4M3 abfloat outlier decoder, which are straightforward extensions of 4-bit instances. As such, we do not present their details due to the limited space.
### Systolic Array
The systolic array (SA) integration is shown in Fig. 8. SA uses the same outlier-victim pair decoder design (Fig. 6b) as GPU, which shows the wide applicability of our design. But, unlike GPU, we only place the decoders along the borderlines, which can save most decoders. For example, if the array size is \(n\times m\), we only need \(n+m\) instead of \(n\times m\) decoders. That is one advantage of SA over the GPU's SIMD architecture. Our proposed OliVe-based data type can also support the systolic array processing element (PE) with an extra adder and shifter. We add an extra adder for every four PEs to support high-precision quantization, e.g., int8.
### OliVe MAC unit
After decoding, outlier and normal values are all transformed into unified exponent-integer pairs. To support the computation on decoded exponent-integer pairs, we need to add a shifter and an adder to the fixed-point MAC (multiply-accumulate) unit, as shown in Fig. 8 and in the 4-bit 16EDP unit of Fig. 6. For example, consider two exponent-integer pairs \(<a,b>\) and \(<c,d>\), where \(a\) and \(c\) are exponents, \(b\) and \(d\) are integers, and \(<a,b>\) represents:
\[<a,b>=b\ll a\]
Then, we can get the result:
\[<a,b>\times<c,d>\] \[=(b\times d)\ll(a+c)\] \[=<a+c,b\times d>\]
Note that the final result can be stored in a \(32\)-bit int.
### Mixed Precision
As mentioned in Sec. 3, OliVe quantization can support the int8 for normal values and E4M3 abfloat for outlier values. Therefore, we propose the mixed-precision processing element (PE) for the higher precision data types.
**8-bit Int.** The GPU tensor core architecture is originally designed for mixed-precision computation. For the systolic array, our architecture naturally supports 8-bit computation with four 4-bit PEs (Shen et al., 2017). An int8 number \(x\) can be split into its higher 4 bits and lower 4 bits, two 4-bit numbers \(h\) and \(l\), so that \(x\) can be represented by:
\[x=(h_{x}\ll 4)+l_{x}=<4,h_{x}>+<0,l_{x}>.\]
We then can multiply two int8 numbers of \(x\) and \(y\):
\[x\times y =\underbrace{<4,h_{x}>\times<4,h_{y}>}_{PE0} +\underbrace{<4,h_{x}>\times<0,l_{y}>}_{PE1}\] \[+\underbrace{<0,l_{x}>\times<4,h_{y}>}_{PE2} +\underbrace{<0,l_{x}>\times<0,l_{y}>}_{PE3}\]
Therefore, we can use four 4-bit PEs to calculate the above four multiplications and accumulate the products to get the final product value of \(x\times y\).
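These identities are easy to check numerically; the following sanity-check sketch multiplies two int8 values using the four 4-bit partial products in the exponent-integer form \(<a,b>=b\ll a\).

```python
# Sanity check of the exponent-integer pair arithmetic, <a, b> = b << a and
# <a, b> * <c, d> = <a + c, b * d>, applied to an int8 product built from four
# 4-bit partial products (high nibble signed, low nibble unsigned).

def pair(a, b):
    """Value represented by the exponent-integer pair <a, b>."""
    return b << a

def int8_mul_via_4bit_pes(x, y):
    hx, lx = x >> 4, x & 0xF                # x = (hx << 4) + lx
    hy, ly = y >> 4, y & 0xF                # y = (hy << 4) + ly
    return (pair(4 + 4, hx * hy)            # PE0: <4,hx> * <4,hy>
            + pair(4 + 0, hx * ly)          # PE1: <4,hx> * <0,ly>
            + pair(0 + 4, lx * hy)          # PE2: <0,lx> * <4,hy>
            + pair(0 + 0, lx * ly))         # PE3: <0,lx> * <0,ly>

assert all(int8_mul_via_4bit_pes(x, y) == x * y
           for x in range(-128, 128) for y in range(-128, 128))
print("int8 products via four 4-bit PEs match the direct products")
```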
**8-bit Abfloat.** Similarly, multiplication of 8-bit abfloat values can be supported using the same approach. An 8-bit abfloat number \(z\) is first decoded into an exponent \(e_{z}\) and an integer \(i_{z}\). We similarly split \(i_{z}\) into \(i_{z}=(h_{z}\ll 4)+l_{z}\), so that \(z=<4+e_{z},h_{z}>+<e_{z},l_{z}>\). Hence, the same method can be used to perform 8-bit abfloat multiplication with four 4-bit PEs, where abfloat has an extra exponent \(e_{z}\) compared to int8.
In the most extreme case, two outliers with abfloat may be multiplied together. Because we adopt the 32-bit int as the accumulator, the maximum multiplicand should not be over \(\sqrt{2^{31}-1}\). Therefore, for the outlier value with the abfloat type, we will clip the absolute value of the outlier within \(2^{15}<\sqrt{2^{31}-1}\) to avoid the overflow for the int32 accumulators. Our experiments show that the outlier values of the Transformer models are much smaller than \(2^{15}\). Specifically, \(2^{15}\) is about \(768\sigma\) after normalization and quantization. As shown in Fig. 2, the maximum value of outliers does not exceed \(325\sigma\). Thus, we observe that no outlier is truncated in practice.
### Instruction Set
For 4-bit tensor cores, the Turing GPU architecture adopts the instruction mma.s32.s4.s4.s32. Its four operands are matrices \(D\) (int32), \(A\) (int4), \(B\) (int4), and \(C\) (int32), with \(D=A\times B+C\). To support the OVP-based computation on GPU, we design a new instruction called mmaovp:
\[\underbrace{\texttt{mmaovp}.\texttt{s32}}_{\texttt{OVP-MMA}}.\underbrace{\texttt{ovpi4}.\texttt{ovpf4}.\texttt{s32}}_{\texttt{int4, flint4, int32}}.\underbrace{\texttt{s4}}_{\texttt{bias}}.\]
Moreover, because of the memory-aligned design of the data type, OliVe maintains the original programming interface for GPUs. We can replace the original int-based instruction with the OVP-based instruction (e.g., mmaovp) to easily construct the OVP-supported DNN quantization framework. Therefore, our OliVe framework has comprehensive and practical applicability, which is the most significant advantage of OliVe.
Figure 8. OliVe integration on systolic array.
## 5. Evaluation
In this section, we evaluate the accuracy of models quantized with OliVe. We also demonstrate OliVe's area overhead, speedup, and energy efficiency on both the GPU and the systolic array.
### Methodology
**Framework and Evaluation Models.** To evaluate our OliVe quantization framework, we implement it in Pytorch (Vaswani et al., 2017). We evaluate BERT-base (Devlin et al., 2019), BERT-large (Devlin et al., 2019), and BART-base (Devlin et al., 2019), the three most commonly used language models, on eight datasets of the GLUE benchmark (Devlin et al., 2019). In addition, we evaluate BERT-base (Devlin et al., 2019) and BART-base (Devlin et al., 2019) on the question answering tasks SQuAD v1.1 and SQuAD v2.0 (Vaswani et al., 2017). To validate our quantization framework on large language models, we also evaluate GPT2-XL (Vaswani et al., 2017), BLOOM-7B1 (Peters et al., 2019), and OPT-6.7B (Peters et al., 2019) on the Wikitext103 (Walick et al., 2017) and C4 (Liu et al., 2019) datasets. For all models mentioned above, we use state-of-the-art checkpoints from the huggingface repositories (Liu et al., 2019).
**Quantization Baselines.** We compare OliVe with existing quantization works, including GOBO (Liu et al., 2019), Outlier Suppression (Wang et al., 2019), Q8BERT (Wang et al., 2019), and ANT (Wang et al., 2019). Outlier suppression (Wang et al., 2019) is the state-of-the-art Transformer quantization work. GOBO (Liu et al., 2019) is also an outlier-aware quantization work. Q8BERT (Wang et al., 2019) is a method for quantizing GEMM operations to 8-bit. ANT (Wang et al., 2019) is a hardware-friendly quantization framework that achieves state-of-the-art results in both performance and accuracy.
**Accelerator Baselines.** We compare the performance and energy of OliVe against five DNN quantization accelerators, including OLAccel (Vaswani et al., 2017), AdaptivFloat (Vaswani et al., 2017) (shorted as AdaFloat), GOBO (Vaswani et al., 2017), ANT (Wang et al., 2019), and original int8 tensor cores in GPU (Wang et al., 2019). OLAccel (Vaswani et al., 2017) first proposed the outlier-aware quantization architecture for CNNs. We extend OLAccel to the Transformer-based models with element-wise mixed-precision weight and activation quantization. AdaFloat (Vaswani et al., 2017) extends the float type with a tensor-wise exponent bias. GOBO (Liu et al., 2019) is similar to OLAccel, but only supports the weight quantization for Transformer-based networks.
**OliVe Implementation.** We implement our decoder in Verilog RTL and synthesize it with Synopsys design compiler (Vaswani et al., 2017) with a 22 nm TSMC technology library to estimate its area, latency, and power. We use CACTTI (Wang et al., 2019) to estimate the area and power of on-chip memories. We integrate OliVe into GPU and hardware accelerator for the end-to-end performance and energy evaluation.
For the GPU integration and evaluation, we modify and extend GPGPU-Sim 4.0 (Chen et al., 2019) and AccelSim (Wang et al., 2019) with the configuration of the NVIDIA 2080 Ti architecture. We use AccelWattch (Wang et al., 2019), GPUWattch (Wang et al., 2019), and CACTI (Wang et al., 2019) for the energy estimation. The majority of Transformer layers are matrix multiplication operations. For the GEMM implementation on the tensor core, we use CUTLASS (Wang et al., 2019), which is NVIDIA's open-source implementation.
For the accelerator evaluation, we compare AdaFloat, OLAccel and ANT with OliVe. We develop a cycle-level simulator to estimate the overall performance of OliVe based on DnnWeaver (DnnWeaver, 2019). Although DnnWeaver (Dnnweaver, 2019) is an FPGA tool set, prior DNN quantization accelerators, which include BitFusion (Dnnweaver, 2019) and ANT (Wang et al., 2019), have extended its frontend to add ASIC performance and energy simulation. As OliVe does not redesign the baseline accelerator architecture, we can directly embed new OliVe-related instructions and data formats in the simulator without breaking the original simulation flow. In other words, we have used and modified the open-sourced implementations of BitFusion (Dnnweaver, 2019; Dnnweaver, 2019) and ANT (Wang et al., 2019; Wang et al., 2019).
### Accuracy Results
We first evaluate the accuracy of OliVe quantization framework on different tasks and datasets, which is the prerequisite for applying it to reduce the inference cost of large language models (LLMs).
**GLUE Dataset.** We evaluate BERT-base (Devlin et al., 2019), BERT-large (Devlin et al., 2019) and BART-base (Devlin et al., 2019) on eight datasets of the GLUE benchmark, but due to space limitations, we only show the results on the CoLA, SST-2, MNLI, QQP and MRPC datasets in Tbl. 6. For the BERT-base model, the accuracy of our 4-bit PTQ method drops by less than 1% compared to the original full-precision model on all eight datasets, and it outperforms all studied methods, including 4-bit, 6-bit, and 8-bit PTQ and QAT methods. Since GOBO (Liu et al., 2019) only quantizes weights, we use the same setting to compare with it; the result is shown in Tbl. 7. Our method also outperforms GOBO under the weight-only quantization setting. In addition, we evaluate the BERT-large model, which few prior quantization works evaluate due to its larger number of parameters, making it much more challenging than BERT-base. The results in Tbl. 6 show that the accuracy loss for BERT-large is around 1% on the five presented datasets, and similar results are found on the other datasets. For the BART-base model, our 4-bit PTQ results in Tbl. 6 show around 2% accuracy loss compared to the original full-precision accuracy on all datasets. In the above evaluation, our 4-bit PTQ results are better than all the PTQ results and most of the QAT results reported by prior works.
**SQuAD Dataset.** We also evaluate the accuracy of OliVe quantization on the question answering task SQuAD (Wang et al., 2019), which is more challenging than the previous GLUE datasets. Tbl. 8 shows the results on the SQuAD v1.1 and SQuAD v2.0 datasets. On both datasets, our
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Method** & **Algorithm** & **CoLA** & **SST-2** & **MNLI** & **QQP** & **MRPC** \\ \hline BERT\({}_{base}\) & 32-bit & 59.60 & 93.35 & 84.94 & 90.91 & 87.75 \\ \hline
**Ours** & **4-bit PTQ** & **59.30** & **92.43** & **84.10** & **90.36** & **87.99** \\ ANT & 4-bit QAT & 53.91 & **92.43** & 83.45 & - & - \\ ANT & 4-bit PTQ & 42.90 & 90.48 & 73.36 & 78.04 & 68.87 \\ OS & 4-bit QAT & 50.56 & 91.86 & 83.05 & 90.33 & 84.31 \\ OS & 6-bit PTQ & 54.40 & 91.86 & 82.02 & 88.94 & 83.33 \\ Q8 & 8-bit QAT & 58.48 & 92.24 & - & - & - \\ \hline BERT\({}_{large}\) & 32-bit & 63.35 & 93.46 & 86.65 & 91.07 & 87.99 \\ \hline
**Ours** & **4-bit PTQ** & **63.99** & **92.89** & **84.89** & **90.14** & **86.52** \\ \hline BART\({}_{base}\) & 32-bit & 56.32 & 93.35 & 86.45 & 91.34 & 87.50 \\ \hline
**Ours** & **4-bit PTQ** & **54.30** & **92.89** & **85.33** & **91.23** & 86.76 \\ OS & 4-bit QAT & 50.83 & 92.43 & 84.57 & 90.93 & **87.01** \\ OS & 6-bit PTQ & 44.51 & 90.94 & 82.98 & 88.45 & 80.88 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Results on GLUE datasets. Q8 and OS are Q8BERT (Wang et al., 2019) and outlier suppression (Wang et al., 2019) for short, respectively. Prior works do not report results in BERT\({}_{large}\) so we only compare against the original full-precision model.
4-bit PTQ method obtains less than 2% accuracy loss on the BERT-base model and around 3% accuracy loss on the BART-base model, which is better than the 6-bit PTQ method of the state-of-the-art quantization work, outlier suppression.
**Large Language Models.** We evaluate the accuracy of OliVe for LLMs under the PTQ setting. LLM inference is challenging as it requires significant memory, which makes retraining even more resource-consuming. Thus, the PTQ method without retraining is more desirable than the QAT method for LLMs.
The recent work (Kang et al., 2021) has shown that the int8 quantization has a significant accuracy drop when the number of parameters of the OPT model grows to 6.7B. As shown in Tbl. 9, our 8-bit PTQ method has only a negligible perplexity increase on OPT-6.7B (lower is better), while the accuracy of the int8-based quantization method has a significant degradation and is worse than our 4-bit PTQ method on the C4 dataset. On GPT2-XL and BLOOM-7B1 models, our 8-bit PTQ method essentially achieves the original perplexity, and the 4-bit PTQ method achieves the performance close to int8. For comparison, the accuracy results of int4 and 4-bit ANT are unacceptable (10-1000x worse than FP32 model).
To summarize, our OliVe quantization framework pushes the limit of 4-bit quantization to a new state-of-the-art, as it is able to achieve nearly original accuracy for the commonly used language models including BERT-base, BERT-large, and BART-base on most datasets. Moreover, OliVe also gives the state-of-the-art results of 4-bit and 8-bit quantization on large language models like GPT2-XL, BLOOM-7B1, and OPT-6.7B.
### GPU Performance and Energy
We evaluate LLMs on the GPU simulator, where the batch size is set to 2 for GPT-like models and 16 for BERT-like models. For OliVe, 4-bit quantization can limit the loss to a relatively small error range. GOBO (Zhu et al., 2021) can achieve the original accuracy of all models but has a significant overhead on compressing weight in DRAM. Note that GOBO only quantizes the weight tensors and computes with FP16. We implemented GOBO's memory organization in the GPU. For ANT (Zhu et al., 2021), we make all models close to the original accuracy or perplexity by mixed precision (BERT-like models (Kang et al., 2021; Kang et al., 2021) with \(<1\%\) loss and GPT-like models (Kang et al., 2021; Kang et al., 2021) with \(<3\) perplexity) with the PTQ setting. In addition, we also compare the original int8 of GPU, which has unacceptable accuracy loss, just for performance and energy comparison to GPU baseline. We compare the GPU architecture integrated with our OliVe design against various baselines. The performance and energy results are shown in Fig. 9.
**Performance.** Fig. 9(a) compares the speedups of different quantization methods on GPUs. OliVe achieves the best performance and has higher speedups than GOBO on the larger language models. Due to its FP16 computation and weight-only quantization, GOBO (Zhu et al., 2021) achieves the lowest performance among all studied designs. In contrast, OliVe quantizes both activations and weights to low bits and does not increase the memory access overhead. This avoids performance degradation when the number of parameters increases. PTQ seriously degrades the accuracy of ANT (Zhu et al., 2021) as it cannot handle outliers. In ANT, 80% of layers end up using int8 quantization, so the performance results of ANT and int8 are close. On average, OliVe achieves 4.5\(\times\), 2.7\(\times\), and 2.4\(\times\) speedups over GOBO, int8, and ANT, respectively.
**Energy.** Fig. 9(b) shows the normalized energy comparison of different designs, including constant, static, and dynamic power. The dynamic power includes DRAM, L2 cache, L1 data cache, shared memory, register file, and processing elements (CUDA cores and tensor cores); the L1 term is the sum of the L1 cache and shared memory energy. OliVe has the lowest energy due to its aligned 4-bit design and GPU compatibility. Because its mixed precision must fall back to higher bits to preserve accuracy, ANT is also close to int8 in energy. Overall, 4-bit OliVe is very hardware-friendly, so it can take full advantage of the energy savings of lower bits. OliVe achieves an average 4.0\(\times\), 2.3\(\times\), and 2.0\(\times\) energy reduction over GOBO, int8, and ANT, respectively.
**Area.** To measure the overhead of the OliVe decoders on the GPU, we scale the OliVe decoder to 12 \(nm\), the same manufacturing process as the RTX 2080 Ti (Zhu et al., 2021), and calculate the total area. According to Tbl. 5, there are 139,264 4-bit decoders and 69,632 8-bit decoders
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Method** & **Bits** & **MNLI** & **STSB(Pear.)** \\ \hline BERT\({}_{base}\) & 32 & 84.94 & 89.70 \\ \hline
**Ours (weights only)** & **4** & **84.75** & **89.62** \\ GOBO\({}^{*}\)(weights only) & 4 & 84.45 & 88.33 \\ \hline \hline \end{tabular}
\end{table}
Table 7. Comparison with GOBO on the MNLI and STSB dataset. \({}^{*}\)The accuracy of our GOBO implementation slightly differs from the number reported in the original paper (Zhu et al., 2021).
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Method** & **Bits** & **SQuAD v1.1** & **SQuAD v2.0** \\ \hline BERT\({}_{base}\) & 32 & 88.28/80.82 & 77.34/73.60 \\ \hline
**Ours** & **4** & **86.38/78.16** & **75.90/72.08** \\ Outlier Suppression & 6 & 84.48/75.53 & 74.69/70.55 \\ \hline BART\({}_{base}\) & 32 & 91.63/84.79 & 80.82/77.41 \\ \hline
**Ours** & **4** & **88.15/79.87** & **77.37/73.69** \\ Outlier Suppression & 6 & 83.68/75.34 & 74.44/70.36 \\ \hline \hline \end{tabular}
\end{table}
Table 8. PTQ results on SQuAD datasets.
on the GPU die, and their area is shown in Tbl. 10. Since the GPU die size of the RTX 2080 Ti is 754 \(mm^{2}\), the 4-bit decoders and 8-bit decoders only account for 0.250% and 0.166% of the entire GPU area, respectively, which we believe is a tiny and worthwhile overhead.
### Accelerator Performance and Energy
As explained in Sec. 5.1, we also integrate OliVe to the systolic-array-based hardware accelerator and compare its performance and energy against existing designs of ANT (Shen et al., 2017), OLAccel (Zhu et al., 2018), and AdaFloat (Zhu et al., 2018). Similar to its GPU implementation, ANT is a mixed-precision design. Since AdaFloat does not support mixed precision, we only provide the 8-bit quantization results. All accelerators can achieve close to original accuracy for all Transformer models.
**Performance.** As shown in Fig. 10(a), OliVe has the most significant advantage in latency speedup. Owing to its inability to deal with outliers, the performance of ANT is similar to OLAccel on most models. The speedups of OliVe are very similar across all models and do not change with the increasing number of model parameters. On average, OliVe achieves 4.8\(\times\), 3.8\(\times\), and 3.7\(\times\) speedups over AdaFloat, OLAccel, and ANT, respectively.
**Energy.** Fig. 10(b) shows the normalized energy consumption of different designs, composed of static and dynamic energy (DRAM, on-chip buffer, and core). OliVe has the lowest energy consumption. Compared to OLAccel, OliVe has a significant advantage in terms of static and DRAM energy. Worse mixed-precision results increase ANT's energy consumption, which even approaches that of AdaFloat on the BLOOM-7B1 model. On average, OliVe achieves 3.7\(\times\), 2.1\(\times\), and 3.3\(\times\) energy reduction over AdaFloat, OLAccel, and ANT, respectively.
**Area.** Tbl. 11 shows the area breakdown of the OliVe-based systolic array architecture under a 22 \(nm\) process. In this scenario, the 4-bit and 8-bit decoders introduce about 2.2% and 1.5% overhead of the core area, respectively, which is negligible compared to the area of the PEs in the array. Considering on-chip memory structures, the overall area overhead would be even smaller. In addition, we also scale the other accelerators to 22 \(nm\) using DeepScaleTool (Zhu et al., 2018) and obtain similar results. Note that we implement all accelerators with a similar area budget. The small area overhead of OliVe directly benefits from the carefully-designed outlier-victim pair (OVP) encoding.
## 6. Related Work and Discussion
This section presents and discusses research on DNN acceleration and compression. With the growing computation requirements of DNN models, it is crucial to design the algorithms and architecture to accelerate DNN models. Various compression methods, such
\begin{table}
\begin{tabular}{c|c|c|c} Component & Number & Area (\(mm^{2}\)) & Area Ratio \\ \hline
4-bit Decoder (13.53\(\mu m^{2}\)) & 139,264 & 1.88 & 0.250\% \\ \hline
8-bit Decoder (18.00\(\mu m^{2}\)) & 69,632 & 1.25 & 0.166\% \\ \end{tabular}
\end{table}
Table 10. The area of OliVe decoder on RTX 2080 Ti.
Figure 10. Comparison of different designs on accelerators.
\begin{table}
\begin{tabular}{c|c|c|c} Component & Number & Area (\(mm^{2}\)) & Area Ratio \\ \hline
4-bit Decoder (37.22\(\mu m^{2}\)) & 128 & 0.00476 & 2.2\% \\ \hline
8-bit Decoder (49.50\(\mu m^{2}\)) & 64 & 0.00317 & 1.5\% \\ \hline
4-bit PE (50.01\(\mu m^{2}\)) & 4096 & 0.205 & 96.3\% \\ \end{tabular}
\end{table}
Table 11. Area breakdown of OliVe under 22 \(nm\) process.
Figure 9. Comparison of four different designs on GPU.
as pruning and quantization, have been proposed to exploit the redundancy property of DNNs.
**DNN Acceleration.** In the past few years, various architectures (Krizhevsky et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019) have been proposed to match the computation characteristics of DNN models. To accelerate the DNN system, most optimizations focus on compilation (Krizhevsky et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019) and scheduling (Krizhevsky et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019).
The DNN acceleration highly relies on the performance of matrix multiplication. Therefore, several works focus on improving data reuse and simplifying control logic through a tailored dataflow architecture for matrix multiplication(Krizhevsky et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019). TPU (Krizhevsky et al., 2019) introduces a highly optimized dataflow architecture that efficiently reuses data across multiple computation stages. Modern GPUs (Krizhevsky et al., 2019) now incorporate matrix multiplication accelerators, such as tensor core, optimized for SIMD operations to enhance DNN workload acceleration further.
**Pruning.** Pruning means removing a portion of weight, input, or output of DNN layers, resulting in a sparse model with reduced model size. However, a significant reduction leads to irregular memory accesses, which are negative for the acceleration of inference and training. To address this issue, researchers propose several sparse optimizations in algorithms and hardware architectures to reduce inefficient computation (Krizhevsky et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019); Krizhevsky et al. (2019); Krizhevsky et al. (2019); Krizhevsky et al. (2019). In addition, a sparse tensor core is introduced in NVIDIA Ampere GPU architecture (Beng et al., 2016) to support the 2:4 structured sparsity.
**Quantization.** Quantization is another effective and efficient way to reduce the DNN model size and computation burden. There are two popular quantization methods, i.e., quantization-aware training (QAT) (Krizhevsky et al., 2018; Krizhevsky et al., 2019; Krizhevsky et al., 2019) and post-training quantization (PTQ) (Krizhevsky et al., 2018; Krizhevsky et al., 2019; Krizhevsky et al., 2019); Krizhevsky et al. (2019). QAT allows the model to adapt to quantization noise by retraining. PTQ is very effective to implement since it converts the original FP32 model directly into a lower-bit model without the training data and pipeline. Thus, PTQ is more feasible for language models at billion scales.
By quantizing data to low bit-widths, quantization accelerators can significantly reduce memory bandwidth requirements and increase the computation speed. BitFusion (Krizhevsky et al., 2019) combines low-bit PEs to support quantization at different bit-widths. OLAccel (Krizhevsky et al., 2019) applies 16-bit MACs to the first layer and 4-bit MACs to the other layers. DRQ (Krizhevsky et al., 2019) quantizes data in sensitive and insensitive regions with different precision, which is similar to outlier-aware quantization. GOBO (Krizhevsky et al., 2019) is an accelerator that takes advantage of outlier-aware quantization, quantizing the outliers of weights with higher precision. However, the outlier-aware quantization accelerators mentioned above have unaligned memory accesses, resulting in additional overhead and limited computing speed. ANT (Krizhevsky et al., 2019) proposes a fixed-length adaptive quantization framework but only takes the distribution of tensors into account and ignores the importance of outliers. In contrast, our proposed OliVe quantization framework can handle outlier values in a memory-aligned and hardware-friendly way.
AdaptivFloat (Krizhevsky et al., 2019) is similar to abfloat in adding a bias to the exponent, but the motivation and the way the bias is determined are different. AdaptivFloat aims to adapt to the dynamic ranges of different layers and calculates the optimal bias at a layer granularity using its own algorithm. Our abfloat aims to make full use of the encoding range, so it simply adds a uniform bias to all encoded values to skip the range of normal values, which is simpler to implement.
**GPU Architecture.** NVIDIA keeps releasing new generations of GPUs, e.g., the Ampere architecture (Beng et al., 2016), which adds sparse tensor cores for structured sparsity in DNNs and compute data compression to increase the memory access bandwidth. The structured sparsity for tensor cores is orthogonal to our proposed quantization, as our element-wise quantization does not affect the (sparse) tensor core dataflow. Ampere's compute data compression can compress zero values and similar bytes in DRAM and the L2 cache. As such, it is lossless and therefore general-purpose. It is also transparent and orthogonal to OliVe, which does not modify the memory system. In contrast, prior quantization works (Krizhevsky et al., 2019) perform compression at the DRAM level, which could be impacted by the data compression in Ampere GPUs.
On the other hand, DNN quantization is a lossy compression. We believe the strictly lossless compression would have limited benefits for DNN quantization. Thus, our work could complement Ampere's current compute data compression as a special-purpose solution. Since existing GPU simulators (Beng et al., 2016; Krizhevsky et al., 2019) cannot support data compression, we will continue to follow up and study this problem in the future work.
## 7. Conclusion
In this work, we propose a novel outlier-victim pair (OVP) quantization, which can handle outlier values with low hardware overhead and achieve high performance gains. The key insight is to sacrifice the normal values next to the essential outliers (called victims) to accommodate them. The OVP encoding designed based on this idea makes outliers and normal values globally identical but locally distinguishable. To the best of our knowledge, OliVe pushes the limit of 4-bit quantization to a new state of the art, as it is able to achieve nearly original accuracy for commonly used language models. Moreover, our architecture design can be efficiently integrated into existing hardware accelerators such as tensor cores and systolic arrays. Finally, the OliVe-based accelerator surpasses the existing outlier-aware accelerator GOBO with a \(4.5\times\) speedup and a \(4.0\times\) energy reduction.
###### Acknowledgements.
This work was supported by the National Key R&D Program of China under Grant 2022YFB4501401 and by National Natural Science Foundation of China (NSFC) grants 62222210, 62072297, and 61832006. The authors would like to thank the anonymous reviewers for their constructive feedback on improving the work. We also thank Tailong Wangliu and Shuangjie Ruan for their continuous support.
|
2306.11488 | Informed POMDP: Leveraging Additional Information in Model-Based RL | In this work, we generalize the problem of learning through interaction in a
POMDP by accounting for eventual additional information available at training
time. First, we introduce the informed POMDP, a new learning paradigm offering
a clear distinction between the information at training and the observation at
execution. Next, we propose an objective that leverages this information for
learning a sufficient statistic of the history for the optimal control. We then
adapt this informed objective to learn a world model able to sample latent
trajectories. Finally, we empirically show a learning speed improvement in
several environments using this informed world model in the Dreamer algorithm.
These results and the simplicity of the proposed adaptation advocate for a
systematic consideration of eventual additional information when learning in a
POMDP using model-based RL. | Gaspard Lambrechts, Adrien Bolland, Damien Ernst | 2023-06-20T12:20:23Z | http://arxiv.org/abs/2306.11488v3 | # Informed POMDP: Leveraging Additional Information in Model-Based RL
###### Abstract
In this work, we generalize the problem of learning through interaction in a POMDP by accounting for the additional information that may be available at training time. First, we introduce the informed POMDP, a new learning paradigm offering a clear distinction between the training information and the execution observation. Next, we propose an objective for learning a sufficient statistic from the history for the optimal control that leverages this information. We then show that this informed objective amounts to learning an environment model from which we can sample latent trajectories. Finally, we show for the Dreamer algorithm that the convergence speed of the policies is sometimes greatly improved on several environments by using this informed environment model. These results and the simplicity of the proposed adaptation advocate for a systematic consideration of any additional information available at training time when learning in a POMDP using model-based RL.
## 1 Introduction
Reinforcement learning (RL) aims to learn to act optimally through interaction with environments whose dynamics are unknown. A major challenge in this field is partial observability, where only incomplete observation \(o\) of the Markovian state of the environment \(s\) is available for taking action \(a\). Such an environment can be formalized as a partially observable Markov decision process (POMDP). In this context, an optimal policy \(\eta(a|h)\) generally depends on the history \(h\) of observations and past actions, which grows linearly with time. Fortunately, it is theoretically possible to find a statistic \(f(h)\) from the history \(h\) that summarizes all relevant information to act optimally, and that is recurrent. Formally, a recurrent statistic is updated according to \(f(h^{\prime})=u(f(h),a,o^{\prime})\) each time that an action \(a\) is taken and a new observation \(o^{\prime}\) is received, with \(h^{\prime}=(h,a,o^{\prime})\). Such a statistic \(f(h)\) for which there exists an optimal policy \(\eta(a|h)=g(a|f(h))\) is called a sufficient statistic from the history for the optimal control. Standard approaches have thus relied on learning a recurrent policy \(\eta_{\theta,\phi}(a|h)=g_{\phi}(a|f_{\theta}(h))\), using a recurrent neural network (RNN) \(f_{\theta}\) for the statistic. Those policies are simply trained by stochastic gradient ascent of a RL loss using backpropagation through time (Bakker, 2001; Wierstra et al., 2010; Hausknecht & Stone, 2015; Heess et al., 2015; Zhang et al., 2016; Zhu et al., 2017). In this case, the RNN learns a sufficient statistic \(f_{\theta}(h)\) as it learns an optimal policy (Lambrechts et al., 2022; Hennig et al., 2023). Although those standard approaches are theoretically able to implicitly learn a statistic that is sufficient for the optimal control, sufficient statistics can also be learned explicitly. Notably, many works (Igl et al., 2018; Buesing et al., 2018; Han et al., 2019; Gregor et al., 2019; Guo et al., 2020; Lee et al., 2020; Hafner et al., 2019, 2020, 2021, 2023; Guo et al., 2018; Gregor et al., 2019) have focused on learning a recurrent statistic that is predictive sufficient (Bernardo & Smith, 2009) for the reward and next observation given the action: \(p(r,o^{\prime}|h,a)=p(r,o^{\prime}|f(h),a)\). A recurrent and predictive sufficient statistic is indeed proven to provide a sufficient statistic for the optimal control (Subramanian et al., 2022). It can be noted that in those works, this sufficiency objective is pursued jointly with the RL objective.
Whereas those methods allow one to learn sufficient statistics and optimal policies in the context of POMDP, they learn solely from the partial observations. However, assuming the same partial observability at training time and execution time is too pessimistic for many environments, notably for those that are simulated. We claim that additional information about the state \(s\), be it partial or complete, can be leveraged during training for learning sufficient statistics, in order to increase the supervision of policies. To this end, we generalize the problem of learning from interaction in a POMDP by introducing the informed POMDP. This formalization introduces the training information \(i\) about the state \(s\), which is available at training time, but keeps the execution POMDP unchanged. Importantly, this training information is designed such that the observation is conditionally independent of the state given the information. Note that it is
always possible to design such an information \(i\), possibly by concatenating the observation \(o\) with any additional observations \(o^{+}\), such that \(i=(o,o^{+})\). This formalization offers a new learning paradigm where the training information is used along with the reward and observation to supervise the learning of the policy.
In the context of informed POMDP, we show that recurrent statistics are sufficient for the optimal control of the execution POMDP when they are predictive sufficient for the reward and next information given the action: \(p(r,i^{\prime}|h,a)=p(r,i^{\prime}|f(h),a)\). We then derive a convenient objective for finding a predictive sufficient statistic, which amounts to approximating the conditional distribution \(p(r,i^{\prime}|h,a)\) through likelihood maximization using a model \(q_{\theta}(r,i^{\prime}|f_{\theta}(h),a)\), where \(f_{\theta}\) is a recurrent statistic. Compared to the classic objective for learning sufficient statistics (Igl et al., 2018; Buesing et al., 2018; Han et al., 2019; Hafner et al., 2019), this objective approximates \(p(r,i^{\prime}|h,a)\) instead of \(p(r,o^{\prime}|h,a)\). In addition, we show that this learned generative model \(q_{\theta}(r,i^{\prime}|f_{\theta}(h),a)\) is an environment model from which latent trajectories can be generated. Consequently, policies can be optimized in a model-based RL fashion using those generated trajectories. This proposed approach boils down to adapting model-based algorithms, such as PlaNet or Dreamer (Hafner et al., 2019, 2020, 2021, 2023), by relying on a model of the information instead of a model of the observation. We consider several standard environments that we formalize as informed POMDPs (Mountain Hike, Flickering Atari, Velocity Control and Flickering Control). Our informed adaptation of Dreamer is shown to provide a significant convergence speed and performance improvement on some environments, while hurting performances in others, especially in the flickering environments.
Other methods were proposed to account for additional information available at training time. Those approaches, referred to as asymmetric learning, usually learn policies for the POMDP by imitating an expert policy conditioned on the state (Choudhury et al., 2018). Alternatively, asymmetric actor-critic approaches use a critic conditioned on the state (Pinto et al., 2018). However, those heuristic approaches lack a theoretical framework, and the resulting policies are known to be suboptimal for the POMDP (Warrington et al., 2021; Baisero and Amato, 2022; Baisero et al., 2022). Intuitively, under partial observability, optimal policies might indeed need to consider actions that reduce the state uncertainty or that correspond to safer trajectories. To address those limitations, Warrington et al. (2021) proposes to constrain the expert policy such that its imitation results in an optimal policy in the POMDP. Baisero and Amato (2022) proposed an unbiased state-conditioned critic for asymmetric actor-critic approaches, by introducing the history-state value function \(V(h,s)\). Baisero and Amato (2022) adapted this method to value-based RL, where the history-dependent value function \(V(h)\) uses the history-state value function \(V(h,s)\) in its temporal difference target. Alternatively, Nguyen et al. (2022) modified the RL objective by trading off the expert imitation objective with respect to the return, resulting in an imitation bonus akin to the entropy in soft actor-critic methods. Finally, in the work that is the closest to ours, Nguyen et al. (2021) proposed, under the strong assumption that beliefs \(b(s)=p(s|h)\) are available at training time, to enforce that the statistic \(f(h)\) encodes the belief, a sufficient statistic for the optimal control (Astrom, 1965). In contrast, we introduce a novel approach that is guaranteed to provide a sufficient statistic for the optimal control, and that leverages the additional information only through the objective. Moreover, our new learning paradigm is not restricted to state supervision, but supports any level of additional information. Finally, to the best of our knowledge, our method is the first to exploit additional information for learning an environment model in model-based RL for POMDPs.
This work is structured as follows. In Section 2, the informed POMDP is presented along with the underlying execution POMDP, and its optimal policies. In Section 3, the learning objective for sufficient statistic is presented in the context of informed POMDP. In Section 4, the model-based RL algorithm that is used, Dreamer, is introduced along with our proposed adaptation to informed POMDPs. In Section 5, we compare the performance and convergence speed of the Uninformed Dreamer and the Informed Dreamer in several environments. Finally, in Section 6, we conclude by summarizing the contributions and limitations of this work.
## 2 Informed Partially Observable Markov Decision Process
In Subsection 2.1, we introduce the informed POMDP and the associated training information, along with the underlying execution POMDP. In Subsection 2.2, we introduce the optimal policies and the reinforcement learning objective in the context of informed POMDPs.
### Informed POMDP and Execution POMDP
Formally, an informed POMDP \(\widetilde{\mathcal{P}}\) is defined as a tuple \(\widetilde{\mathcal{P}}=(\mathcal{S},\mathcal{A},\mathcal{I},\mathcal{O},T,R, \widetilde{I},\widetilde{O},P,\gamma)\) where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{I}\) is the information space, and \(\mathcal{O}\) is the observation space. The initial state distribution \(P\) gives the probability \(P(s_{0})\) of \(s_{0}\in\mathcal{S}\) being the initial state of the decision process. The dynamics are described by the transition distribution \(T\) that gives the probability \(T(s_{t+1}|s_{t},a_{t})\) of \(s_{t+1}\in\mathcal{S}\) being the state resulting from action \(a_{t}\in\mathcal{A}\) in state \(s_{t}\in\mathcal{S}\). The reward function \(R\) gives the immediate reward \(r_{t}=R(s_{t},a_{t})\) obtained at each transition. The information distribution \(\widetilde{I}\) gives
the probability \(\widetilde{I}(i_{t}|s_{t})\) to get information \(i_{t}\in\mathcal{I}\) in state \(s_{t}\in\mathcal{S}\). The observation distribution \(\widetilde{O}\) gives the probability \(\widetilde{O}(o_{t}|i_{t})\) to get observation \(o_{t}\in\mathcal{O}\) given information \(i_{t}\). Finally, the discount factor \(\gamma\in[0,1[\) gives the relative importance of future rewards. The main assumption about an informed POMDP is that the observation \(o_{t}\) is conditionally independent of the state \(s_{t}\) given the information \(i_{t}\): \(p(o_{t}|i_{t},s_{t})=\widetilde{O}(o_{t}|i_{t})\). In other words, the random variables \(s_{t}\), \(i_{t}\) and \(o_{t}\) satisfy the Bayesian network \(s_{t}\longrightarrow i_{t}\longrightarrow o_{t}\). In practice, it is always possible to define such a training information \(i_{t}\). For example, the information \(i_{t}=(o_{t},o_{t}^{+})\) always satisfies the aforementioned conditional independence, whatever \(o_{t}^{+}\) is. Taking a sequence of \(t\) actions in the informed POMDP conditions its execution and provides samples \((i_{0},o_{0},a_{0},r_{0},\ldots,i_{t},o_{t})\) at training time, as illustrated in Figure 1.
For each informed POMDP, there is an underlying execution POMDP that is defined as \(\mathcal{P}=(\mathcal{S},\mathcal{A},\mathcal{O},T,R,O,P,\gamma)\), where \(O(o_{t}|s_{t})=\int_{\mathcal{I}}\widetilde{O}(o_{t}|i)\widetilde{I}(i|s_{t}) \,\mathrm{d}i\). Taking a sequence of \(t\) actions in the execution POMDP conditions its execution and provides the history \(h_{t}=(o_{0},a_{0},\ldots,o_{t})\in\mathcal{H}\) at execution time, where \(\mathcal{H}\) is the set of histories of arbitrary length. Note that the information samples \(i_{0},\ldots,i_{t}\) and reward samples \(r_{0},\ldots,r_{t-1}\) are not included in the history, since they are not available at execution time, as illustrated in Figure 1.
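As an illustration of this interface, a training-time environment can expose the information \(i\) alongside the observation \(o\), while only \(o\) remains available at execution. The sketch below is a hypothetical wrapper: the underlying simulator methods (`step`, `observe`, `privileged`) and the particular choice \(i=(o,o^{+})\) are assumptions made for illustration.

```python
# Hypothetical wrapper illustrating the informed POMDP interface: the training
# information i = (o, o+) is returned alongside the observation o at training
# time, while only o is available at execution time. The underlying simulator
# methods (step, observe, privileged) are assumptions made for illustration.

class InformedEnv:
    def __init__(self, sim, training=True):
        self.sim = sim              # simulator with access to the full state s
        self.training = training

    def step(self, action):
        state, reward, done = self.sim.step(action)
        obs = self.sim.observe(state)              # partial observation o ~ O(.|s)
        if not self.training:
            return obs, reward, done               # execution POMDP sample
        info = (obs, self.sim.privileged(state))   # i = (o, o+), so p(o|i,s) = p(o|i)
        return obs, info, reward, done             # informed POMDP sample
```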
### Reinforcement Learning Objective
A policy \(\eta\in H\) is defined as a mapping from histories to probability measures over the action space, where \(H=\mathcal{H}\rightarrow\Delta(\mathcal{A})\) is the set of such mappings. A policy is said to be optimal for an informed POMDP when it is optimal in the underlying execution POMDP, i.e., when it maximizes the expected return \(J(\eta)\), defined as,
\[J(\eta)=\mathop{\mathbb{E}}_{\begin{subarray}{c}s_{0}\sim P(\cdot)\\ o_{t}\sim O(\cdot|s_{t})\\ a_{t}\sim\eta(\cdot|h_{t})\\ s_{t+1}\sim T(\cdot|s_{t},a_{t})\end{subarray}}\left[\sum_{t=0}^{\infty} \gamma^{t}R(s_{t},a_{t})\right]. \tag{1}\]
The RL objective for an informed POMDP is thus to find an optimal policy \(\eta^{*}\in\operatorname*{arg\,max}_{\eta\in H}J(\eta)\) for the execution POMDP from interaction with the informed POMDP.
## 3 Optimal Control with Recurrent Sufficient Statistics
In Subsection 3.1, we introduce sufficient statistics for the optimal control and discuss their relation with optimal policies. In Subsection 3.2, we derive an objective for learning in an informed POMDP a recurrent statistic that is sufficient for the optimal control. In Subsection 3.3, we propose a joint objective for learning an optimal recurrent policy with a sufficient statistic. For the sake of conciseness, in this section, we simply use \(x\) to denote a random variable at the current time step and \(x^{\prime}\) to denote it at the next time step. Moreover, we use the composition notation \(g\circ f\) to denote the history-dependent policy \(g(\cdot|f(\cdot))\).
### Recurrent Sufficient Statistics
Let us first define the concept of sufficient statistic, from which a necessary condition for optimality can be derived.
**Definition 1** (Sufficient statistic).: In an informed POMDP \(\widetilde{\mathcal{P}}\) and in its underlying execution POMDP \(\mathcal{P}\), a statistic from the history \(f\colon\mathcal{H}\rightarrow\mathcal{Z}\) is sufficient for the optimal control if, and only if,
\[\max_{g\colon\,\mathcal{Z}\rightarrow\Delta(\mathcal{A})}J(g\circ f)=\max_{ \eta\colon\,\mathcal{H}\rightarrow\Delta(\mathcal{A})}J(\eta). \tag{2}\]
**Corollary 1** (Sufficiency of optimal policies).: In an informed POMDP \(\widetilde{\mathcal{P}}\) and in its underlying execution POMDP \(\mathcal{P}\), if a policy \(\eta=g\circ f\) is optimal, then the statistic \(f:\mathcal{H}\rightarrow\mathcal{Z}\) is sufficient for the optimal control.
In this work, we focus on learning recurrent policies, i.e., policies \(\eta=g\circ f\) for which the statistic \(f\) is recurrent. Formally, we have,
\[\eta(a|h) =g(a|f(h)),\;\forall(h,a), \tag{3}\] \[f(h^{\prime}) =u(f(h),a,o^{\prime}),\;\forall h^{\prime}=(h,a,o^{\prime}). \tag{4}\]
This allows the history to be processed iteratively each time a new action is taken and a new observation is received. According to Corollary 1, when learning a recurrent policy \(\eta=g\circ f\), the objective can thus be decomposed into two problems: finding a sufficient statistic \(f\) and an optimal conditional distribution \(g\) conditioned on this statistic,
\[\max_{\begin{subarray}{c}f\colon\,\mathcal{H}\rightarrow\mathcal{Z}\\ g\colon\,\mathcal{Z}\rightarrow\Delta(\mathcal{A})\end{subarray}}J(g\circ f). \tag{5}\]
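For concreteness, the following is a minimal PyTorch sketch of such a recurrent policy \(\eta=g\circ f\), with the update \(u\) implemented by a GRU cell and \(g\) by a categorical action head. It is an illustration only, not the implementation used in this paper (which builds on DreamerV3); the dimensions and network choices are assumptions.

```python
# Minimal sketch of a recurrent policy eta = g o f (Eqs. 3-4): the statistic
# z = f(h) is updated recurrently and actions are sampled from g(.|z).
import torch
import torch.nn as nn
from torch.distributions import Categorical

OBS_DIM, ACT_DIM, Z_DIM = 8, 4, 32  # assumed dimensions

class RecurrentPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.update = nn.GRUCell(OBS_DIM + ACT_DIM, Z_DIM)  # z' = u(z, a, o')
        self.head = nn.Linear(Z_DIM, ACT_DIM)               # logits of g(.|z)

    def initial_statistic(self, batch_size):
        return torch.zeros(batch_size, Z_DIM)               # z_0 = f(h_0)

    def step(self, z, prev_action, obs):
        x = torch.cat([prev_action, obs], dim=-1)
        z_next = self.update(x, z)                           # update the statistic
        dist = Categorical(logits=self.head(z_next))         # a ~ g(.|z')
        return z_next, dist

policy = RecurrentPolicy()
z = policy.initial_statistic(1)
prev_a = torch.zeros(1, ACT_DIM)                             # null action a_{-1}
obs = torch.randn(1, OBS_DIM)
z, dist = policy.step(z, prev_a, obs)
action = dist.sample()
```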
### Learning Recurrent Sufficient Statistics
Below, we provide a sufficient condition for a statistic to be sufficient for the optimal control of an informed POMDP.
**Theorem 1** (Sufficiency of recurrent predictive sufficient statistics).: In an informed POMDP \(\widetilde{\mathcal{P}}\), a statistic \(f\colon\mathcal{H}\rightarrow\mathcal{Z}\) is sufficient for the optimal control if it is (i) recurrent and
Figure 1: Informed POMDP: Bayesian network of its execution, arrows represent conditional dependencies.
(ii) predictive sufficient for the reward and next information given the action,
(i) \[f(h^{\prime})=u(f(h),a,o^{\prime}),\ \forall h^{\prime}=(h,a,o^{\prime}), \tag{6}\] (ii) \[p(r,i^{\prime}|h,a)=p(r,i^{\prime}|f(h),a),\ \forall(h,a,r,i^{\prime}). \tag{7}\]
We provide the proof for this theorem in Appendix A, generalizing earlier work by Subramanian et al. (2022).
Now, let us consider a distribution over the histories and actions whose probability density function is written \(p(h,a)\). For example, we consider the stationary distribution induced by the current policy \(\eta\) in the informed POMDP \(\widetilde{\mathcal{P}}\). Let us also assume that the probability density function \(p(h,a)\) is non-zero everywhere. As shown in Appendix B, under a mild assumption, any statistic satisfying the following objective,
\[\max_{\begin{subarray}{c}f:\ \mathcal{H}\rightarrow\mathcal{Z}\\ q:\ \mathcal{Z}\times\mathcal{A}\rightarrow\Delta(\mathbb{R}\times \mathcal{I})\end{subarray}}\operatorname*{\mathbb{E}}_{p(h,a,r,i^{\prime})} \log q(r,i^{\prime}|f(h),a), \tag{8}\]
also satisfies (ii). This variational objective jointly optimizes the statistic function \(f:\mathcal{H}\rightarrow\mathcal{Z}\) with the conditional probability density function \(q\colon\mathcal{Z}\times\mathcal{A}\rightarrow\Delta(\mathbb{R}\times \mathcal{I})\). According to Theorem 1, a recurrent statistic satisfying objective (8) is thus sufficient for the optimal control.
In practice, both the recurrent statistic and the probability density function are implemented with neural networks \(f_{\theta}\) and \(q_{\theta}\), respectively. They are both parametrized by \(\theta\in\mathbb{R}^{d}\), such that the objective can be maximized by stochastic gradient ascent. Regarding \(f_{\theta}\), it is implicitly implemented by an RNN whose update function \(z_{t}=u_{\theta}(z_{t-1};x_{t})\) is parametrized by \(\theta\). The inputs are \(x_{t}=(a_{t-1},o_{t})\), with \(a_{-1}\) the null action, which is typically chosen to be zero. The hidden state of the RNN \(z_{t}=f_{\theta}(h_{t})\) is thus a statistic from the history that is recurrently updated using \(u_{\theta}\). Regarding \(q_{\theta}\), it is implemented by a parametrized probability density function estimator. The objective reads,
\[\max_{\theta}\underbrace{\operatorname*{\mathbb{E}}_{p(h,a,r,i^{\prime})} \log q_{\theta}(r,i^{\prime}|f_{\theta}(h),a)}_{L(f_{\theta})}. \tag{9}\]
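As an illustration of how objective (9) can be estimated on a batch of transitions, the sketch below uses diagonal-Gaussian heads for the reward and next information; this choice, the network sizes, and the fact that \(z\) is drawn at random instead of being produced by \(f_{\theta}\) are simplifications of ours, not details of the paper.

```python
# Sketch of the statistic objective (9): maximize E[ log q(r, i' | f(h), a) ].
# Diagonal-Gaussian likelihoods are an assumption made for illustration.
import torch
import torch.nn as nn
from torch.distributions import Normal

Z_DIM, ACT_DIM, INFO_DIM = 32, 4, 16  # assumed dimensions

reward_head = nn.Linear(Z_DIM + ACT_DIM, 2)            # mean, log-std of r
info_head = nn.Linear(Z_DIM + ACT_DIM, 2 * INFO_DIM)   # mean, log-std of i'

def statistic_loss(z, a, r, i_next):
    """Negative of objective (9) on a batch, to be minimized by SGD."""
    x = torch.cat([z, a], dim=-1)
    r_mean, r_logstd = reward_head(x).chunk(2, dim=-1)
    i_mean, i_logstd = info_head(x).chunk(2, dim=-1)
    log_q_r = Normal(r_mean, r_logstd.exp()).log_prob(r).sum(-1)
    log_q_i = Normal(i_mean, i_logstd.exp()).log_prob(i_next).sum(-1)
    return -(log_q_r + log_q_i).mean()

# In practice z = f_theta(h) comes from the RNN; random tensors stand in here.
batch = 64
loss = statistic_loss(torch.randn(batch, Z_DIM), torch.randn(batch, ACT_DIM),
                      torch.randn(batch, 1), torch.randn(batch, INFO_DIM))
loss.backward()  # gradients would also flow into f_theta through z
```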
We might wonder whether this informed objective is better than the classic objective, where \(i=o\). In this work, we hypothesize that regressing the information distribution instead of the observation distribution is a better objective in practice. Indeed, according to the data processing inequality applied to the Bayesian network \(s^{\prime}\longrightarrow i^{\prime}\longrightarrow o^{\prime}\), the information \(i^{\prime}\) is more informative than the observation \(o^{\prime}\) about the Markovian state \(s^{\prime}\) of the environment,
\[I(s^{\prime},i^{\prime}|h,a)\geq I(s^{\prime},o^{\prime}|h,a). \tag{10}\]
We thus expect the statistic \(f_{\theta}(h)\) to converge faster towards a sufficient statistic, and the policy to converge faster towards an optimal policy.
### Optimal Control with Recurrent Sufficient Statistics
As seen from Corollary 1, sufficient statistics are needed for the optimal control of POMDPs. Moreover, as we focus on recurrent policies implemented with RNNs, we can exploit objective (9) to learn a sufficient statistic \(f_{\theta}\). In practice, we jointly optimize the RL objective \(J(\eta_{\theta,\phi})=J(g_{\phi}\circ f_{\theta})\) and the statistic objective \(L(f_{\theta})\). This allows the information \(i\) to be used to guide the statistic learning through \(L(f_{\theta})\). This joint objective reads,
\[\max_{\theta,\phi}J(g_{\phi}\circ f_{\theta})+L(f_{\theta}). \tag{11}\]
A policy \(\eta_{\theta,\phi}\) satisfying objective (11) is guaranteed to satisfy (5), and is thus optimal for the informed and execution POMDPs. Note however that there may exist policies satisfying (5) that do not satisfy (11).
The objective \(L(f_{\theta})\) provides a recurrent model of the reward and next information given the history and action. In the following, we show that we can exploit this model to generate artificial trajectories, called imagined trajectories, under conditions on \(q_{\theta}\). Those imagined trajectories can then be used to maximize the imagined return of the policy, which in turn maximizes \(J(g_{\phi}\circ f_{\theta})\) if the model is accurate.
## 4 Model-Based Reinforcement Learning through Informed World Models
Model-based RL focuses on learning a model of the dynamics \(p(r,o^{\prime}|h,a)\) of the environment, known as a world model. Since this approximate model allows one to generate imagined trajectories, a near-optimal behaviour is usually derived either by online planning or by optimizing a policy based on those trajectories (Sutton, 1991; Ha and Schmidhuber, 2018; Chua et al., 2018; Zhang et al., 2019; Hafner et al., 2019, 2020). In the following, we show that our informed model \(q_{\theta}(r,i^{\prime}|f_{\theta}(h),a)\) can be slightly modified to provide an informed world model from which latent trajectories can be sampled. We then propose the Informed Dreamer algorithm, adapting to informed POMDPs the DreamerV3 algorithm (Hafner et al., 2023). In Subsection 4.1, we introduce this informed world model and its training objective. In Subsection 4.2, we present the Informed Dreamer algorithm exploiting this informed world model to train its policy.
### Informed World Model
In this work, we implement the probability density function \(q_{\theta}\) with a variational autoencoder (VAE) conditioned on the statistic of the RNN. Together, they form a variational RNN (VRNN) as proposed in (Chung et al., 2015), also known as a recurrent state-space model (RSSM) in the RL context
(Hafner et al., 2019). Formally, we have,
\[\hat{e} \sim q_{\theta}^{p}(\cdot|z,a),\qquad\text{(prior)}\tag{12}\] \[\hat{r} \sim q_{\theta}^{r}(\cdot|z,\hat{e}),\qquad\text{(reward decoder)}\tag{13}\] \[\hat{i}^{\prime} \sim q_{\theta}^{i}(\cdot|z,\hat{e}),\qquad\text{(information decoder)}\tag{14}\]
where \(\hat{e}\) is the latent variable of the VAE. The prior \(q_{\theta}^{p}\) and the decoders \(q_{\theta}^{i}\) and \(q_{\theta}^{r}\) are jointly trained with the encoder,
\[e \sim q_{\theta}^{e}(\cdot|z,a,o^{\prime}),\qquad\text{(encoder)}\tag{15}\]
to maximize the likelihood of reward and next information samples. The latent representation \(e\sim q_{\theta}^{e}(\cdot|z,a,o^{\prime})\) of the next observation \(o^{\prime}\) can be used to update the statistic to \(z^{\prime}\),
\[z^{\prime}=u_{\theta}(z,a,e).\qquad\text{(recurrence)}\tag{16}\]
Note that the statistic \(z\) is no longer deterministically updated to \(z^{\prime}\) given \(a\) and \(o^{\prime}\); instead we have \(z\sim f_{\theta}(\cdot|h)\), which is induced by \(u_{\theta}\) and \(q_{\theta}^{e}\). This key design choice allows sampling imagined trajectories without reconstructing the imagined observation \(\hat{o}^{\prime}\), by using the latent \(\hat{e}\) in update (16), as shown in the next subsection. This requirement of latent representation sampling restricts the class of model-based algorithms that can be adapted using our method.
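To make the structure of Eqs. (12)-(16) concrete, here is a minimal PyTorch sketch of the informed world model with Gaussian prior, encoder and decoders. This is a simplified illustration of ours; the actual model follows DreamerV3's RSSM, with discrete latents and larger networks.

```python
# Sketch of the informed world model (Eqs. 12-16): prior, encoder, reward and
# information decoders, and the recurrent statistic update. Gaussian latents
# are a simplification; DreamerV3 uses a discrete VAE.
import torch
import torch.nn as nn
from torch.distributions import Normal

Z_DIM, E_DIM, ACT_DIM, OBS_DIM, INFO_DIM = 32, 8, 4, 8, 16  # assumed sizes

prior_net   = nn.Linear(Z_DIM + ACT_DIM, 2 * E_DIM)             # q_p(e | z, a)
encoder_net = nn.Linear(Z_DIM + ACT_DIM + OBS_DIM, 2 * E_DIM)   # q_e(e | z, a, o')
reward_net  = nn.Linear(Z_DIM + E_DIM, 2)                       # q_r(r | z, e)
info_net    = nn.Linear(Z_DIM + E_DIM, 2 * INFO_DIM)            # q_i(i' | z, e)
update_net  = nn.GRUCell(ACT_DIM + E_DIM, Z_DIM)                # z' = u(z, a, e)

def gaussian(net, x):
    mean, logstd = net(x).chunk(2, dim=-1)
    return Normal(mean, logstd.exp())

def world_model_step(z, a, o_next):
    prior = gaussian(prior_net, torch.cat([z, a], -1))              # (12)
    encoder = gaussian(encoder_net, torch.cat([z, a, o_next], -1))  # (15)
    e = encoder.rsample()                                           # VAE latent
    reward_dist = gaussian(reward_net, torch.cat([z, e], -1))       # (13)
    info_dist = gaussian(info_net, torch.cat([z, e], -1))           # (14)
    z_next = update_net(torch.cat([a, e], -1), z)                   # (16)
    return z_next, prior, encoder, reward_dist, info_dist

z0 = torch.zeros(16, Z_DIM)
out = world_model_step(z0, torch.zeros(16, ACT_DIM), torch.randn(16, OBS_DIM))
```

The ELBO of Eq. (17) below can then be estimated from these outputs: the log-probabilities of \(r\) and \(i^{\prime}\) under the decoders, minus the KL divergence from the encoder to the prior (e.g. with `torch.distributions.kl_divergence`).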
In practice, we maximize the evidence lower bound (ELBO), a tight variational lower bound on the likelihood of reward and next information samples (Chung et al., 2015),
\[\mathop{\mathbb{E}}_{\begin{subarray}{c}p(h,a,r,i^{\prime})\\ f_{\theta}(z|h)\end{subarray}}\log q_{\theta}(r,i^{\prime}|z,a)\geq\mathop{\mathbb{E}}_{\begin{subarray}{c}p(h,a,r,i^{\prime},o^{\prime})\\ f_{\theta}(z|h)\end{subarray}}\Bigg{[}\mathop{\mathbb{E}}_{q_{\theta}^{e}(e|z,a,o^{\prime})}\big{[}\log q_{\theta}^{i}(i^{\prime}|z,e)+\log q_{\theta}^{r}(r|z,e)\big{]}-\mathrm{KL}\left(q_{\theta}^{e}(\cdot|z,a,o^{\prime})\parallel q_{\theta}^{p}(\cdot|z,a)\right)\Bigg{]}. \tag{17}\]
Despite the statistic \(f_{\theta}(\cdot|h)\) being stochastic, the ELBO objective ensures that it becomes predictive sufficient for the reward and next information. Note that when \(i=o\), it corresponds to Dreamer's world model and learning objective. Figure 2 shows, for a sample trajectory \((i_{0},o_{0},a_{0},r_{0},\ldots,i_{T},o_{T})\), the update of the statistic \(z\) according to the update function \(u_{\theta}\) and the encoder \(q_{\theta}^{e}\). Maximizing the ELBO maximizes the conditional log-likelihoods \(q_{\theta}^{r}(r|z,e)\) and \(q_{\theta}^{i}(i^{\prime}|z,e)\) of \(r\) and \(i^{\prime}\) for a sample of the encoder \(e\sim q_{\theta}^{e}(\cdot|z,a,o^{\prime})\), and minimises the KL divergence from \(q_{\theta}^{e}(\cdot|z,a,o^{\prime})\) to the prior distribution \(q_{\theta}^{p}(\cdot|z,a)\), as highlighted in orange.
### Informed Dreamer
While our informed world model does not learn the observation distribution, it can still generate imagined trajectories. Indeed, the VRNN only uses the latent representation \(e\sim q_{\theta}^{e}(\cdot|z,a,o^{\prime})\) of the observation \(o^{\prime}\), trained to reconstruct the information \(i^{\prime}\), in order to update \(z\) to \(z^{\prime}\). Consequently, we can use the prior distribution \(\hat{e}\sim q_{\theta}^{p}(\cdot|z,a)\), trained to minimise the KL divergence from \(q_{\theta}^{e}(\cdot|z,a,o^{\prime})\) in expectation, to generate latent trajectories. The Informed Dreamer algorithm uses this informed world model, a critic \(v_{\psi}(z)\), and a latent policy \(a\sim g_{\phi}(\cdot|z)\). Figure 3 illustrates the generation of a latent trajectory, along with imagined rewards \(\hat{r}\sim q_{\theta}^{r}(\cdot|z,e)\) and approximate values \(\hat{v}=v_{\psi}(z)\). During generation, the actions are sampled according to \(a\sim g_{\phi}(\cdot|z)\), and any RL algorithm can be used to maximize the imagined returns. Note that the mean imagined reward and estimated values are given by functions that are differentiable with respect to \(\phi\), such that the imagined return can be directly maximized by stochastic gradient ascent. In the experiments, we use an actor-critic approach for discrete actions and direct maximization for continuous actions, following DreamerV3 (Hafner et al., 2023).
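The sketch below illustrates such a latent imagination roll-out: latents are drawn from the prior (no observation is ever reconstructed) and actions from the latent policy. It is a self-contained toy of ours with assumed network sizes, not the DreamerV3-based implementation.

```python
# Sketch of imagining latent trajectories with the informed world model:
# the prior replaces the encoder, so no observation is reconstructed.
# Networks and sizes are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, Categorical

Z_DIM, E_DIM, ACT_DIM = 32, 8, 4
prior_net  = nn.Linear(Z_DIM + ACT_DIM, 2 * E_DIM)   # q_p(e | z, a)
reward_net = nn.Linear(Z_DIM + E_DIM, 1)              # mean imagined reward
update_net = nn.GRUCell(ACT_DIM + E_DIM, Z_DIM)       # z' = u(z, a, e)
policy_net = nn.Linear(Z_DIM, ACT_DIM)                 # g_phi(a | z)
critic_net = nn.Linear(Z_DIM, 1)                       # v_psi(z)

def imagine(z, horizon=15):
    rewards, values = [], []
    for _ in range(horizon):
        action = Categorical(logits=policy_net(z)).sample()
        a = F.one_hot(action, ACT_DIM).float()
        mean, logstd = prior_net(torch.cat([z, a], -1)).chunk(2, -1)
        e_hat = Normal(mean, logstd.exp()).rsample()             # prior latent
        rewards.append(reward_net(torch.cat([z, e_hat], -1)))    # imagined reward
        z = update_net(torch.cat([a, e_hat], -1), z)             # next statistic
        values.append(critic_net(z))                             # estimated value
    return torch.stack(rewards), torch.stack(values)

imag_r, imag_v = imagine(torch.zeros(16, Z_DIM))  # 16 imagined trajectories
```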
Pseudocode for the adaptation of the Dreamer algorithm using this informed world model is given in Appendix C. We also detail some divergences of our formalization with respect to the original Dreamer algorithm (Hafner et al., 2023). Like DreamerV3, we use symlog predictions, a discrete VAE, KL balancing, free bits, reward normalisation, a distributional critic, and entropy regularization.
Finally, as shown in Figure 4, when deployed in the execution POMDP, the encoder \(q_{\theta}^{e}\) is used to compute the latent
Figure 3: Variational RNN: Bayesian graph of its evaluation when imagining a latent trajectory using policy \(g_{\phi}\) (dependence of \(q_{\theta}^{r}\) and \(v_{\psi}\) on \(z\) is omitted).
Figure 2: Variational RNN: Bayesian graph of its evaluation for a given trajectory at training time (dependence of \(q_{\theta}^{r}\) and \(q_{\theta}^{i}\) on \(z\) is omitted). The loss components are illustrated in orange.
representations of the observations and to update the statistic. The actions are then selected according to \(a\sim g_{\phi}(\cdot|z)\).
## 5 Experiments
In this section, we compare Dreamer to the Informed Dreamer on several control problems, formalized as informed POMDPs. We use the implementation of DreamerV3 released at github.com/danijar/DreamerV3 by the authors, and release our adaptation to informed POMDPs at github.com/glambrechts/informed-dreamer. For all environments, we use the same unique set of hyperparameters as in DreamerV3, including for the Informed Dreamer.
### Varying Mountain Hike
In the Varying Mountain Hike environments, the agent is tasked with walking through a mountainous terrain. There exist four versions of this environment, depending on the initial state distribution and the type of observation that is available. The agent has a position on a two-dimensional map and can take actions to move relative to its initial orientation. The initial orientation is either always North, or a random cardinal orientation, depending on the environment version. The initial orientation is never available to the agent, but the agent receives a noisy observation of its position or its altitude, depending on the environment version. The reward is given by its altitude relative to the mountain top, such that the goal of the agent is to obtain the highest cumulative altitude. Around the mountain top, states are terminal. The optimal policy therefore consists in going as fast as possible towards those terminal states while staying on the crests in order to get less negative rewards than in the valleys. We refer the reader to (Lambrechts et al., 2022) for a formal description of this environment, heavily inspired by the Mountain Hike of (Igl et al., 2018).
For this environment, we consider the position and initial orientation to be available as additional information. In other words, we consider the state-informed POMDP with \(i=s\). As can be seen from Figure 5, the speed of convergence of the policies is greatly improved when using the Informed Dreamer in this informed POMDP. Moreover, as shown in Table 1, the final performance of the policy is always better than or similar to that of the Dreamer algorithm.
### Flickering Atari
In the Flickering Atari environments, the agent is tasked with playing the Atari games (Bellemare et al., 2013) on a flickering screen. The dynamics are left unchanged, but the agent may randomly observe a blank screen instead of the game screen, with probability \(p=0.5\). While the classic Atari games are known to have low stochasticity and few partial observability challenges (Hausknecht and Stone, 2015), their flickering counterparts have constituted a classic benchmark in the partially observable RL literature (Hausknecht and Stone, 2015; Zhu et al., 2017; Igl et al., 2018; Ma et al., 2020). Moreover, given the recent advances in sample-efficiency of model-based RL approaches, we consider the Atari 100k benchmark, where only 100k actions can be taken by the agent to generate samples of interaction.
For these environments, we consider the RAM state of the simulator, a \(128\)-dimensional byte vector, to be available as additional information for supervision. This information vector is indeed guaranteed to satisfy the conditional independence of the informed POMDP: \(p(o|i,s)=p(o|i)\). Moreover, we postprocess this additional information by only selecting the subset of variables that are relevant to the
\begin{table}
\begin{tabular}{c c c c} \hline \hline Altitude & Varying & Uninformed & Informed \\ \hline False & False & \(\mathbf{-14.47\pm 03.27}\) & \(-14.56\pm 03.45\) \\ False & True & \(-19.84\pm 03.91\) & \(\mathbf{-17.87\pm 01.18}\) \\ True & False & \(-43.11\pm 59.89\) & \(\mathbf{-18.04\pm 11.94}\) \\ True & True & \(-90.04\pm 35.57\) & \(\mathbf{-54.07\pm 54.87}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Final non-discounted reward of Dreamer and Informed Dreamer on the Varying Mountain Hike environments.
Figure 4: Execution policy: Bayesian graph of its execution in the POMDP using the VRNN encoder \(q_{\theta}^{e}\) and update function \(u_{\theta}\) to condition the latent policy \(g_{\phi}\).
Figure 5: Uninformed Dreamer versus Informed Dreamer (\(i=s\)) on the Varying Mountain Hike environments: non-discounted return with respect to the number of million steps. Results show the mean, minimum and maximum values over four runs.
game that is considered, according to the annotations provided in Anand et al. (2019). Depending on the game, this information vector might contain the number of remaining opponents, their positions, the player position, its state, etc.
Figure 6 shows that the speed of convergence and the performance of the policies are greatly improved by considering additional information for three environments (Asteroids, Bowling, and Pong), while they are degraded for four others (Boxing, Frostbite, Hero and Ms Pacman) and left similar for the rest. The final non-discounted returns are given in Table 2, offering similar conclusions.
### Velocity Control
In the Velocity Control environments, we consider the standard DeepMind Control tasks (Tassa et al., 2018) where only the joint velocities are available as observations, and not the absolute positions, which is a standard benchmark in the partially observable RL literature (Han et al., 2019; Lee et al., 2020; Warrington et al., 2021). For these environments, we consider the complete state (including the positions) to be available as additional information.
Figure 7 shows that the speed of convergence and the performance of the policies are greatly improved in this benchmark, for nearly all of the considered tasks. Moreover, the final non-discounted returns are given in Table 3, and show that the policies obtained after one million time steps are generally better when considering additional information.
\begin{table}
\begin{tabular}{c c c} \hline \hline Task & Uninformed & Informed \\ \hline Asteroids & \(1085.21\pm 236.29\) & \(\mathbf{1620.98\pm 579.77}\) \\ Battle Zone & \(\mathbf{5863.99\pm 2081.67}\) & \(4258.01\pm 1000.00\) \\ Bowling & \(55.08\pm 13.08\) & \(\mathbf{90.33\pm 04.51}\) \\ Boxing & \(\mathbf{12.86\pm 03.21}\) & \(-0.53\pm 10.69\) \\ Breakout & \(0.33\pm 04.73\) & \(\mathbf{04.17\pm 01.53}\) \\ Frostbite & \(\mathbf{413.95\pm 377.40}\) & \(268.38\pm 490.85\) \\ Hero & \(\mathbf{4293.33\pm 2534.57}\) & \(313.27\pm 24.66\) \\ Ms Pacman & \(\mathbf{1262.75\pm 565.18}\) & \(923.11\pm 665.01\) \\ Pong & \(-19.24\pm 01.73\) & \(-\mathbf{0.98\pm 15.13}\) \\ Private Eye & \(-23.86\pm 57.74\) & \(\mathbf{448.28\pm 398.36}\) \\ Qbert & \(\mathbf{879.47\pm 378.32}\) & \(812.20\pm 1973.42\) \\ Seaquest & \(\mathbf{312.08\pm 80.83}\) & \(302.60\pm 231.80\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Final non-discounted reward of Dreamer and Informed Dreamer on the Flickering Atari environments.
Figure 6: Uninformed Dreamer versus Informed Dreamer (\(i=\phi(\text{RAM})\)) on the Flickering Atari environments: non-discounted return with respect to the number of million steps. Results show the mean, minimum and maximum values over four runs.
Figure 7: Uninformed Dreamer versus Informed Dreamer (\(i=s\)) on the Velocity Control environments: non-discounted return with respect to the number of million steps. Results show the mean, minimum and maximum values over four runs.
### Flickering Control
In the Flickering Control environments, the agent performs one of the standard DeepMind Control tasks from images, but through a flickering screen. As for the Flickering Atari environments, the dynamics are left unchanged, except that the agent may randomly observe a blank screen instead of the task screen, with probability \(p=0.5\). For these environments, we consider the state to be available as additional information, as for the Velocity Control environments.
Regarding this benchmark, considering additional information seems to degrade learning, generally resulting in worse policies. This suggests that not all information is worth learning to predict: some of it might be irrelevant to the control task and hinder the learning of optimal policies. The final returns are given in Table 4, and offer similar conclusions.
## 6 Conclusion
In this work, we introduced a new formalization, called the informed POMDP, for considering additional information available at training time for POMDPs. In this context, we proposed an objective for learning recurrent sufficient statistics for the optimal control. Next, we showed that this objective can be slightly modified to provide an environment model from which latent trajectories can be generated. We then adapted a successful model-based RL algorithm, known as Dreamer, with this informed world model, resulting in the Informed Dreamer algorithm. By considering several environments from the partially observable RL literature, we showed that this informed learning objective improves the convergence speed and quality of the policies in several environments. However, we also observed that this informed objective hurts performance in some environments, motivating further work in which particular attention is given to the design of the information \(i\).
\begin{table}
\begin{tabular}{c c c} \hline \hline Task & Uninformed & Informed \\ \hline Acrobot Swingup & \(166.42\pm 117.81\) & \(\mathbf{333.86\pm 147.49}\) \\ Cartpole Balance & \(\mathbf{988.09\pm 01.57}\) & \(943.18\pm 39.97\) \\ Cartpole Balance Sparse & \(971.12\pm 00.00\) & \(\mathbf{979.91\pm 00.00}\) \\ Cartpole Swingup & \(\mathbf{838.44\pm 23.23}\) & \(798.12\pm 28.26\) \\ Cartpole Swingup Sparse & \(485.90\pm 334.90\) & \(\mathbf{677.38\pm 96.19}\) \\ Cheetah Run & \(\mathbf{683.80\pm 53.87}\) & \(590.43\pm 22.62\) \\ Cup Catch & \(\mathbf{959.79\pm 12.75}\) & \(946.11\pm 19.66\) \\ Finger Spin & \(\mathbf{708.31\pm 397.54}\) & \(587.21\pm 188.07\) \\ Finger Turn Easy & \(755.08\pm 483.89\) & \(\mathbf{925.93\pm 20.07}\) \\ Finger Turn Hard & \(568.66\pm 491.80\) & \(\mathbf{887.85\pm 32.84}\) \\ Hopper Hop & \(\mathbf{279.92\pm 30.22}\) & \(213.99\pm 23.51\) \\ Hopper Stand & \(450.49\pm 504.36\) & \(\mathbf{774.22\pm 120.96}\) \\ Pendulum Swingup & \(\mathbf{797.12\pm 70.80}\) & \(741.94\pm 117.27\) \\ Reacher Easy & \(\mathbf{937.19\pm 16.79}\) & \(926.02\pm 67.70\) \\ Reacher Hard & \(\mathbf{732.34\pm 168.36}\) & \(556.36\pm 420.29\) \\ Walker Run & \(\mathbf{765.40\pm 21.11}\) & \(580.77\pm 39.79\) \\ Walker Stand & \(\mathbf{972.93\pm 39.72}\) & \(933.29\pm 96.17\) \\ Walker Walk & \(\mathbf{957.88\pm 26.84}\) & \(898.33\pm 36.68\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Final non-discounted reward of Dreamer and Informed Dreamer on the Flickering Control environments.
\begin{table}
\begin{tabular}{c c c} \hline \hline Task & Uninformed & Informed \\ \hline Acrobot Swingup & \(66.21\pm 52.25\) & \(\mathbf{163.01\pm 139.63}\) \\ Cartpole Balance & \(959.60\pm 08.13\) & \(\mathbf{967.45\pm 24.47}\) \\ Cartpole Balance Sparse & \(\mathbf{852.71\pm 53.15}\) & \(810.24\pm 248.14\) \\ Cartpole Swingup & \(667.95\pm 54.72\) & \(\mathbf{701.96\pm 88.14}\) \\ Cartpole Swingup Sparse & \(01.53\pm 03.46\) & \(\mathbf{28.48\pm 109.70}\) \\ Cheetah Run & \(\mathbf{619.95\pm 241.31}\) & \(543.41\pm 136.00\) \\ Cup Catch & \(732.09\pm 477.75\) & \(\mathbf{950.31\pm 48.63}\) \\ Finger Spin & \(626.15\pm 211.54\) & \(\mathbf{640.60\pm 233.99}\) \\ Finger Turn Easy & \(579.49\pm 447.18\) & \(\mathbf{849.73\pm 102.69}\) \\ Finger Turn Hard & \(451.75\pm 479.93\) & \(\mathbf{828.81\pm 132.77}\) \\ Hopper Hop & \(158.88\pm 13.78\) & \(\mathbf{167.22\pm 34.24}\) \\ Hopper Stand & \(361.82\pm 22.89\) & \(\mathbf{595.42\pm 198.06}\) \\ Pendulum Swingup & \(\mathbf{355.11\pm 406.69}\) & \(298.29\pm 479.81\) \\ Reacher Easy & \(931.37\pm 43.92\) & \(\mathbf{944.82\pm 44.94}\) \\ Reacher Hard & \(853.13\pm 102.10\) & \(\mathbf{954.89\pm 14.17}\) \\ Walker Run & \(430.21\pm 83.55\) & \(\mathbf{604.20\pm 75.88}\) \\ Walker Stand & \(883.65\pm 98.58\) & \(\mathbf{925.09\pm 56.47}\) \\ Walker Walk & \(867.97\pm 103.26\) & \(\mathbf{910.38\pm 21.88}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Final non-discounted reward of Dreamer and Informed Dreamer on the Velocity Control environments.
Figure 8: Uninformed Dreamer versus Informed Dreamer (\(i=s\)) on the Flickering Control environments: non-discounted return with respect to the number of million steps. Results show the mean, minimum and maximum values over four runs.
## Acknowledgements
The authors would like to thank our colleagues Pascal Leroy, Arnaud Delaunoy, Renaud Vandeghen and Florent De Geeter for their valuable comments on this manuscript. Gaspard Lambrechts gratefully acknowledges the financial support of the _Federation Wallonie-Bruxelles_ for his FRIA grant. Adrien Bolland gratefully acknowledges the financial support of the _Federation Wallonie-Bruxelles_ for his FNRS grant. Computational resources have been provided by the _Consortium des Equipements de Calcul Intensif_ (CECI), funded by the _Fonds de la Recherche Scientifique de Belgique_ (F.R.S.-FNRS) under Grant No. 2502011 and by the _Walloon Region_, including the Tier-1 supercomputer of the _Federation Wallonie-Bruxelles_, infrastructure funded by the _Walloon Region_ under Grant No. 1117545.
|
2310.04005 | Probing Fermi surface parity with spin resolved transverse magnetic
focussing | Measurements of the Fermi surface are a fundamental technique for determining
the electrical and magnetic properties of solids. In 2D systems, the area and
diameter of the Fermi surface is typically measured using Shubnikov-de Haas
oscillations and commensurability oscillations respectively. However, these
techniques are unable to detect changes in the parity of the Fermi surface
(i.e. when +k $\neq$ -k). Here, we show that transverse magnetic focussing can
be used to detect such changes, because focussing only measures a well defined
section of the Fermi surface and does not average over +k and -k. Furthermore,
our results show that focussing is an order of magnitude more sensitive to
changes in the Fermi surface than other 2D techniques, and could be used to
investigate similar Fermi surface changes in other 2D systems. | M. J. Rendell, S. D. Liles, S. Bladwell, A. Srinivasan, O. Klochan, I. Farrer, D. A. Ritchie, O. P. Sushkov, A. R. Hamilton | 2023-10-06T04:30:53Z | http://arxiv.org/abs/2310.04005v2 | # Probing Fermi surface shifts with spin resolved transverse magnetic focussing
###### Abstract
Transverse magnetic focussing is the solid state equivalent of a mass spectrometer. It is unique among 2D measurement techniques as it is able to measure a well defined section of the Fermi surface, making it possible to detect changes that would be averaged out over the whole Fermi surface. Here, we utilise this unique property to probe non-adiabatic spin dynamics and spin dependent scattering of holes. We combine spin-resolved magnetic focussing with an additional independent in-plane magnetic field and observe a change in focussing peak amplitude that is not symmetric with respect to the field direction (i.e. \(+B_{\parallel}\neq-B_{\parallel}\)), and is extremely sensitive to the magnitude of the in-plane magnetic field. We show that the magnetic focussing signal is extremely sensitive to small changes in the Fermi velocity, which can be used to detect small shifts in the Fermi surface caused by an in-plane magnetic field. We also find that focussing can be used to detect the proximity between spin-split Fermi surfaces, which cause non-adiabatic spin dynamics.
+
Footnote †: preprint: APS/123-QED
Techniques for measuring the Fermi surface have existed since the early 1930s [1; 2], and are a powerful way to probe electronic and magnetic properties of metals, semiconductors, superconductors and heavy fermion compounds [3; 4; 5; 6]. Magnetic focussing was originally proposed as a technique for probing the Fermi surface of metals [7; 8; 9], and is unique among methods for measuring 2D Fermi surfaces in that it uses point contacts as injection and detection points as a solid-state equivalent of a charge-mass spectrometer. This means focussing is able to measure a well defined section of the Fermi surface as the charge carriers do not complete a full orbit [10; 11; 12; 13; 14; 15; 16]. In addition, the use of point contacts makes focussing extremely sensitive, allowing it to probe spin and charge dynamics including branched electron flow, small-angle scattering, spin separation and spin precession [17; 18; 19; 20; 21; 22].
Here we show that magnetic focussing of holes is an extremely sensitive probe of the spin-dependent Fermi surface and spin dynamics in 2D systems. We use focussing to observe non-adiabatic spin dynamics and spin-dependent scattering created by in-plane magnetic fields (B\({}_{\parallel}\)). We observe a change in focussing peak amplitude that is non-symmetric with respect to the polarity of the in-plane magnetic field (+B\({}_{\parallel}\neq\) -B\({}_{\parallel}\)). The change in peak amplitude can be explained by B\({}_{\parallel}\) causing a shift in the spin-split 2D Fermi surfaces. For smaller in-plane fields (B\({}_{\parallel}<\) 4T) the change in peak amplitude is monotonic, and is consistent with B\({}_{\parallel}\) changing the Fermi velocity and hence the scattering rate of the spin states. The amplitude change is visible for magnetic fields as small as B\({}_{\parallel}\) = 0.1T, demonstrating the sensitivity of focussing. At larger in-plane fields (B\({}_{\parallel}>\) 4T) a significant shift in the focussing peaks is observed for one polarity of B\({}_{\parallel}\). This is a result of the spin-split Fermi surfaces touching, creating non-adiabatic spin dynamics. Both of these effects are only visible in focussing measurements, as they would usually be averaged out over the full Fermi surface.
The magnetic focussing sample is fabricated on a GaAs/AlGaAs heterostructure with a 15nm GaAs quantum well confining the 2D hole gas (2DHG). The 2DHG is induced in accumulation mode (no doping) by applying a negative voltage to an overall top gate. Figure 1 shows an SEM of the sample with lithographic split gates used to define the focussing geometry. The overlay indicates the electrical measurement setup and the orientation of the in-plane magnetic field. To perform focussing, a constant current of holes (I\({}_{\rm SD}\) = 5nA) is injected through a quantum point contact (QPC), and the resulting focussing signal is measured as a voltage across a second QPC. This is performed as a four terminal measurement with a pair of lock-in amplifiers at low frequency (17 Hz). When the perpendicular magnetic field (B\({}_{\rm Focus}\)) is such that the focussing diameter (d\({}_{\rm Focus}\)) is equal to the spacing between QPCs, a peak is observed in the focussing voltage. These peaks occur when the magnetic field is an integer multiple of [10]
\[B_{\rm Focus}=\frac{2\hbar k_{F}}{ed_{\rm Focus}}\]
where k\({}_{F}\) is the Fermi momentum. All measurements in this work use a focussing diameter d\({}_{\rm Focus}\) = 800nm at a 2D density of n\({}_{\rm 2D}\) = 1.89x10\({}^{11}\) cm\({}^{-2}\) (V\({}_{\rm TG}\) = -1.35V). The QPCs have lithographic dimensions of 300x300nm and are biased to G=2e\({}^{2}/h\) to inject and detect both spin polarisations. The sample is measured in a He dilution system with a 9/5/1 T vector magnet at base temperature (30mK). In-plane and out-of-plane fields are also measured using Hall sensors on the sample probe to correct for any magnet hysteresis.
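As a rough order-of-magnitude check (our own estimate, assuming a spin-degenerate circular Fermi surface so that \(k_F=\sqrt{2\pi n_{2D}}\); the Rashba splitting discussed below shifts the two spin peaks around this value), the first focussing field expected for the quoted density and QPC spacing is:

```python
# Rough estimate of the first focussing field for the quoted geometry,
# assuming a spin-degenerate circular Fermi surface (k_F = sqrt(2*pi*n_2D)).
import numpy as np

hbar = 1.0546e-34      # J s
e = 1.602e-19          # C
n_2d = 1.89e11 * 1e4   # m^-2  (1.89e11 cm^-2)
d_focus = 800e-9       # m

k_f = np.sqrt(2 * np.pi * n_2d)            # ~1.1e8 m^-1
b_focus = 2 * hbar * k_f / (e * d_focus)   # first focussing peak
print(f"k_F ~ {k_f:.2e} m^-1, B_Focus ~ {b_focus:.2f} T")   # ~0.18 T
```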
In GaAs hole systems, spin-orbit interactions have a significant effect on magnetic focussing measurements.
The Rashba spin-orbit interaction causes a spin splitting of the first 2D subband which results in a different momentum for each spin. Figure 1 b) shows the spin-split first 2D subband for holes in GaAs, with the red and blue bands representing the different spins. The splitting of the 2D subband results in a difference of momentum between the spins (k\({}_{+}\) and k\({}_{-}\)) at the Fermi energy (horizontal dashed line). The different momentum creates a different focussing trajectory for each spin, which splits the first focussing peak [21]. The splitting of the 2D subband can also create a different scattering rate for each spin, as it changes the slope of each subband and hence the velocity [20]. The relatively symmetric quantum well heterostructure used in this work allows for visible spin splitting while also giving a similar scattering rate for both spin states.
Figure 1 c) shows focussing with no in-plane magnetic field. A double peak is observed consistent with spin-resolved focussing. A double Gaussian fit to the split peak (red and blue peaks in Fig. 1 c) shows that both peaks have a similar amplitude in the absence of an in-plane magnetic field. Figure 1 d) shows the first focussing peak with a magnetic field applied in-plane parallel to the QPC current direction (B\({}_{\parallel}\)). With B\({}_{\parallel}\) = +2T (top trace - blue), a clear change in the peak amplitude is observed compared to the peaks with no in-plane field (black centre trace). This change in peak amplitude would typically be interpreted as a change in the spin polarisation [21]. However, when the direction of B\({}_{\parallel}\) is reversed to -2T (bottom trace in Fig. 1 d) the change in peak amplitude is not symmetric. This is not consistent with a change in spin polarisation as the Zeeman splitting should be the same for \(\pm\)B\({}_{\parallel}\).
To rule out any Zeeman or spin polarisation effects, we next measure the change in peak amplitude with small in-plane magnetic fields. Figure 2 a) shows the results of focussing with small B\({}_{\parallel}\) applied in 0.1T increments up to \(\pm\)0.5T. A double Gaussian is fitted to each peak and the amplitude as a function of B\({}_{\parallel}\) is plotted in Fig. 2 b). A change in amplitude of the spin peaks is observed for fields as small as \(\pm\)0.1T, far too small to be caused by a
Figure 1: **Spin-resolved focussing with in-plane magnetic fields a)** SEM of the focussing sample. The overlay shows the orientation of the in-plane (B\({}_{\parallel}\)) and out-of-plane (B\({}_{Focus}\)) magnetic fields, as well as the electrical setup for measurements. Red and blue semicircles indicate the spin-split focussing trajectories. **b)** The first 2D subband for holes. A Rashba spin-orbit interaction causes a spin-splitting of the subband resulting in two different momenta (k\({}_{+}\) and k\({}_{-}\)) at the Fermi energy (horizontal dashed line). This difference in momentum results in different focussing trajectories for each spin. **c)** The spin split first focussing peak resulting from the two spin-dependent focussing trajectories (labelled with red circle and blue square). By fitting a double Gaussian to the first focussing peak the amplitude of both peaks can be extracted. **d)** Spin split focussing with an in-plane magnetic field. The change in peak amplitude is not symmetric when the direction of the in-plane field is reversed.
Figure 2: **Focussing with small B in-plane parallel to the QPC orientation.****a)** Shows the focussing signal for different in-plane fields of \(\pm\) 0.5 T in 0.1 T steps. Central black trace is for zero field, top solid traces are for field parallel to the QPC current and bottom dashed traces are for field antiparallel. Data has been vertically offset for clarity. **b)** The amplitude of each spin peak from a double Gaussian fit to the data in a). **c)** The calculated energy dispersion of one of the spin-split 2D hole subbands for B\({}_{\parallel}\) = +0.1T, 0T and -0.1T using Eq 1. The dashed horizontal line is the Fermi energy. **d)** The change in velocity of the spin subbands as a function of B\({}_{\parallel}\). The velocity (v\({}_{B}\)) is normalised by the average velocity of the two spins at B\({}_{\parallel}\) = 0 (v\({}_{\text{B}=0}\)).
change in spin polarisation. Even at 0.5T the change in peak amplitude (\(\sim\) 10%) is significantly larger than the Zeeman energy (\(\sim\)1% of E\({}_{\text{F}}\)) and therefore is too small to be a Zeeman effect. Instead, we consider a shift in the Rashba spin-splitting of the 2D subbands caused by B\({}_{\parallel}\).
The Hamiltonian for the 2D subbands is of the form
\[\mathcal{H}=\frac{\mathbf{p^{2}}}{2m^{*}}+\frac{i\alpha}{2}( \sigma_{+}p_{-}^{3}-\sigma_{-}p_{+}^{3})+\frac{g_{1}\mu_{B}}{2}(B_{+}p_{+}^{2} \sigma_{-}+B_{-}p_{-}^{2}\sigma_{+})\\ +\frac{g_{2}\mu_{B}}{2}(B_{-}p_{+}^{4}\sigma_{-}+B_{+}p_{-}^{4} \sigma_{+}) \tag{1}\]
where \(\frac{i\alpha}{2}(\sigma_{+}p_{-}^{3}-\sigma_{-}p_{+}^{3})\) is the Rashba spin-orbit term, \(\frac{g_{1}\mu_{B}}{2}(B_{+}p_{+}^{2}\sigma_{-}+B_{-}p_{-}^{2}\sigma_{+})+ \frac{g_{2}\mu_{B}}{2}(B_{-}p_{+}^{4}\sigma_{-}+B_{+}p_{-}^{4}\sigma_{+})\) are the Zeeman terms due to the in-plane magnetic field and \(B_{\pm}=B_{x}\pm iB_{y}\). \(B_{\parallel}\) causes a small shift in the 2D spin subbands, which is in opposite directions for the two subbands. Figure 2 c) shows the calculated energy dispersion of one of the 2D hole subbands using Eq. 1. There is a small shift in the subband dispersion, which is not symmetric for \(\pm\)B\({}_{\parallel}\). This shift is too small to cause a measurable change in the location of the magnetic focussing peaks; however, it will still cause a change in the subband curvature [18; 23; 24]. The change in subband curvature causes the Fermi velocity (v\({}_{\text{F}}\)) to change along the hole trajectory.
\[R_{Focus}\propto e^{-\pi d/(2v_{\text{F}}\tau_{\text{Focus}})} \tag{2}\]
where \(d\) is the focussing diameter and v\({}_{\text{F}}\tau_{\text{Focus}}\) is the characteristic focussing scattering length [11; 20; 25]. Figure 2 d) shows the calculated change in v\({}_{\text{F}}\) for small B\({}_{\parallel}\). This matches the trend in focussing peak amplitude shown in Fig. 2 b). This change in focussing peak amplitude for small B\({}_{\parallel}\) demonstrates the high sensitivity of focussing to small changes in the Fermi surface.
Next, we consider shifts in the Fermi surface caused by large B\({}_{\parallel}\). Figure 3 a) shows the evolution of the first focussing peak with a large in-plane magnetic field applied parallel to the QPC current direction (B\({}_{\parallel}\)). As B\({}_{\parallel}\) increases in magnitude the peak amplitude changes, which would typically interpreted as a change in spin polarisation. Again, when the polarity of B\({}_{\parallel}\) is reversed (Fig. 3 b), the change in peak amplitude is not symmetric. This is not consistent with a change in spin polarisation as Zeeman splitting should be the same for \(\pm\)B\({}_{\parallel}\). To make this clearer, the amplitude of the spin peaks is plotted as a function of B\({}_{\parallel}\) in Fig. 3 c). The peak amplitude is clearly not symmetric around B\({}_{\parallel}=0\), and similar results are obtained when considering the peak area. If the change in peak amplitude was due to Zeeman spin polarisation there should be a monotonic response in the amplitude. This is not visible in Fig. 3 c), providing further evidence that the change in peak amplitude is not a Zeeman effect or a change in spin polarisation. In addition, the asymmetry in the amplitude change is not an artefact of the setup, as it meets the Onsager reciprocity conditions (\(\pm\)B\({}_{\parallel}\) symmetry is restored if the field direction is reversed and the current and voltage probes are swapped - see supplementary info).
The non-monotonic change in the focussing peak amplitude can be explained by a shift in the spin-split Fermi surfaces caused by B\({}_{\parallel}\). Figure 4 shows the calculated shift in the spin-split Fermi surfaces using Eq. 1. Figure 4 a) shows the section of the Fermi surface measured by focussing in the absence of B\({}_{\parallel}\). The red and blue lines indicate the calculated spin-split Fermi surface. The solid section corresponds to the focussing trajectory, while the dashed side does not contribute to focussing. At B\({}_{\parallel}=0\)T the spacing between the blue and red subbands is the same for all points on the Fermi surface. Figure 4 b) shows the change in Fermi surface for B\({}_{\parallel}=+4\)T. The in-plane field causes the blue Fermi surface to shift towards k\(=0\), while the red surface is shifted away. Since only the solid parts of the Fermi surfaces contribute to focussing, these paths move further apart, leading to more adiabatic transport. When the direction of B\({}_{\parallel}\) is reversed (Fig. 4 c), the blue Fermi surface shifts away from k\(=0\), while the red surface shifts towards k\(=0\). The solid sections of the Fermi surfaces now almost touch, allowing mixing between the spin states. This results in non-adiabatic spin evolution [26] and causes a significant shift in the focussing peaks at B\({}_{\parallel}=-4\)T as shown in Fig. 3 c). This asymmetry in the Fermi surface shift is not observed in other 2D measurements (e.g. Shubnikov-de Haas oscillations) since they sample the full Fermi surface and hence would see the same result for +B\({}_{\parallel}\) and -B\({}_{\parallel}\)
In summary, we have used magnetic focussing to measure a shift in spin-split Fermi surfaces caused by B\({}_{\parallel}\). For small B\({}_{\parallel}\), the Fermi surface shift results in a change in velocity and hence scattering along each focussing trajectory. At large B\({}_{\parallel}\), Fermi surface shifts lead to non-adiabatic transport in one direction of B\({}_{\parallel}\). This non adiabatic transport causes a significant shift in the focussing peaks. Neither of these effects can be explained by a change in spin polarisation or a Zeeman effect. These results show the sensitivity of magnetic focussing as a technique for probing changes in the 2D Fermi surface. In addition, these effects can only be measured via magnetic focussing as it is able to probe a section of the Fermi surface, unlike other 2D measurements such as Shubnikov-de Haas oscillations.
The authors would like to thank U. Zulicke and Z. Krix for many valuable discussions. Devices were fabricated at the UNSW node of the Australian National Fabrication Facility (ANFF). This research was funded by the Australian Government through the Australian Research Council Discovery Project Scheme; Australian Research Council Centre of Excellence FLEET (project number CE170100039); and
by the the UK Engineering and Physical Sciences Research Council (Grant No. EP/R029075/1). All experimental data and calculation code is available at [http://dx.doi.org/10.5281/zenodo.8368876](http://dx.doi.org/10.5281/zenodo.8368876)
|
2302.13768 | High speed silicon photonic electro-optic Kerr modulation | Electro-optic silicon-based modulators contribute to ease the integration of
high-speed and low-power consumption circuits for classical optical
communications or quantum computers. However, the inversion symmetry in the
silicon crystal structure inhibits the use of Pockels effect. An electric
field-induced optical modulation equivalent to a Pockels effect can
nevertheless be achieved in silicon by the use of DC Kerr effect. Although some
theoretical and experimental studies have shown its existence in silicon, the
DC Kerr effect in optical modulation have led to a negligible contribution so
far. This paper reports demonstration of high-speed optical modulation based on
the electric field-induced linear electro-optic effect in silicon PIN junction
waveguides. The relative contributions of both plasma dispersion and Kerr
effects are quantified and we show that the Kerr induced modulation is dominant
when a high external DC electric field is applied. Finally, the high-speed
modulation response is analyzed and eye diagram up to 100 Gbits/s in NRZ format
are obtained. This work demonstrates high speed modulation based on Kerr effect
in silicon, and its potential for low loss, quasi-pure phase modulation. | Jonathan Peltier, Weiwei Zhang, Leopold Virot, Christian Lafforgue, Lucas Deniel, Delphine Marris-Morini, Guy Aubin, Farah Amar, Denh Tran, Xingzhao Yan, Callum G. Littlejohns, Carlos Alonso-Ramos, David J. Thomson, Graham Reed, Laurent Vivien | 2023-02-27T13:49:32Z | http://arxiv.org/abs/2302.13768v1 | # High speed silicon photonic electro-optic Kerr modulation
###### Abstract
Electro-optic silicon-based modulators contribute to ease the integration of high-speed and low-power consumption circuits for classical optical communications or quantum computers. However, the inversion symmetry in the silicon crystal structure inhibits the use of Pockels effect. An electric field-induced optical modulation equivalent to a Pockels effect can nevertheless be achieved in silicon by the use of DC Kerr effect. Although some theoretical and experimental studies have shown its existence in silicon, the DC Kerr effect in optical modulation have led to a negligible contribution so far. This paper reports demonstration of high-speed optical modulation based on the electric field-induced linear electro-optic effect in silicon PIN junction waveguides. The relative contributions of both plasma dispersion and Kerr effects are quantified and we show that the Kerr induced modulation is dominant when a high external DC electric field is applied. Finally, the high-speed modulation response is analyzed and eye diagram up to 100 Gbits/s in NRZ format are obtained. This work demonstrates high speed modulation based on Kerr effect in silicon, and its potential for low loss, quasi-pure phase modulation.
## 1 Introduction
Integrated electro-optic modulators are key component in systems such as classical and quantum optical communications, photonics-based quantum computing and sensing. These systems target high-speed and low power consumption optical modulators. Silicon (Si) modulators, which rely primarily on the plasma dispersion effect [1], are intrinsically limited in speed due to their high RC constant [2]. Si modulators relying on the Pockels effect could overcome these limitations to produce a fast and pure phase modulation. Since silicon does not have a natural \(\chi^{(2)}\) due to its centrosymmetric structure, such modulation cannot be achieved directly except by straining the crystal lattice [3] leading to a low resulting Pockels coefficient. The integration of high-\(\chi^{(2)}\) materials on the Si platform has been widely considered. These include doped polymers, Barium Titanate (BTO) [4], Lead Zirconate Titanate (PZT) [4] or lithium niobate (LN) [4]. These approaches require the development of hybrid or heterogeneous integration processes which increase the technology complexity. An electro-optic modulation in Si can also be achieve through DC Kerr effect that electrically induces an effective \(\chi^{(2)}\) which can be hence exploited to vary the refractive index by applying an electrical modulation superimposed to a static field. DC Kerr effect has been studied in bulk silica [5], bulk silicon [6, 7], silicon interface [8], bulk antiferromagnetic NiO [9] and in integrated platforms including silicon-organic hybrid [10] silicon-rich nitride [11], silicon rich carbide [12] and in silicon nitride [13]. It has also been studied in the silicon platform for electric field-induced (EFI) second-harmonic generation (EFISHG) [14], electro-optic (EO) modulation (EOM) [15, 16], slow light regime [17] and in cryogenic experiments [18]. However, the high-speed EOM in [15, 16, 17] using PN junctions led to a plasma dispersion effect that has a higher contribution to the modulation than the DC Kerr effect. While the DC Kerr effect has been well studied in the DC regime, no assessment discriminating the contribution of the DC Kerr and plasma dispersion modulation in the dynamic regime has been reported to our knowledge. This paper presents a comprehensive analysis of the DC Kerr effect induced in a PIN diode inserted in a silicon Mach-Zehnder Interferometer (MZI) in both static and dynamic regimes. Data transmission has been analyzed up to 100 Gbits/s in Non-Return-to-Zero (NRZ) format. An experimental method has been developed to assess the relative contribution of plasma dispersion from the Kerr effect in the dynamic regime.
The DC Kerr effect, also known as electric field-induced Pockels effect, originates from the third-order nonlinear susceptibility tensor \(\chi^{(3)}\) in presence of a static electric field. The refractive index change induced by Kerr effect when a static electric field \(F_{DC}\) and an RF field \(F_{RF}\cos\Omega t\) are applied to the PIN junction is given by [10]:
\[\Delta n(t)=\] \[\frac{3\chi^{(3)}}{2n_{si}}(F_{DC}^{2}+\frac{1}{2}F_{RF}^{2}+2F_ {DC}F_{RF}\cos\Omega t+\frac{1}{2}F_{RF}^{2}\cos 2\Omega t) \tag{1}\]
with \(\Omega=2\pi f\), \(f\) the RF frequency, \(n_{si}=3.48\) the silicon refractive index and \(\chi^{(3)}=2.8\times 10^{-19}\) m\({}^{2}\).V\({}^{-2}\) at \(\lambda=1.55\) um, for a silicon waveguide with a cross-section oriented along the crystallographic axis [110][19, 20]. Eq. (1) exhibits three kinds of dependencies. The first one corresponds to the static refractive index growing with the square of the field amplitudes that will be called later DC Kerr effect concerning \(F_{DC}\). The second one relies on an index modulation at an angular frequency \(\Omega\) which has its amplitude growing with the product of the DC and RF fields amplitudes. It will be called later electric field-induced (EFI) linear EO effect. At last an index modulation at a \(2\Omega\) component exhibits an amplitude growing with the square of the RF field amplitude alone. It will be called later quadratic EO effect.
## 2 Results and discussions
Static and dynamic studies are conducted to distinguish Kerr effects from that of plasma dispersion on the index variation in three different unbalanced Mach-Zehnder modulators (MZMs). They consist of either PN or PIN junctions named PN, PIN2, PIN3 and their respective intrinsic region width are w=0, w=0.33 and 1.05 um (Fig. 1). Each junction waveguide has the same cross-sectional design with a 450 nm width, a 220 nm height, and a 100 nm slab thickness, suitable for the propagation of a single TE polarization mode. The unbalancing of the MZMs is realized by a length difference \(\Delta L=200\) um between the arms leading to a passive phase shift \(\Delta\theta=2\pi/\lambda n_{g}\Delta L\) with \(n_{g}=3.6\), the group index of our waveguide. The operating point of the MZM can thus be adjusted at the quadrature (\(\Delta\theta=\pi/2\)) without the need of heaters by only tuning the laser wavelength around 1550 nm.
### Measurement of the DC Kerr modulation
The first experiments focus on the comparison between the three junctions in MZMs under a DC bias voltage only. The variation of the effective index of the guided mode (\(\Delta n_{DC}\)) as a function of the reverse DC voltage (\(V_{DC}\)) applied to the junction is obtained by measuring the shift of the resonance wavelength \(\Delta\lambda_{r}\):
\[\Delta n_{DC}(V_{DC})=\frac{\lambda_{r}\Delta\lambda_{r}(V_{DC})}{FSR(\lambda _{r})L} \tag{2}\]
with \(\lambda_{r}\) the resonance wavelength, \(FSR(\lambda_{r})\) the free spectral range of the MZM and \(L\) the length of the electrodes all along the junctions. See Supplement 1 section S1. Optical and electro-optic simulations taking into account the DC Kerr and plasma dispersion effects were performed to design the three different PN/PIN waveguides. The measured and simulated variations of the effective index of the three junctions are presented in Fig. 1. Total refractive index modulations are in good agreement with the simulations. By increasing the width of the intrinsic region of the junction to 1.05 um, the contribution of the plasma dispersion effect is significantly reduced to become minor compared to the DC Kerr effect, while it is dominant for the PN junction waveguide. The DC Kerr effect can thus contribute up to 82% of the total index change in the PIN3 junction waveguide.
### Measurement of the EFI linear EO effect
The study of the electric field-induced (EFI) linear EO effect in the \(\Omega\) angular frequency modulation focuses on the PIN3 junction, which shows a dominant contribution of the DC Kerr effect in the effective index change (four times greater than the contribution from plasma dispersion). A common DC bias voltage is applied to both arms of the MZM and a sinusoidal RF signal (\(f=5\) GHz) is split with two opposite phases to be applied in push-pull configuration. The optical wavelength is chosen to operate at the quadrature point. A simplified schematic view of the experimental setup to characterize the EOM is provided in Fig. 2(a). It is worthwhile to notice that the push-pull configuration of the MZM driving leads to assess the index variation versus voltage as an equivalent efficiency of a single path because the measured index variation is twice the index variation in each arm while the considered voltage is twice of what it is applied to each arm. The RF analysis in push-pull configuration leads moreover to the cancellation of DC shift terms from Eq. (1) of the index variation in the MZM output measurements because the shift is the same in each arm.
The transfer function of the MZM as a function of the phase shift \(\Delta\phi(t)\) is:
\[\frac{P(t)}{P_{0}}=\frac{1}{2}\left\{1+\cos[\Delta\phi(t)+\Delta\theta]\right\} \tag{3}\]
with \(P_{0}\) the maximum output power of the MZM.
The EOM response at the \(\Omega\) angular frequency can be approximated at the quadrature point (\(\Delta\theta=\pi/2\)) as \(P_{\Omega}(t)=\frac{1}{2}P_{\Omega}\Delta\phi(t)\) with \(\Delta\phi(t)=m_{\Omega}\cos\Omega t\), \(m_{\Omega}\) the modulation index, \(m_{k}\) the EFI linear EO modulation index and \(m_{c}\) the carrier modulation index:
\[m_{\Omega}=m_{k}+m_{c} \tag{4}\]
\[m_{k}=\Gamma\frac{2\pi}{\lambda}L_{eff1}\frac{3\chi^{(3)}}{n_{si}}F_{DC}F_{RF} \tag{5}\]
with the mode overlap \(\Gamma=0.87\) in the Si waveguide, the effective length \(L_{eff1}=[1-exp(-\alpha_{RF}L)]/\alpha_{RF}\) and the RF field loss \(\alpha_{RF}=4.3\) dB.cm\({}^{-1}\). See Supplement 1 section S2 and S3 for more details. Both the EFI linear EO effect and the plasma dispersion effect are expected to increase linearly with the RF amplitude. Only the EFI linear EO effect is expected to increase with the applied reverse DC bias following the Eq. (5). The dynamic carrier modulation is expected to decrease with \(V_{DC}\) considering a small signal approximation on its static response.
For a 6 mm long junction, a linear behavior of the effective index change \(\Delta n_{\Omega}=m_{\Omega}\lambda/(2\pi L_{eff1})\) as a function of the applied reverse DC bias and RF amplitude is observed in Fig. 2(b) and Fig. 2(c), respectively. This is a clear signature of the EFI linear EO effect. In Fig. 2(b), the non-zero intersection of \(\Delta n_{\Omega}\) at \(V_{DC}=0\) V indicates that carriers also contributed to the modulation in addition to the EFI linear EO effect at low reverse DC voltages. The slope of the curve allows to determine the \(\chi^{(3)}\) coefficient (\(\chi^{(3)}=1.0\times 10^{-19}\) m\({}^{2}\).V\({}^{-2}\)). See Supplement 1 section S4 and S5 for more information. This value is slightly underestimated (Supplement 1 section S5) due to the carriers
contribution having a negative evolution with \(V_{DC}\). However, it remains relatively close to the \(\chi^{(3)}\) values found in the literature.
### Measurement of the quadratic EO effect
The quadratic EO effect at the angular frequency of \(2\Omega\) can only be observed in a single-drive configuration, as it is proportional to the square of the electric field. We studied the transfer function at angular frequencies of \(\Omega\) and \(2\Omega\) to separate the modulation behavior resulting from the distortion produced by the nonlinear transfer function of the MZM (Eq. (3)) and the quadratic EO effect. A bandpass RF filter centered at \(\Omega\) was placed at the signal generator output insuring a very high rejection at \(2\Omega\). We considered the PIN3 junction where distortion due to the carrier absorption modulation is negligible.
The phase shift induced by the plasma dispersion and the Kerr effects can then be written as:
\[\Delta\phi(t)=m_{\Omega}\cos\Omega t+m_{2\Omega}\cos 2\Omega t \tag{6}\]
where \(m_{2\Omega}\) is the modulation index associated with the quadratic EO effect:
\[m_{2\Omega}=\Gamma\frac{2\pi}{\lambda}L_{eff2}\frac{3\chi^{(3)}}{4n_{si}}F_{RF} ^{2} \tag{7}\]
and \(L_{eff2}=[1-exp(-2\alpha_{RF}L)]/(2\alpha_{RF})\) is the effective length for the \(2\Omega\) component.
The \(\Omega\) and \(2\Omega\) components of the MZI spectral response can be written - after inserting the phase shift \(\Delta\phi(t)\) (Eq. (6)) in the MZM transfer function \(P(t)/P_{0}\) (Eq. (3)), performing a Jacobi-Anger expansion and neglecting inter-modulations - as follows:
\[\frac{P_{\Omega}(t)}{P_{0}}=\sin(\Delta\theta)J_{1}(m_{\Omega})\cos\Omega t \tag{8}\]
\[\frac{P_{2\Omega}(t)}{P_{0}}=\left[-\cos(\Delta\theta)J_{2}(m_{\Omega})+\sin(\Delta\theta)J_{0}(m_{\Omega})J_{1}(m_{2\Omega})\right]\cos 2\Omega t \tag{9}\]
where \(J_{n}(m_{\Omega})\) are the Bessel functions of the first kind.
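For reference, the expansion step relies on the standard Jacobi-Anger identities (a sketch of the intermediate algebra, not a full derivation):
\[\cos(m\cos x)=J_{0}(m)-2J_{2}(m)\cos 2x+\dots,\qquad\sin(m\cos x)=2J_{1}(m)\cos x-2J_{3}(m)\cos 3x+\dots\]
Expanding \(\cos[\Delta\phi(t)+\Delta\theta]\) with these identities, discarding inter-modulation products and harmonics above \(2\Omega\), and taking \(J_{0}(m_{2\Omega})\approx 1\) for a small quadratic modulation index collects the \(\cos\Omega t\) and \(\cos 2\Omega t\) terms into the form of Eq. (8) and Eq. (9) (up to the sign convention chosen for \(\Delta\theta\)).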
The modulation indices \(m_{\Omega}\) and \(m_{2\Omega}\) are determined by fitting the DC transmission and the spectral responses using Eq. (8) and Eq. (9) at fixed reverse DC and RF voltages. See Supplement 1 section S6.
The measurements performed on a 5 mm long PIN3 junction (Fig. 3(a)) show that the \(2\Omega\) component is induced by the quadratic EO effect and not by signal distortion (the modulator operates at quadrature). We can therefore extract the corresponding modulation index \(m_{2\Omega}\) from the response of the PIN3 junction. Note, however, that it is not possible to extract the \(m_{2\Omega}\) modulation index from the responses of the PN and PIN2 junctions because the distortion induced by carriers is too large. See Supplement 1 section S6. Using this method, the modulation indices \(m_{\Omega}\) and \(m_{2\Omega}\) are accurately extracted at different reverse DC and RF bias voltages for the PIN3 junction. Experimental results are compiled in Supplementary Table S1.
Fig. 3(b) shows the linear variation of the refractive index change \(\Delta n_{2\Omega}=m_{2\Omega}\lambda/(2\pi L_{eff2})\) as a function of the squared RF voltage (i.e., \(\Delta n_{2\Omega}\) increases quadratically with the RF voltage). This variation is independent of the applied reverse DC voltage, as expected for a quadratic EO effect. In addition, a linear fit of \(\Delta n_{2\Omega}\) with respect to \(F_{RF}^{2}\) is performed to extract the \(\chi^{(3)}\) coefficient (\(\chi^{(3)}=1.5\times 10^{-19}\) m\({}^{2}\).V\({}^{-2}\)). This value is close to the average value from the literature and is consistent with the value found in the previous section.
Figure 1: (a) Depiction of PN junction, (b) PIN with intrinsic region width \(\mathrm{w}=0.33\) μm (PIN2), and (c) PIN with \(\mathrm{w}=1.05\) μm (PIN3). (d) Effective refractive index changes of PN, (e) PIN2, and (f) PIN3 junctions versus the applied reverse DC bias voltage with respective MZM arm lengths of 2, 6 and 6 mm. Dots are the experimental measurements and lines correspond to the respective simulations of the whole modulation, of the DC Kerr and carrier modulations.
Figure 2: (a) Schematic view of the experimental setup used to measure the EOM from the MZM. DC voltage is applied to both arms; RF is either applied in single-drive or push-pull configuration. (EDFA: erbium-doped fiber amplifier). (b) Effective index variations measured in push-pull configuration versus the reverse DC bias for a fixed RF peak amplitude of 1.4 V, (c) versus the RF amplitude for three reverse DC biases.
Figure 3: The dots and the lines represent respectively the measurements and the corresponding fit or simulations. (a) Optical MZM transfer function for three electrical spectral components excluding intrinsic losses, with \(P_{0}\) the maximum output power, \(P_{DC}\) the static power, \(P_{\Omega}\) the modulation power at angular frequency \(\Omega\) and \(P_{2\Omega}\) at \(2\Omega\), for the PIN3 junction at reverse \(V_{DC}=6\) V, \(V_{RF}=2.0\) V. (b) Amplitude of the refractive index modulation at angular frequency \(2\Omega\) versus the applied voltage \(V_{RF}\) at frequency \(\Omega\) for reverse DC biases from 0 to 15 V. Whatever its value, \(V_{DC}\) induces no variation of \(\Delta n_{2\Omega}\). (c) Respective relative contributions of the index variation in the \(\Omega\) component from the EFI linear EOM and from the carrier modulation versus the applied reverse DC bias voltage.
Moreover, the measurements of the \(\Omega\) and \(2\Omega\) components of the spectral response can be used to calculate the EFI linear EOM contribution to the modulation at \(\Omega\) using Eq. (5) and Eq. (7):
\[m_{k}=4\frac{F_{DC}L_{eff1}}{F_{RF}L_{eff2}}m_{2\Omega} \tag{10}\]
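Eq. (10) follows directly from taking the ratio of Eq. (5) to Eq. (7), in which the mode overlap, the wavelength, \(\chi^{(3)}\) and the silicon index cancel:
\[\frac{m_{k}}{m_{2\Omega}}=\frac{\Gamma\frac{2\pi}{\lambda}L_{eff1}\frac{3\chi^{(3)}}{n_{si}}F_{DC}F_{RF}}{\Gamma\frac{2\pi}{\lambda}L_{eff2}\frac{3\chi^{(3)}}{4n_{si}}F_{RF}^{2}}=4\frac{F_{DC}L_{eff1}}{F_{RF}L_{eff2}}\]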
The DC electric field inside the PIN junction is estimated using \({F_{DC}=(V_{DC}+V_{bi})/w}\) with \(V_{bi}\) the built-in voltage and \(w\) the width of the intrinsic region [16]. See Supplement 1 section S4. The RF field is estimated from the small signal approximation \({F_{RF}\approx V_{RF}dF_{DC}/dV_{DC}}\).
The contributions to the \(\Omega\) spectral response of the EFI linear EOM (\(m_{k}/m_{\Omega}\)) and of the carrier modulation (\((m_{\Omega}-m_{k})/m_{\Omega}\)) are reported in Fig. 3(c). Above \(V_{DC}=5\) V, at a modulation frequency of 5 GHz, the EFI linear EO effect contribution to the modulation becomes greater than the carrier modulation and exceeds it by more than a factor of 3 at 15 V. A good agreement with the simulations of Fig. 1(f) is obtained.
### Eye diagram experiments
The data transmission characteristics of EO modulators based on the DC Kerr effect using the PIN3 diode have been analyzed. The DATA and \(\overline{\text{DATA}}\) signals from an SHF bit pattern generator were amplified and transmitted to the respective arms of the MZM in push-pull configuration. A schematic view of the setup is shown in Fig. 4(a).
First, optical eye diagrams were acquired at 10 Gbits/s on a digital communication analyzer (DCA) from a 6 mm long modulator with each arm driven at 4 \(V_{pp}\) and at different reverse DC bias voltages. The extinction ratio (ER) and the signal-to-noise ratio (SNR) of the modulated optical signal were computed by the DCA. ER is greatly improved by reverse biasing \(V_{DC}\) (Fig. 4(b)). Indeed, for a \(V_{DC}\) varying from 2 V to 30 V, the measured ER increases from 1.5 dB to 3.7 dB, and the SNR increases from 8.9 to 15.6. More eye diagrams as a function of \(V_{DC}\) are presented in Supplement 1 Fig. S3.
At higher data rates, the DC Kerr effect improves the transmission capability, reaching a maximum data rate of 40 Gbits/s for the same 6 mm long PIN3 modulator with each arm driven at 4 V\({}_{pp}\) (Supplement 1 Fig. S4(b)). Its speed is limited by the RF electrode bandwidth, which can be further improved by redesigning the traveling-wave electrodes to achieve an expected electro-optic bandwidth of about 40 GHz for 1 cm propagation length [21].
The bandwidth limitation of the DC Kerr effect for higher-speed optical modulation was then investigated on a shorter, 1 mm long PIN3 modulator with each arm driven at 2 V\({}_{pp}\). The eye diagram closes around 80 Gbits/s (Fig. 5(a)), which is the same speed limit as achieved by a 1 mm long conventional depletion modulator under the same test setup [22]. At 100 Gbits/s, the use of numerical 6-tap feed-forward equalization (FFE) opened the eye diagram (Fig. 5(b)), showing that such a DC Kerr modulator, associated with the proper equalizing equipment, could be promising for very high-speed modulation.
## 3 Conclusion
The electric field-induced Pockels effect (i.e. DC Kerr effect) has been observed in a Si PIN junction-based Mach-Zehnder modulator (MZM). The refractive index variations as a function of both reverse DC bias voltage and RF amplitude have been measured in the dynamic regime, showing a linear response with the DC bias voltage at a fixed RF amplitude. The refractive index modulations at the angular frequencies \(\Omega\) and \(2\Omega\) resulting from an applied RF signal at the angular frequency \(\Omega\) have been extracted to quantify the EFI linear EO effect contribution to the modulation. We have shown that above 5 V DC bias, the DC Kerr effect, rather than the plasma dispersion effect, is the dominant contribution to the high-speed modulation. Furthermore, optical modulation has been demonstrated up to 100 Gbits/s for a 1 mm long Mach-Zehnder modulator. Silicon modulators based on the electric field-induced linear EO modulation show promising characteristics for high-speed optical communications, but also for applications requiring low loss and pure phase modulation.
## 4 Methods
### Sample fabrication
The silicon MZI modulators were fabricated through the silicon photonics foundry CORNERSTONE [23], which provides detailed fabrication steps based on 8-inch 220 nm SOI wafers, together with doping information. The passive waveguides were etched using a 250 nm thick patterned PECVD oxide hard mask, which also protects the silicon core during the n-type implantation process. The junctions were optimized through the self-aligned doping steps of [23] for the studied PN and PIN configurations.
### Set-up for dynamic measurements
A T100S-HP tunable laser is used to inject light into the device via the grating couplers. A polarization controller is used to ensure TE-mode injection. A 90/10 splitter is used to separate the output power: 10% goes into a CT400 optical component tester to measure the DC optical power and 90% goes to a Keopsys KPS pre-booster set to output a constant 3 dBm power. The amplified modulated optical signal is collected using an Agilent 83440D photodiode and fed to an Anritsu MS2830A signal analyzer set to monitor either the \(\Omega\) or \(2\Omega\) component of the spectral response. A Keithley 2401 is used to bias the PIN junctions. The RF signals are generated using an Anritsu MG3694C signal generator. The signal is then combined with the DC bias voltage using an Anritsu V251 bias-tee. For push-pull experiments, the RF signal is split in half using an Anritsu V241C power splitter and a phase delay is introduced on one arm using a Waka 02X0518-00 phase shifter. ACP 50 GHz GSGSG RF probes are used to apply the DC and RF bias voltages to the travelling-wave electrodes. Measurements are done at the quadrature point by tuning the laser wavelength.
### Eye diagrams experimental set-up
The MZI modulators were differentially driven with combined \(V_{RF}\) and \(V_{DC}\) by using two high-voltage bias tees (SHF BT45R - HV100). The high-speed signals were generated from an SHF bit pattern generator and amplified to 4 V\({}_{pp}\) on each arm for modulation rates below 50 Gbits/s and to 2 V\({}_{pp}\) for higher modulation rates up to 100 Gbits/s. NRZ signals are sent to the MZI modulators via 67 GHz GSGSG probes and terminated with DC blocks and 50 Ω resistors. Measurements are done at the quadrature point. Eye diagrams are displayed using the averaging function of the DCA to reduce optical noise from the EDFA.
## Funding
EP/N013247/1, EP/T019697/1, UF150325
## Acknowledgment
The authors acknowledge the CORNERSTONE team of the University of Southampton for the device fabrication. J. Peltier acknowledges Victor Turpaud for fruitful discussions, and Quentin Chateiller and Bruno Garbin for the development of the Python package Autolab used in his experiments. This work was supported by funding from EPSRC Platform Grant (EP/N013247/1) and EPSRC Strategic Equipment Grant (EP/T019697/1). D. J. Thomson acknowledges funding from the Royal Society for his University Research Fellowship (UF150325).
## Disclosures
The authors declare no conflicts of interest.
## Data Availability
Data underlying the results presented in this paper are available from the corresponding authors upon reasonable request.
## Supplemental Document
See Supplement 1 for supporting content.
|
2308.08475 | Data Navigator: An accessibility-centered data navigation toolkit | Making data visualizations accessible for people with disabilities remains a
significant challenge in current practitioner efforts. Existing visualizations
often lack an underlying navigable structure, fail to engage necessary input
modalities, and rely heavily on visual-only rendering practices. These
limitations exclude people with disabilities, especially users of assistive
technologies. To address these challenges, we present Data Navigator: a system
built on a dynamic graph structure, enabling developers to construct navigable
lists, trees, graphs, and flows as well as spatial, diagrammatic, and
geographic relations. Data Navigator supports a wide range of input modalities:
screen reader, keyboard, speech, gesture detection, and even fabricated
assistive devices. We present 3 case examples with Data Navigator,
demonstrating we can provide accessible navigation structures on top of raster
images, integrate with existing toolkits at scale, and rapidly develop novel
prototypes. Data Navigator is a step towards making accessible data
visualizations easier to design and implement. | Frank Elavsky, Lucas Nadolskis, Dominik Moritz | 2023-08-16T16:28:36Z | http://arxiv.org/abs/2308.08475v1 | # Data Navigator: An Accessibility-Centered Data Navigation Toolkit
###### Abstract
Making data visualizations accessible for people with disabilities remains a significant challenge in current practitioner efforts. Existing visualizations often lack an underlying navigable structure, fail to engage necessary input modalities, and rely heavily on visual-only rendering practices. These limitations exclude people with disabilities, especially users of assistive technologies. To address these challenges, we present Data Navigator: a system built on a dynamic graph structure, enabling developers to construct navigable lists, trees, graphs, and flows as well as spatial, diagrammatic, and geographic relations. Data Navigator supports a wide range of input modalities: screen reader, keyboard, speech, gesture detection, and even fabricated assistive devices. We present 3 case examples with Data Navigator, demonstrating we can provide accessible navigation structures on top of raster images, integrate with existing toolkits at scale, and rapidly develop novel prototypes. Data Navigator is a step towards making accessible data visualizations easier to design and implement.
accessibility, visualization, tools, technical materials, platforms, data interaction
## 1 Introduction
While there is a growing interest in making data visualizations more accessible for people with disabilities, current toolkit and practitioner efforts have not risen to the challenge at scale. Major data visualization tools and ecosystems predominantly produce inaccessible artifacts for many users with disabilities. We believe this is largely a gap caused by a lack of underlying structure in most visualizations, failure to engage the input modalities used by people with disabilities, and over-reliance on visual-only rendering practices.
Users who are blind or low vision commonly use screen readers and users with motor and dexterity disabilities often do not use "pointer" (precise mouse and touch) based input technology when interacting with digital interfaces. Many users with motor and dexterity disabilities use discrete navigation controls, either sequentially using keyboard-like input, or directly using voice or text commands.
Most interactive visualizations simply focus on pointer-based input: they can be clicked or tapped, hovered, and selected in order to perform analytical tasks. This excludes non-pointer input technologies. These devices require consideration for the navigation structure and underlying semantics of a visual interface.
However, building navigable spatial and relational interfaces is a difficult task with current resources.
Raster images, arguably the most common format for creating and disseminating data visualizations, currently cannot be made into navigable structures. These are only described using alt text, which limits their usefulness to screen reader users.
Unfortunately, more accessible rendering formats like SVG with ARIA (accessible rich internet applications) properties are more resource intensive than raster approaches, like WebGL-powered HTML canvas or pre-rendered PNG files. SVG puts a burden on low-bandwidth users and a ceiling on how many data points can be rendered in memory.
In addition, ARIA itself has 2 major limitations. First, when added to interface elements, ARIA only provides _screen reader_ access, which means that developers must build a solution from scratch for other navigation input modalities. Second, ARIA's linear navigation structure can be time-consuming for screen reader users if a visualization has many elements. This may impede how essential insights and relationships are understood [14, 37, 38, 37, 19, 32].
Fig. 1: Data Navigator provides data visualization libraries and toolkits with accessible data navigation structures, robust input handling, and flexible semantic rendering capabilities.
Some emerging approaches have sought to address this serial limitation of data navigation and provide richer experiences for screen reader users [14, 37, 38, 47]. However, these approaches rely on a tree-based navigation structure which is often not an appropriate choice for visualizations of relational, spatial, diagrammatic, or geographic data. Many visualization structures are currently unaddressed.
Zong et al. stress that in order to realize richer, more accessible data visualizations, the responsibility must be shared by "toolkit makers," the practitioners who design, build, and maintain visualization authoring technologies [47]. Our contribution is towards that aim, to make more accessible data experiences easier to design and implement within existing visualization work.
We present Data Navigator. Data Navigator is a toolkit built on a graph data structure, within which a broad array of common data structures can be expressed (including list, tree, graph, relational, spatial, diagrammatic, and geographic structures). Data Navigator also exposes an interface that supports interactions via screen reader, keyboard, gesture-based touch, motion gesture, voice, as well as fabricated and DIY input modalities. Data Navigator provides expressive structure and semantic rendering capabilities as well as the ability for developers to use their own, preferred method of rendering.
Data Navigator builds upon human-studies motivated work on accessible navigation [38, 47] towards a more generalizable resource for visualization practitioners. We contribute a high-level system design for our node-edge graph-based solution as well as an implementation of this system on the web, using JavaScript, HTML, and CSS. Through our case examples we also demonstrate that our generalized approach is suitable for replication of existing best practices from other systems, integration into existing visualization toolkit ecosystems, and development of novel prototypes for accessible navigation. We illustrate how Data Navigator's use of generic edges, dynamic navigation rules, and loose coupling between navigation and visual encodings provides practitioners robust, expressive, control over their system designs.
## 2 Related Work
Our contribution is an attempt to bridge the gap between research and practice more effectively across broad ecosystems in order to enable deeper and more expressive accessible data navigation interfaces. Below we outline the prior research and standards that inform our project, a breakdown of existing visualization toolkit approaches to data navigation, and then accessible input device considerations.
### Accessibility research and standards in visualization
Research and standards are both somewhat limited by a strong bias towards visual disabilities. In _Chartability_, 36 of the 50 criteria related to accessible visualization considerations involve visual disabilities [10, 11]. Marriott et al. also found that visual disability considerations are the primary focus of data visualization literature [27], leaving the barriers that many other demographics face unstudied.
However, despite the heavy focus on visual disabilities, the work that does exist in the visualization community is deeply valuable and serves as an important starting point for our technical contribution.
#### 2.1.1 Accessible navigation design considerations
Zong et al.'s research, which was conducted as in-depth co-design work and validated in usability studies involving blind participants, presented a design space for accessible, rich screen reader navigation of data visualizations. They organized their design space into _structure_, _navigation_, and _description_ considerations and demonstrated example _structural_, _spatial_, and _direct_ tree-based approaches [47].
_Chart Reader_ also engaged these design space considerations in their co-design work on accessible data navigation structures [38]. We consider these design dimensions as the best starting point for our work, bridging the gap between research and toolkits.
There are additional research projects that have focused on accessible data navigation and interaction [14, 33, 34, 37]. These contributions explore a range of different interaction structures, including lists, trees, and tables of information as well as direct access methods such as voice interface commands and simple, pre-determined questions.
#### 2.1.2 Accessible visualization: understanding users
A wide array of emerging research projects investigate screen reader users' needs, barriers, and preferences, and offer guidelines, models, and considerations for creating accessible data visualizations [11, 32, 25, 4, 2]. Jung et al. offer guidance to consider the order of information in textual descriptions and during navigation [19]. Kim et al. collected screen reader users' questions when interacting with data visualizations, which could open the door for more natural language data interaction [20].
#### 2.1.3 Accessibility standards and guidelines
In the space of research, there has been a growing interest in developing guidelines for practitioners [8, 10] and even applying guidelines as a method of validation alongside human studies evaluations and co-design [11, 24, 25, 47]. Unfortunately, most accessibility standards and guidelines do not explicitly engage how to structure data navigation.
Despite this, existing accessibility standards bodies like the Web Content Accessibility Guidelines do stress the importance of accurate, functional semantics in order for screen reader users to know how to interact with elements [41]. For interactive visualizations this means that button-like or link-like behavior should expressly be made using elements that are semantically buttons and links. Our system should be capable of expressing meaningful semantics to users of assistive technologies.
### Visualization toolkits and technical work
Unfortunately while many data visualization toolkits offer some degree of accessible navigation and interaction capabilities to developers, very few toolkits currently out there offer control over the important aspects of accessible data navigation design. Replicating existing research and strategies, remediating toolkit ecosystems, and building novel prototypes are all difficult or impossible to do due to the current lack of toolkit capabilities.
Existing data visualization toolkits have 3 major limitations that we wanted to address in the design of Data Navigator:
1. **Built on visual materials**: toolkits produce either raster or SVG-based visualizations, neither of which are focused towards designing navigable, semantic structures. As a consequence, many visualizations are simply entirely inaccessible.
2. **Lacking relational expressiveness**: When data navigation _is_ provided, the navigation is based on either a tree or list structure (see Figure 2). The consequence of this limitation is that many other non-list and non-tree data relationships become difficult or impossible to represent without overly tedious navigation or inefficient architecture.
3. **Designed only for screen reader interaction**: When _accessible_ data navigation is provided, it is generally only made possible through SVG with ARIA (Accessible Rich Internet Application) attributes. ARIA is primarily only leveraged by screen readers [42]. If a data element can be clicked and performs some form of function, only direct pointer (mouse and touch) and screen reader users are included. The consequence of this is that a wide array of other input devices, many used as assistive technologies by people with motor and dexterity disabilities, are excluded.
#### 2.2.1 Rich, tree-based approaches
De-coupling rendered, visual structures from meaningful and effective navigation experiences can provide richer experiences for screen reader users [47]. Prior research and industry work, with the exception of the _Visa Chart Components_ library [39], has relied heavily on a 1 to 1 relationship between structure (the encoded marks) and navigation. This emerging work is significant, because it paves the way for considering the design dimensions of accessible data interaction and navigation without dependence on a visually encoded space.
_Olli's_ approach has been to build ready-to-go adaptors that automatically build multiple tree structures for a few ecosystems (_Vega_, _Vega-Lite_, and _Observable Plot_) and is entirely uncoupled from a data visualization's graphics. Their approach renders navigable tree structures _underneath_ a visualization.
Other than _Olli_, _Highcharts_[16], _Visa Chart Components_, and _Progressive Accessibility Solutions'_ visualization toolkits [14, 37] also primarily provide tree and list navigation structures across all of their chart types. These toolkits render their structures _upon_ the visualization's graphic space. These tools also provide some degree of support for other assistive technologies and input modalities, although they are limited exclusively to SVG rendering.
Unfortunately, these toolkits lack capabilities for dealing with graph, relational, spatial, diagrammatic, and geographic data structures.
#### 2.2.2 Serial, list-based approaches
Toolkits like _Vega-Lite_[31] and _Observable Plot_ only provide basic screen reader support through ARIA attributes when visualizations are rendered using SVG. These libraries do not currently provide additional access to other assistive technologies and input modalities.
Microsoft's _PowerBI_ largely uses a serial structure, although it has tree-like elements as well. _PowerBI_ generally provides the same access to keyboard users as it does to screen readers, although not completely.
#### 2.2.3 No navigation provided
Other visualization tools, like _ggplot2_ or _Datawrapper_, _Tableau_, as well as both _Vega-Lite_ and _Highcharts_ (when rendering to canvas), produce raster images and have no navigable structure available. Raster, or pixel-based graphics have been an accessibility burden since the early days of graphical user interface development [3]. Practitioners who use these toolkits can only provide alternative text.
### _Considering assistive technologies and input devices_
Modern data visualizations may contain functional capabilities such as the ability to hover, click, select, drag, or perform some analytical tasks over the elements of the visualization space [31]. Virtually all of these analytical capabilities are designed for use with a mouse.
Input device consideration can roughly be organized as either _pointer-based_ (such as a mouse or direct touch) or _non-pointer based_ (which may employ speech recognition or sequential, discrete navigation such as with a keyboard). Assistive pointer-based devices, such as a head-mounted touch stylus, can typically perform any actions that a mouse can and are therefore served by current interactive visualizations. However, assistive non-pointer devices, such as a tongue, foot, or breath-operated switch, are not.
By only providing pointer-based interactivity, modern interactive visualizations exclude users who leverage non-pointer based input, who are most commonly people with motor and dexterity disabilities. And unfortunately, there is a complete lack of engagement with these populations in the data visualization research community [27].
By comparison, the broader accessibility and HCI research communities have rich engagement with interaction and assistive technologies for users with motor and dexterity disabilities. Most research either focuses broadly on physical peripheral devices or sensors [36], wearables [30], or DIY making and fabrication [18].
The DIY making space involves a broad spectrum of complex input devices and materials, such as fabricating with wood and sensors for children with disabilities [22], 3D printed materials for rehabilitation professionals [13], and even using produce-based input (such as bananas and cucumbers) for aging populations [29].
Broadly, both research and practical developments related to accessible, non-pointer input are much further ahead than data visualization research and practice. Our goal for Data Navigator is to provide a technical resource towards engaging this under-addressed space.
## 3 Data Navigator: System Design
We categorized our system design goals into design considerations for _Structure_, _Input_, and _Rendering_:
1. **Generic structure and navigation specification**: Human studies work has validated that lists, tables, trees, and even pseudo-treelike and direct structure types are all valuable to users in different contexts and with different considerations. Our system must be able to work with all of these as well as less frequently-used structures (spatial, relational, geographic, graph, and diagrammatic).
2. **Robust input handling**: Blind and low vision users may use combinations of different assistive technologies, such as magnifiers, voice interfaces, and screen readers. Users with motor impairments may rely on voice, gesture, eye-tracking, keyboard-interface peripherals (such as switch devices), or even fabricated and DIY input devices. Our system must be able to accept and act on input from any of these modalities.
3. **Flexible semantic rendering**: Accessible navigation structures may need to be coupled to an existing visual, rendered independently of it, or layered on top of formats (such as raster images) that carry no semantics of their own. Our system must let practitioners control what is rendered, where, and with which semantics and focus indication.
### Structure
#### 3.1.1 Beyond trees: towards an accessibility graph
The first major contribution in the design of Data Navigator is to use node-edge data as the substrate for our navigation system.
The most important argument in favor of using a graph-based approach is that a graph can construct virtually any other data structure type (see Figure 2), including list, table, tree, spatial, geographic, and diagrammatic structures. Graphs are generic, which enables them to represent structures both in current and future interface practices [12].
To demonstrate our point, the most recent emerging work with advancements in accessible data navigation used node-edge diagrams to demonstrate their tree-like structures [38, 47] similar to Figure 2, Figure 9, and Figure 10. This is because trees are a form of node-edge graph, but with a root, siblings, parents, and children as sub-types of nodes that generally have rules for how they relate to one another.
Node-edge graph structures prioritize direct relationships. Examples of common direct relationships in visualization are boundaries on maps (see Figure 3), flows and cycles, data with multiple high level tree structures pointing to the same child datasets (such as _Olli_ in Figure 2), or even just in diagrammatic, graph-based visualizations.
A graph structure allows for direct access between information elements that are not just part of the input data of 1:1 rendered elements, but may also have perceptual or human-attributed meaning. Examples of this might include semantic or task-based relationships, such as navigating to annotations or callouts, or between visual-analytic features like trends, comparisons, or outliers. Spatial layouts, such as intersections of sets or parallel vectors (see Section 4.3), and even relationships to information outside of a visualization and back into it (like in Figure 7), are also enabled by a graph structure.
#### 3.1.2 Graph structures are more computationally efficient
Data visualizations often portray information that becomes difficult to handle when using trees and lists. The distance users must travel between relational elements is significant in lists while redundancy when navigating relational elements in trees can be problematic.
As an example of this, often a data table or list of locations are used in conjunction to a map, such as listing all 50 states alphabetically along with relevant information. The list itself is expensive to navigate and may not provide any relationship information about which states border others, let alone ways to easily and directly access those states.
Part of the visual design justification of using a map instead of a table is for sighted individuals to understand how geospatial information may interact with a given variable. The spatial relationships matter. But when supplementing the list of states with sub-lists for each state's bordering states (see Figure 3), it produces redundancy in the rendered result. The rendered data contains circular connections between nodes but must render every reference, producing a computational resource creep and cluttered user experience that can be difficult to exit.
#### 3.1.3 Specific edge instances and generic edges
In Data Navigator, nodes are _objects_ that always contain a set of edges, where each edge contains a minimum of 4 pieces of information: a unique identifier, a source, a target, and navigation rules. These properties are only accessed when a navigation event occurs on a node with an edge that contains a reference to a rule for that navigation event. Navigation rules may be unique to an edge instance or shared among other edge instances.
The source and target properties of edges are either ids that reference node instances (see Figure 4) or _functions_ (see Figure 5). Because some edges in a graph may be directed or not, non-directed graphs can use source and target properties to arbitrarily refer to either node attached to an edge.
Generic functions for source or target properties can link nodes to other nodes based on changing content, structure, or behavior that may be difficult or impossible to determine before a user navigates the structure.
Function calling also allows some edges to be _purely_ generic. An example of a reasonable use case of a purely generic edge is in Figure 5, where the source is a function which returns the present node and the target is whichever node the user was on previously. This single edge may then be part of every node's set of edges, enabling users to have a simple _undo_ navigation control without creating an _undo_ edge unique to every source node.
Using this pattern, it is possible to have fully navigable structures using only generic edges.
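As an illustrative sketch (property names are simplified and hypothetical, not Data Navigator's exact specification), a specific edge instance and a purely generic edge might be declared as follows:

```javascript
// Navigation rules are shared, named objects; edges reference them by name.
const navigationRules = {
  left:  { direction: 'source', key: 'ArrowLeft'  },
  right: { direction: 'target', key: 'ArrowRight' },
  undo:  { direction: 'target', key: 'Backspace'  }
};

// A specific edge: source and target are ids of concrete node instances.
const borderEdge = {
  id: 'alabama-mississippi',
  source: 'alabama',
  target: 'mississippi',
  navigationRules: ['left', 'right']
};

// A purely generic edge: source and target are functions resolved on demand,
// so one shared edge can give every node an "undo" move back to the previous node.
const anyReturnEdge = {
  id: 'any-return',
  source: (current) => current,            // wherever the user currently is
  target: (current, previous) => previous, // wherever the user was before
  navigationRules: ['undo']
};

// A node always carries a set of edges (here, referenced by id).
const alabamaNode = { id: 'alabama', edges: ['alabama-mississippi', 'any-return'] };
```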
### Input
#### 3.2.1 Abstracted navigation facilitates agnostic input
Navigation rules in Data Navigator (see Figure 4 and Figure 5) are created alongside the node-edge structure. Edges reference rules for navigation. However, these rules are generic and agnostic to the specifics of input modalities and can be invoked as methods by virtually any detected user input event (see Figure 6).
Figure 4: An example of how a single edge instance references a navigation rule and can even have multiple navigation rules. A navigation rule can be referenced by multiple edges.
Figure 5: A generic edge, such as “any-return” can be applied to any node. Function calls handle dynamically assigning the edge’s source and target nodes on-demand.
Figure 3: **A.** Map of engineers per capita of US states. **B.** Tree representation of the map data where states are listed alphabetically and also include links to neighboring states. The structure repeats itself if users navigate in a loop. **C.** Graph representation with the same navigation potential without redundant rendering.
Navigation rules are objects with a unique name, ideally as a noun or verb in natural language that refers to a direction or location, a movement direction (a binary used to determine moving towards the source or target of an edge), and optionally any known user inputs that activate that navigation, such as a keyboard keypress event name.
It is important for a system to abstract navigation events so that inputs can be uncoupled from the logic of Data Navigator. This allows higher level software or hardware logic to handle input validation while Data Navigator is just responsible for acting on validated input.
Later in our first case example (Section 4.1), we demonstrate an application that handles screen reader, keyboard, mouse and touch (pointer) swiping, hand gestures, typed text, and speech recognition input. Abstract navigation namespaces can be called by any of these input methods.
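A minimal sketch of this input-agnostic dispatch (the `moveTo` function name is hypothetical and stands in for whatever navigation method a Data Navigator structure exposes) might look like the following:

```javascript
// One abstract navigation method; every input modality resolves to a rule name.
function moveTo(ruleName) {
  // In practice: follow the matching edge from the current node and focus the result.
  console.log(`navigate: ${ruleName}`);
}

// Keyboard: validate the keypress, then invoke the rule by name.
document.addEventListener('keydown', (event) => {
  if (event.key === 'ArrowLeft') moveTo('left');
  if (event.key === 'ArrowRight') moveTo('right');
});

// Speech recognition: a recognized phrase maps onto the same rule namespace.
function onSpeechResult(transcript) {
  const phrase = transcript.trim().toLowerCase();
  if (phrase === 'left' || phrase === 'right') moveTo(phrase);
}

// Touch or gesture detection: the sign of a horizontal swipe maps onto the same rules.
function onSwipe(deltaX) {
  moveTo(deltaX < 0 ? 'left' : 'right');
}
```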
Additionally, since navigation rules are flexible, end users can also supply their own key-bind remapping preferences or input validation rules if developers provide them with an interface.
Because calling a navigation method is abstract, users can even supply events from their own input modalities as long they have access to either a text input interface or access to Data Navigator's navigation methods. Our demonstration material (in Section 4.1) also includes handling for DIY fabricated interfaces, which are important in accessibility maker spaces. We chose a produce-based interface [29], since it was an easy and low cost proof of concept.
We believe that enabling agnostic input provides a rich space for future research projects. In addition, browser addons and assistive technologies could both leverage this flexible interface for end users.
#### 3.2.2 Discrete, sequential input opens new avenues
The _keyboard interface_ is considered foundational for many assistive devices, which leverage this technology for discrete, sequential, non-pointer navigation and interaction [43]. Desktop screen readers are the most common example of an assistive technology device that leverages the keyboard interface, however single or limited button switches, sip-and-puff devices, on-screen keyboards, and many refreshable braille displays do as well. Support for the keyboard interface by default in turn provides all discrete, sequential input devices with access as well.
Moreover, by basing Data Navigator's foundational infrastructure on a keyboard-like modality, designers and developers also gain new avenues to imagine how existing direct, pointer-based, or continuous inputs can map onto discrete, sequential navigation experiences.
For example, with mobile screen readers this already happens: screen reader users swipe and tap on their screen to sequentially navigate, but the exact pixel locations of their swiping and tapping generally does not matter. Their current focus position is discrete and determined by the screen reader software.
Data Navigator therefore allows for many new possibilities. One possibility is that sighted mouse and touch users may now also swipe their way through dense plots or use small interfaces (such as on mobile devices) that may otherwise be too hard to precisely tap. Data Navigator optionally removes the accessibility barriers sometimes posed by precision-based input in visualizations.
Data Navigator does not have to be in conflict with precision-based input, either. A discrete, sequential navigation infrastructure can be used in tandem with precision-based pointer events as well as instant access when coupled with voice commands and search features.
### _Rendering_
#### 3.3.1 Flexible node semantics provide freedom
Nodes in Data Navigator are semantically flexible. This is because the marks in a data visualization may represent many things that depend either on the data or on the user interface materials.
Since our toolkit implementation is in JavaScript and HTML, our map example from Figure 3 might use image semantics for states, alongside a description of the data relevant to that node. However, since semantics are flexible in this way, Data Navigator could also be used to integrate into a larger ecosystem, with nodes rendered as hyperlinks to tables or other elements such as in Figure 7.
Fig. 8: **A.** The data specified for a node with a reference to separate data that is used to render that node. **B.** The node will render as a path at the specified Cartesian coordinates. **C.** This rendered node may then be placed over a visual.
Fig. 6: An example navigation rule to move “left” can be called as a method by an event from any input modality. Some examples include common modalities such as touch swiping (**A**) or speaking “left” (**B**). This also includes advanced or future modalities such as gesture recognition (**C**) or touch-activated, fabricated interfaces (**D**).
Fig. 7: An example of how navigation within Data Navigator could use semantic nodes as hyperlinks to provide access to other areas in an application. Alabama has a child node “Counties”, which is rendered as a semantic hyperlink.
The concept of using node-edge graphs can even extend to have "nodes" that are entirely different parts of a document or tool, as well as integrated into the explicit structure provided by Data Navigator. In some accessibility toolkits, nodes are geometries without functional semantics [31] or list items nested within lists [1]. But in Data Navigator, nodes can semantically be buttons, links, or any HTML element. Interactive data visualizations sometimes demand more flexible node semantics than geometries or lists.
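As a small sketch of what this flexibility might look like (the node descriptions and renderer below are illustrative, not Data Navigator's exact API):

```javascript
// Each node declares the HTML element it should be rendered as, so its exposed
// semantics match its behavior: plain figures, links into the wider page, or buttons.
const nodes = [
  { id: 'alabama',  element: 'figure', label: 'Alabama' },
  { id: 'counties', element: 'a',      label: 'Counties data table', href: '#county-table' },
  { id: 'filter',   element: 'button', label: 'Filter to this state' }
];

// A minimal renderer: create the declared element, label it, and make it focusable.
function renderNode(node) {
  const el = document.createElement(node.element);
  el.setAttribute('aria-label', node.label);
  if (node.href) el.setAttribute('href', node.href);
  el.setAttribute('tabindex', '-1'); // focusable programmatically, not in the tab order
  document.body.appendChild(el);
  return el;
}
```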
#### 3.3.2 Loose-coupling to visuals enables expressiveness
One of the most significant technical limitations of existing data visualization toolkits with regards to accessibility is that they rely on visual substrate, or visual materials, in order to produce data visualizations. In the case of static, raster images such as png files or WebGL and canvas elements on the web, there are no interface properties at all exposed to screen readers for programmatic exploration and interaction.
If raster images are used, they generally cannot be changed after rendering. However, according to web accessibility standards, elements must have a visual indicator provided when focused [40].
Since Data Navigator navigates using focus, an indicator must be rendered alongside the node semantics. But _what_ is focused visually and _where_ it is depends on different design needs.
In _Visa Chart Components_, chart elements can be _selected_, so the focus indication is visible over the existing elements in the chart space. The design choice to have interactive visual elements located within a chart or graph is also common in other toolkits that provide accessible focus indication, such as _Highcharts_, _PowerBI_, and _SAS Graphics Accelerator_.
However, some visualization toolkits create accessible structures entirely uncoupled from visual space [1], so focus indication is provided beneath or beside the chart, not over it.
Due to the different ways that accessibility might be provided, Data Navigator enables developers to have complete control over the rendering of which focus elements they want, in what styling, and where. This can accommodate both un-coupled and visually-coupled approaches to focusing and more.
Data Navigator's focus is _uncoupled_ by default and may even be used independent of any existing graphics at all. Rendering information may be passed to Data Navigator for it to render (like in Figure 8) or developers can provide their own rendered elements and simply use Data Navigator to move between them.
Because of Data Navigator's approach to rendering focusable elements, designers and developers can provide fully customized annotations, graphics, text, or marks that may not be part of the original visual space or elements. One example of this might be adding an outlined path to a collective cross-stack group of bars in a stacked bar chart (see Figure 8).
Loose-coupling in this way provides robust flexibility to designers and developers to handle navigation paths and stories through a data visualization, even in bespoke or hand-crafted ways.
#### 3.3.3 On-demand node rendering is efficient
Practitioners care about performance and so do users. Practitioner toolkits often focus on lazy-loading techniques where accessibility elements are rendered on-demand rather than all in-memory up front [1, 9, 47].
Data Navigator's nodes are rendered _on-demand_ by default. Data Navigator only renders the node that is about to be focused by the user and after it is focused, the previously focused node is deleted from memory. This technique has advantages in cases where datasets are large or users have lower computational bandwidth available. However, there are cases where practitioners may want to render all of Data Navigator's structure in memory, such as server-side rendering or equivalent. Pre-rendering may be optionally enabled.
## 4 Case Examples with Data Navigator
We built example prototypes using our JavaScript implementation of Data Navigator, available open source at our GitHub repository.
Our first two prototype case examples represent some of the most powerful parts of Data Navigator as a system while reproducing known and effective data navigation patterns from existing industry and research projects. We provide a final case example as a co-design session that demonstrates how Data Navigator may be used to rapidly build new designs.
Fig. 9: **A. A raster (png) visualization of a stacked bar chart showing how 4 English teams performed across 3 major trophy contests. B. An example navigation schema that allows children nodes to have 2 parents (two tree structures intersecting), one for contests and one for teams. C. An example of Data Navigator’s navigation logic abstraction, which allows edge types to have programmatic sources, targets, and rules, such as a single rule that gives all nodes a edge to exit the visualization. D. An instantiation of the schema, showing all corresponding rendered nodes and their edge types according to the schema design and navigation rules.**
### Augmenting a Static, Raster Visualization
The first case example (shown in Figure 9) builds on an online JavaScript visualization library, _Highcharts_. _Highcharts_ already provides relatively robust data navigation handling out of the box for screen reader, keyboard, and even voice recognition interface technologies, such as _Dragon Naturally Speaking_. However, these capabilities are only provided when the chart is rendered using SVG. Developers have several other rendering options available, including WebGL, which is significantly more efficient [15]. We wanted to demonstrate that Data Navigator can provide a navigable data structure even if the underlying visualization is a raster image.
For our case example, we exported a png file using the built in menu of a sample stacked bar chart retrieved from their online demos [17]. We selected a stacked bar chart because it allows us to demonstrate how two tree structures may interact and share the same children nodes.
We recorded the data and hand-created all of the geometries and their spatial coordinates using _Figma_, by tracing lines over the raster image's geometries (see samples of the data and traced geometries in Figure 8). While this method was efficient for building an initial prototype, Section 4.2 engages deterministic methods for extracting and producing the nodes, edges, and descriptions required by Data Navigator automatically and at scale.
The visualization we selected represents 4 English football teams, _Arsenal_, _Chelsea_, _Liverpool_, and _Manchester United_, and how many trophies they won across 3 contests, _BPL_, _FA Cup_, and _CL_.
We chose a schema design that arranged the _contests_ to be navigable across one dimension of movement (_up_ and _down_) while the _teams_ are navigable across a perpendicular dimension of movement (_left_ and _right_). This 2-axis style of navigation is used by _Highcharts_ (when rendering as SVG) and _Visa Chart Components_. We also chose these directions because their visual affordance happens to be closely coupled with the navigation design (the x axis is ordered _left_ to _right_ and, since the bars are stacked, _up_ and _down_ move within the stack). These directions can also be applied to the axis and legend categories, moving _left_ and _right_ across entire _team_ stacks or _up_ and _down_ across entire _contest_ groupings.
Using a keyboard, a user might enter this schema and navigate to the legend, where they could press _Enter_ to then focus the legend's first child, pictured in Figure 8. Pressing up or down navigates in a circular fashion among the _contest_ groupings. Pressing _Enter_ again then focuses the first child element of that _contest_, all of which are in the _Arsenal_ group, since it is the first group along the x axis. A user can then navigate _up_, _down_, _left_, and _right_ among children. Pressing the _L key_ moves the user back up towards the contest while pressing _Backspace_ moves the user up towards the x axis. The x axis and _team_ groupings represent the second tree, which intersects the first (the _contests_).
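A fragment of this schema, sketched as plain data (illustrative names and shapes, not the full specification used in the case example), shows how one bar segment participates in both trees at once:

```javascript
// "Arsenal × BPL" belongs to the Arsenal team tree and the BPL contest tree:
// up/down walks Arsenal's stack across contests, left/right walks the BPL contest across teams.
const edges = {
  'arsenal_bpl--arsenal_facup': { source: 'arsenal_bpl', target: 'arsenal_facup', rules: ['up', 'down'] },
  'arsenal_bpl--chelsea_bpl':   { source: 'arsenal_bpl', target: 'chelsea_bpl',   rules: ['left', 'right'] },
  'arsenal_bpl--bpl':           { source: 'arsenal_bpl', target: 'bpl',           rules: ['toContest'] }, // L key
  'arsenal_bpl--arsenal':       { source: 'arsenal_bpl', target: 'arsenal',       rules: ['toTeam'] }     // Backspace
};

const arsenalBplNode = {
  id: 'arsenal_bpl',
  description: 'Arsenal, BPL.',
  edges: Object.keys(edges)
};
```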
Our first case example includes handling for additional input modalities beyond screen readers and keyboards, including a hand gesture recognition model, swipe-based touch navigation, and text input (which can be controlled using voice recognition software).
#### 4.1.1 Discussion
Our first case example demonstrates several of the most important capabilities of Data Navigator, namely that practitioners can add accessible navigation to previously inaccessible, static, raster image formats and that a wide variety of input modalities are supported easily.
Widely-used toolkits like _Vega-Lite_, _Highcharts_, and _D3_[2] allow practitioners to choose SVG and canvas-based rendering methods. Data Navigator's affordances help overcome the lack of semantic structure in canvas-based rendering, allowing developers to take advantage of its processing and memory efficiency.
Notably in addition to these capabilities, the visual focus highlighting added was entirely bespoke (as in Figure 8) and the navigation paths through the visual were based on our design intentions, not an extracted view or underlying architecture such as render order. This demonstrates that our system provides a significant degree of freedom and control for designers and developers.
As a final discussion point, the resulting visualization contains no automatically detectable accessibility conformance failures according to WebAIM's accessibility evaluation tool, _WAVE_[44]. It is important for any technology developed to also meet minimum requirements for accessibility [10, 24, 47, 25], even when following best-practices and research.
### Building Data Navigation for a Toolkit Ecosystem
Our second case example, shown in Figure 10, builds on _Vega-Lite_. As shown in Figure 2, _Vega-Lite_ offers basic screen reader navigation but provides no navigation at all when rendered using canvas.
While it might be a tedious design choice to allow every mark in a visualization to be serially accessible to screen reader users, we nevertheless set out to build a generic ingestion function that would take a _Vega-Lite View_ object and deterministically recreate their existing SVG navigation structure in Data Navigator. This way users would have the same experience between SVG rendered charts and all current and future rendering options that _Vega-Lite_ offers to developers.
Notably, _Vega-Lite_ does not explicitly manipulate the navigation order at all when rendering with SVG. ARIA is simply provided to allow screen reader users to access each mark in the visualization in the order the mark appears in the DOM (which is the order it was rendered). The legend appears after the marks in our schema for this reason because _Vega-Lite_ renders the legend after marks. This choice of ordering is for visual reasons: z-axis placement is currently based on render order in SVG and _Vega-Lite_ wants their legend visually on top of the rendered marks.
In addition to mimicking their existing SVG navigation strategy, we also created a way to nest all of the marks within a group so that users can skip past them and drill in on-demand, which is a valuable pattern when dealing with situations where providing a mark-level fidelity of information may not be relevant to a user's needs by default [35, 47].
Figure 10: **A.** Various charts from _Vega-Lite_ share the same general structures with each other when rendered using canvas (**B**) or SVG (**C**). **D.** With Data Navigator, we replicated the existing SVG navigation pattern (**C**) but used a canvas-based rendering for the visualization. **E.** We also improved the navigation scheme to nest marks within a mark group to allow users to skip them, if needed.
In order to deterministically supply Data Navigator with accurate information about any given _Vega-Lite_ visualization, we built 3 functions: one that takes a _Vega-Lite View_ as input and extracts meaningful nodes, one that produces edges based on those nodes, and one to describe our nodes in a meaningful way for screen reader users. These generic functions technically work on all existing _Vega-Lite_ charts, however some are more useful out of the box than others due to the type of marks involved.
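The rough shape of these three helpers is sketched below (heavily simplified, with hypothetical names, and assuming the chart's data tuples and encoded field names have already been retrieved from the _Vega-Lite_ _View_):

```javascript
// 1. Extract one node per data tuple already retrieved from the Vega-Lite View.
function extractNodes(tuples, markType) {
  return tuples.map((datum, i) => ({ id: `mark-${i}`, data: datum, type: markType }));
}

// 2. Produce edges that mirror Vega-Lite's SVG reading order (render order),
//    plus a parent "mark group" node so users can skip past the marks entirely.
function buildEdges(nodes) {
  const edges = { 'group--first': { source: 'mark-group', target: nodes[0].id, rules: ['enter', 'exit'] } };
  nodes.forEach((node, i) => {
    const next = nodes[i + 1];
    if (next) {
      edges[`${node.id}--${next.id}`] = { source: node.id, target: next.id, rules: ['previous', 'next'] };
    }
  });
  return edges;
}

// 3. Describe a node for screen reader users from its encoded fields,
//    e.g. "fieldA: 12; fieldB: 34."
function describeNode(node, encodingFields) {
  return encodingFields.map((field) => `${field}: ${node.data[field]}`).join('; ') + '.';
}
```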
#### 4.2.1 Discussion
This case example demonstrates that ecosystem-level remediation and customization is not only possible for toolkit builders but Data Navigator offers robust potential. Data Navigator's structure, input, and rendering capabilities are all flexible and can be adjusted to suit the needs of a specific toolkit's design and intended use.
Many visualization libraries may not even provide screen reader accessible SVG using ARIA-based approaches but do have a consistent underlying architectural pattern. Some libraries have a consistent method for converting data into visual formats, readable text labels, and interaction logic. Strong contenders would be visualization libraries popular in online, web-based data science notebooks, like _ggplot2_ in R or _matplotlib_ for Python, which typically render only rasterized PNGs or SVG without semantics.
Toolkits with consistent underlying architecture would allow toolkit developers, not just developers who _use_ toolkits, to remediate and customize their navigation accessibility using a generic approach.
Enabling accessibility at the toolkit level allows all downstream use of that tool to have better defaults, options, and resources available for building more accessible outcomes for end users.
Many libraries and toolkits provide users with a level of functional defaults and abstract conciseness so that users don't have to worry about low-level geometric considerations [31].
Data Navigator allows toolkits developers to also provide their users with abstractions and defaults for accessibility that make sense for their ecosystem.
Despite our schema recreating a screen reader experience based on SVG (and improving it), Data Navigator's additional features also apply: users are able to leverage a much wider array of input modalities.
_Vega-Lite_ provides many ways to make marks clickable and even perform complex actions using mouse-based input. While Data Navigator does not engage accessible brush and drag-based inputs, it does provide keyboard-only access by default, which can be used to make events previously only accessible to mouse clicking available to many other technologies. This is an improvement over _Vega-Lite_'s SVG + ARIA rendering option.
When measuring performance across test datasets containing 406 and 20,300 data points in a scatter plot, Data Navigator increases initialization time by \(\sim\)0.45 to \(\sim\)1.5ms respectively. Our extraction functions specific to _Vega-Lite_ increase initialization between \(\sim\)4.8 and \(\sim\)8.5ms respectively. Given that our benchmark testing for _Vega-Lite_'s SVG rendering initialized in \(\sim\)1,800ms for 20,300 data points and canvas in \(\sim\)700ms, we do not anticipate that Data Navigator will have a negative impact on performance in most visualization contexts.
### Co-designing Novel Data Navigation Prototypes
Recent projects in accessible data navigation have involved extensive co-design work with people with disabilities, ranging on the magnitude of months with as many as 10 co-designers at a time [24, 25, 38, 47].
However, many visualization experiences may be authored at smaller scales, with fewer designers and less time, such as the development of a prototype or demonstration of an emerging idea. In practical or industry contexts, co-design sessions (and design sessions in general) may be much shorter. The goal of these co-design sessions is simply to create an artifact with the artifact's intended users.
Since our paper is a contribution towards practical outcomes, we simulated a light co-design session with the aim of producing low-fidelity prototypes of novel data interaction patterns.
#### 4.3.1 Co-design Session Methods and Setup
Authors Frank Elavsky (sighted) and Lucas Nadolskis (blind) set out with the goal of developing screen-reader friendly prototypes that can explore geometric and mathematical models produced by the math diagramming tool _Penrose_[46].
Nadolskis is a neuroscience engineer who is a native screen reader user and uses both mathematical concepts as well as data-related tasks in his research. Elavsky proposed a series of possible math-based visualization types produced by _Penrose_ to build prototypes for, and Nadolskis selected _set_ and _vector_ diagrams as the two worth exploring first. The justification for this selection is that understanding these two concepts is important for work in data science, programming, and more advanced math concepts.
In particular, we grounded the context of our contribution in a hypothetical classroom setting, where a screen reader user who is a student will have access to the equations in both raw text and _MathJax_. We want to provide an experience that does not replace the existing resources screen reader users have to learn in classrooms but rather supplements them.
At our disposal for our co-design session was a _Dot Pad_[6], which is a refreshable tactile braille display. Our _Dot Pad_ enabled Elavsky to produce something visual and then translate it into the display for Nadolskis. Similar to de Greef et al. [5], we used a tactile interface as an intermediary to help us get a shared sense of the meaningful spatial features of our figures.
Elavsky started with a reference diagram and then traced a wide variety of every possible node that might be worth navigating to in the diagram (see Figure 11).
We selected which nodes were most important in each diagram, how to navigate between them, and how we wanted to render their visuals and semantics.
The selection of our problem space, scope of solutions, context of contribution, general discussion, and preparation of materials took approximately 12 hours of work over 2 weeks. The exploration of our prototype design space for our 2 prototypes took 1 hour. Building the prototypes took 2 hours.
Fig. 11: Our material preparation process involved taking a reference (**A**), tracing it (**B**), and rendering it on a tactile display (**C**).
Fig. 12: **A**. A reference image from _Penrose_ of a set diagram containing two sets intersecting. **B**. A diagram of our proposed structure, with three levels of information.
#### 4.3.2 Creating a Navigable Set Diagram
Our first prototype was a set diagram (see Figure 12). For our structure, we decided that it has 3 important semantic levels: the high level, the inclusion level, and the exclusion level. The inclusion level is first and the siblings are all sets or subsets that include other sets. The exclusion level is beneath and contains sets or subsets that are exclusive to the sets they belong to, which are accessed by drilling down from a set.
Our schema design starts with a user encountering the root level (1) and may optionally drill in to the first child of the next level (2) using the _Enter_ key. The user may navigate siblings at this level using _right_ and _left_ directions, but this level is not circular (like in Figure 9) to maintain the spatial relationships. The user may drill in on either set again to view the non-intersecting portion of that set. Any node can drill up, towards the root, using _Escape_ or _Backspace_.
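A minimal sketch of this structure as node-edge data, loosely following the nodes, edges, and navigation-rules vocabulary used above; every identifier below (such as `set_A`) is an illustrative placeholder rather than a name from the prototype, and the literal is written in Python purely for brevity:

```python
# Hypothetical node/edge listing for the two-set diagram; Backspace behaves like Escape.
set_diagram = {
    "nodes": ["root", "set_A", "set_B", "only_A", "only_B"],
    "edges": [
        # (source, target, key that traverses the edge)
        ("root",   "set_A",  "Enter"),       # drill in to the inclusion level
        ("set_A",  "set_B",  "ArrowRight"),  # siblings; deliberately not circular
        ("set_B",  "set_A",  "ArrowLeft"),
        ("set_A",  "only_A", "Enter"),       # drill in to the exclusion level
        ("set_B",  "only_B", "Enter"),
        ("set_A",  "root",   "Escape"),      # drill up toward the root
        ("set_B",  "root",   "Escape"),
        ("only_A", "set_A",  "Escape"),
        ("only_B", "set_B",  "Escape"),
    ],
}
```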
#### 4.3.3 Creating a Navigable Parallel Vectors Diagram
Our second prototype was a parallel vectors diagram (see Figure 13). For the structure of this diagram we created a first level group that contains each vector and vector sum. The sibling to this grouping is another group which organizes sub-equations related to calculating each parallel vector. The sub equations each contain children that pair the sub equation with the vector it is parallel to.
Similar to Figure 12, this figure maintains spatial relationships along the x dimension, does not have circular navigation, and allows drilling in and out.
#### 4.3.4 Discussion
After our co-design sessions, our visual materials and navigation structures were used in the creation of functional prototypes. We additionally hand-crafted the descriptions and semantics for each node.
Accessibility work often takes a long time, from co-design to building to validation. But we believe that a well-articulated and useful design space, with tools that provide expressiveness and control over the dimensions of that design space, can improve how this work is done. The above case example demonstrates how builders who are thinking about data navigation design can rapidly scaffold prototypes for use in Data Navigator.
In particular, Data Navigator's design as a system gave our co-design sessions _vocabulary correspondence_. Data Navigator's language helped us focus on the _nodes_, _edges_, and _navigation rules_ for our _structure_ while we also explicitly discussed the _rendering_ details of _coordinates_, _shapes_, _styling_, and _semantics_ for each node. The vocabulary of our design space directly corresponded with code details required to create a functional prototype.
We note that this co-design work is not intended to contribute a _validated_ set of designs. Rather, our contribution with this case example is to demonstrate that within the larger ecosystem of a research venture, Data Navigator is an improvement over designing and building navigable structures from scratch.
## 5 Limitations and Future Work
Data Navigator is a technical contribution, a system designed for appropriation [7] and adaptation [45] in different applied contexts. It is, as Louridas writes, a _technical material_: a technology that enables new and useful capabilities [23]. While beyond the scope of the current paper, a critical next step for future work is to conduct separate studies with both practitioners and end users to evaluate Data Navigator's affordances.
Unlike toolkits that provide an end-to-end development pipeline for accessible visualization, Data Navigator serves as a low-level building block or material (like concrete). As such, one potential limitation of the framework is that it can be used to build both curbs (which are inaccessible) as well as ramps and _curb-cuts_ (which may be more broadly accessible).
Even when building more accessible curb-cuts, we stress the importance of actively involving people with disabilities in the design and validation of new ideas, in line with prior work [24, 25, 28, 47]. For example, while our first two case examples replicate co-designed and validated existing work, our third case example's co-designed prototypes would need to be validated with relevant stakeholders before wider implementation. Our system does not _guarantee_ any sort of accessibility on its own.
The diverse array of modalities supported by Data Navigator opens an immediate line of future work in engaging people with a correspondingly diverse set of disabilities. While recent explorations into accessible data visualization have been inspiring, this trend has primarily focused on the experiences of people with visual disabilities [21, 27, 10]. More research should be conducted with other populations, particularly people who leverage assistive technologies beyond screen readers, to understand how interactive data visualizations can be better designed to serve them.
Finally, there are significant opportunities to improve the efficiency of our approach, including developing deterministic and non-deterministic methods to generate node-edge data and navigation rules from a visualization. Ma'ayan et al. stress in particular that reducing tedious complexity can contribute to the success of a well-designed toolkit [26]. Future work should identify areas where graphical interface tools or higher-level specifications can improve the experience of working with Data Navigator.
## 6 Conclusion
Practitioners at large continue to produce inaccessible interactive data visualizations, excluding people with disabilities. We believe that the burden of remediation first starts with the developers who build and maintain the toolkits that practitioners use.
However, the challenges faced by toolkit builders are significant. Most toolkits lack an underlying, navigable structure, support for broad input modalities used by people with disabilities, and meaningful, semantic rendering.
To engage these limitations we present Data Navigator, a technical contribution that builds on existing work towards a more generalizable accessibility-centered toolkit for creating data navigation interfaces. Data Navigator is designed for use by practitioners who both build and use existing toolkits and want a tool to make their data visualizations and interfaces more accessible.
We contribute a high-level system design for our node-edge graph-based approach that can be used to build data structures that are navigable by a wide array of assistive technologies and input modalities. Data Navigator is generic and can scaffold list, tree, graph, relational, spatial, diagrammatic, and geographic types of data structures common to data visualization.
Our system is designed to encourage both remediation of existing inaccessible systems and visualization formats as well as help scaffold the design of novel, future projects. We look forward to further research that explores the possibilities enabled by Data Navigator.
Figure 13: **A. A reference image from _Penrose_ of a parallel vectors diagram. B. A diagram of our proposed structure, with two main sub-categories of information: understanding the vectors and their parallels.**
## Acknowledgments
We want to take this time to express immense gratitude for Reviewer 1, whose generous and thorough feedback helped this project find its true vision. Elavsky also wants to thank the many folks who have encouraged this project's ideation and formation over the last few years.
This work was supported by a grant from Apple, Inc. Any views, opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and should not be interpreted as reflecting the views, policies or position, either expressed or implied, of Apple Inc.
|
2310.12667 | STANLEY: Stochastic Gradient Anisotropic Langevin Dynamics for Learning
Energy-Based Models | We propose in this paper, STANLEY, a STochastic gradient ANisotropic LangEvin
dYnamics, for sampling high dimensional data. With the growing efficacy and
potential of Energy-Based modeling, also known as non-normalized probabilistic
modeling, for modeling a generative process of different natures of high
dimensional data observations, we present an end-to-end learning algorithm for
Energy-Based models (EBM) with the purpose of improving the quality of the
resulting sampled data points. While the unknown normalizing constant of EBMs
makes the training procedure intractable, resorting to Markov Chain Monte Carlo
(MCMC) is in general a viable option. Realizing what MCMC entails for the EBM
training, we propose in this paper, a novel high dimensional sampling method,
based on an anisotropic stepsize and a gradient-informed covariance matrix,
embedded into a discretized Langevin diffusion. We motivate the necessity for
an anisotropic update of the negative samples in the Markov Chain by the
nonlinearity of the backbone of the EBM, here a Convolutional Neural Network.
Our resulting method, namely STANLEY, is an optimization algorithm for training
Energy-Based models via our newly introduced MCMC method. We provide a
theoretical understanding of our sampling scheme by proving that the sampler
leads to a geometrically uniformly ergodic Markov Chain. Several image
generation experiments are provided in our paper to show the effectiveness of
our method. | Belhal Karimi, Jianwen Xie, Ping Li | 2023-10-19T11:55:16Z | http://arxiv.org/abs/2310.12667v1 | # STANLEY: Stochastic Gradient Anisotropic Langevin Dynamics for Learning Energy-Based Models
###### Abstract
We propose in this paper, **STANLEY**, a **ST**ochastic gradient **AN**isotropic **L**ang**E**vin d**Y**namics, for sampling high dimensional data. With the growing efficacy and potential of Energy-Based modeling, also known as non-normalized probabilistic modeling, for modeling a generative process of different natures of high dimensional data observations, we present an end-to-end learning algorithm for Energy-Based models (EBM) with the purpose of improving the quality of the resulting sampled data points. While the unknown normalizing constant of EBMs makes the training procedure intractable, resorting to Markov Chain Monte Carlo (MCMC) is in general a viable option. Realizing what MCMC entails for the EBM training, we propose in this paper, a novel high dimensional sampling method, based on an anisotropic stepsize and a gradient-informed covariance matrix, embedded into a discretized Langevin diffusion. We motivate the necessity for an anisotropic update of the negative samples in the Markov Chain by the nonlinearity of the backbone of the EBM, here a Convolutional Neural Network. Our resulting method, namely STANLEY, is an optimization algorithm for training Energy-Based models via our newly introduced MCMC method. We provide a theoretical understanding of our sampling scheme by proving that the sampler leads to a geometrically uniformly ergodic Markov Chain. Several image generation experiments are provided in our paper to show the effectiveness of our method.
## 1 Introduction
The modeling of a data generating process is critical for many modern learning tasks. A growing interest in generative models within the realm of computer vision has led to multiple interesting solutions. In particular, Energy-based models (EBM) (Zhu et al., 1998; LeCun et al., 2006), are a class of generative models that learns high dimensional and complex (in terms of landscape) representation/distribution of the input data. EBMs have been used in several applications including computer vision (Ngiam et al., 2011; Xie et al., 2016; Du and Mordatch, 2019), natural language processing (Mikolov et al., 2013; Deng et al., 2020), density estimation (Li et al., 2019; Song et al., 2019) and reinforcement learning (Haarnoja et al., 2017). Formally, EBMs are built upon an unnormalized log probability, called the energy function, that is not required to sum to one as standard log probability functions. This noticeable feature allows for more freedom in the way one parameterizes the EBM. For instance, Convolutional Neural Network can be employed to parameterize this function, see Xie et al. (2016).
The training procedure of such models consists of finding an energy function that assigns to lower energies to observations than unobserved points. This phase can be cast as an optimization task and several ways are possible to solve it. In this paper, we will focus on training the EBM via Maximum Likelihood Estimation (MLE). Particularly, while using MLE to fit the EBM on observed data, the high non-convexity of the loss function leads to a non closed form maximization step. In general, gradient based optimization methods are thus used during that phase. Besides, given the intractability of the normalizing constant of our model, the aforementioned gradient, which is an intractable integral, needs to be approximated. A popular and efficient way to conduct such approximation is to use Monte Carlo approximation where the samples are obtained via Markov Chain Monte Carlo (MCMC) (Meyn and Tweedie, 2012). The goal of this embedded MCMC procedure while training the EBM is to synthesize new examples of the input data and use them to approximate quantities of interest.
Hence, the sampling phase is crucial for both the EBM training speed and its final accuracy in generating new synthetic samples. The computational burden of those MCMC transitions, at each iteration of the EBM training procedure, is alleviated via different techniques in the literature. For instance, in Nijkamp et al. (2019), the authors develop a short-run MCMC as a flow-based generator mechanism despite its non convergence property. Other principled approach, as in Hinton (2002), keeps in memory the final chain state under the previous global model parameter and uses it as the initialization of the current chain. The heuristic of such approach is that along the EBM iterations, the conditional distributions, depending on the model parameter, are more and more similar and thus using a good sample from the previous chain is in general a good sample of the current one. Though, this method can be limited during the first iterations of the EBM training since when the model parameter changes drastically, the conditional distributions do change too, and samples from two different chains can be quite inconsistent. Several extensions modifying the way the chain is initialized can be found in Welling and Hinton (2002); Gao et al. (2018); Du and Mordatch (2019).
An interesting line of work in the realm of MCMC-based EBM tackles the biases induced by stopping the MCMC runs too early. Indeed, it is known, see Meyn and Tweedie (2012), that before convergence, MCMC samples are biased and thus correcting this bias while keeping a short and less expensive run is an appealing option. Several contributions aiming at removing this bias for improved MCMC training include coupling MCMC chains, see Qiu et al. (2020); Jacob et al. (2020) or simply estimating this bias and correct the chain afterwards, see Du et al. (2021).
Here, our work is in line with the context of - high-dimensional data, - EBM parameterized by deep neural networks and - MLE-based optimization via MCMC, which make our method particularly attractive to all of the above combined. We also consider the case of a short-run MCMC for the training of an EBM. Rather than focusing on debiasing the chain, we develop a new sampling scheme where the goal is to obtain better samples from the target distribution in fewer MCMC transitions. We consider that the shape of the target distribution, which inspires our proposed method, is of utmost importance to obtain such negative
samples. Our **contributions** are summarized below:
1. We develop STANLEY, an Energy-Based model training method that embeds a newly proposed _convergent_ and _efficient_ MCMC sampling scheme, focusing on curvature informed metrics of the target distribution one wants to sample from.
2. Based on an anisotropic stepsize, our method, which is an improvement of the Langevin Dynamics, achieves to obtain negative samples from the Energy-Based model data distribution and improves the overall optimization algorithm.
3. We prove the geometric ergodicity uniformly on any compact set of our MCMC method assuming some regularity conditions on the target distribution and on the backbone model of the EBM.
4. We empirically assess the effectiveness of our method on several image generation tasks, both on synthetic and real datasets including the Oxford Flowers 102 dataset, CIFAR-10 and CelebA. We conclude the work with an Image inpainting experiment on a benchmark dataset.
**Roadmap**. Section 2 introduces important notations and related work. Section 3 develops the main algorithmic contribution of this paper, namely STANLEY. Section 4 presents our main theoretical results focusing on the ergodicity of the proposed MCMC sampling method. Section 5 presents several image generation experiments on both synthetic and real datasets. The complete proofs of our theoretical results can be found in the supplementary material.
## 2 On MCMC-based Energy Based Models
Given a stream of input data noted \(x\in\mathcal{X}\subset\mathbb{R}^{p}\), the EBM is a Gibbs distribution defined as follows:
\[p_{\theta}(x)=\frac{1}{Z(\theta)}{\exp(f_{\theta}(x))}\, \tag{1}\]
where \(\theta\in\Theta\subset\mathbb{R}^{d}\) denotes the global parameters vector of our model and \(Z(\theta):=\int_{x}{\exp(f_{\theta}(x))}\mathrm{d}x\) is the normalizing constant (with respect to \(x\)). In particular, \(f_{\theta}(x):\mathcal{X}\rightarrow\mathbb{R}\), is the energy function (up to a sign) that can be parameterized by a Convolutional Neural Network for instance. The natural way of fitting model (1) is to employ Maximum Likelihood Estimation (MLE) maximizing the marginal likelihood \(p(\theta)\), _i.e.,_ finding the vector \(\theta^{*}\) such that for any \(x\in\mathcal{X}\),
\[\theta^{*}=\arg\max_{\theta\in\Theta}\mathsf{L}(\theta)=\arg\max_{\theta\in \Theta}\mathbb{E}_{q(x)}[\log p_{\theta}(x)]\, \tag{2}\]
where \(q(x)\) denotes the true distribution of the input data \(x\). The optimization task (2) is not tractable in closed form and requires an iterative procedure in order to be solved. The standard algorithm used to train EBMs is Stochastic Gradient Descent (SGD), see Robbins and Monro (1951); Bottou et al. (2007). SGD requires having access to the gradient of the objective function \(\log p(\theta)\) which requires computing an intractable integral, due to the high nonlinearity of the generally utilized parameterized model \(f_{\theta}(x)\). Given the general form defined in (1), we have that:
\[\nabla\mathsf{L}(\theta)=\mathbb{E}_{q(x)}[\nabla_{\theta}f_{\theta}(x)]- \mathbb{E}_{p_{\theta}(x)}[\nabla_{\theta}f_{\theta}(x)]\,\]
and a simple Monte Carlo approximation of \(\nabla\log p(\theta)\) yields the following expression of the gradient
\[\nabla\mathsf{L}(\theta)\approx\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}f_{ \theta}(x_{i}^{q})-\frac{1}{M}\sum_{m=1}^{M}\nabla_{\theta}f_{\theta}(z_{m})\, \tag{3}\]
where \(\{z_{m}\}_{m=1}^{M}\) are samples obtained from the EBM \(p_{\theta}(x)\) and \(\{x_{i}^{q}\}_{i=1}^{n}\) are drawn uniformly from the true data distribution \(q(x)\). While drawing samples from the data distribution is trivial, the challenge during the EBM training phase is to obtain samples from the EBM distribution \(p_{\theta}(x)\) for any model parameter \(\theta\in\Theta\). This task is generally performed using MCMC methods. State-of-the-art MCMC used in the EBM literature include Langevin Dynamics, see Grenander and Miller (1994); Roberts and Rosenthal (1998); Roberts and Tweedie (1996) and Hamiltonian Monte Carlo (HMC), see Neal (2011). Those methods are detailed in the sequel and are important concepts of our contribution.
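For concreteness, the estimator (3) is typically implemented as a surrogate loss whose automatic-differentiation gradient equals \(-\nabla\mathsf{L}(\theta)\); a minimal PyTorch-style sketch, assuming `f_theta` is the ConvNet energy and `z_neg` holds the MCMC samples, is:

```python
import torch

def ebm_surrogate_loss(f_theta, x_pos, z_neg):
    # Minimizing this surrogate performs gradient ascent on L(theta) in Eq. (2):
    # its gradient is (1/M) sum_m grad f_theta(z_m) - (1/n) sum_i grad f_theta(x_i^q),
    # i.e. the negative of the estimator in Eq. (3).
    positive_phase = f_theta(x_pos).mean()            # data ("positive") samples
    negative_phase = f_theta(z_neg.detach()).mean()   # MCMC ("negative") samples
    return negative_phase - positive_phase
```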
**Energy Based Models.** Energy based models are a class of generative models that leverage the power of Gibbs potential and high dimensional sampling techniques to produce high quality synthetic image samples. EBMs are powerful tools for generative modeling tasks, as a building block for a wide variety of tasks. The main purpose of EBMs is to learn an energy function (1) that assigns low energy to a stream of observation and high energy values to other inputs. Learning of such models is done via MLE (Xie et al., 2016; Du and Mordatch, 2019) or Score Matching (Hyvarinen, 2005) or Noise Constrastive Estimation (Gao et al., 2020). In several general applications, authors leverage the power of EBMs for develop an energy-based optimal policy where the parameters of that energy function are provided by the reward of the overall system. Learning EBMs with alternative strategies include contrastive divergence (CD) (Hinton, 2002; Tieleman, 2008), noise contrastive estimation (NCE) (Gutmann and Hyvarinen, 2010; Gao et al., 2020), introspective neural networks (INN) (Lazarow et al., 2017; Jin et al., 2017; Lee et al., 2018), cooperative networks (CoopNets) (Xie et al., 2018, 2020, 2021c, 2022a, 2022b, 2022c, 2022d, 2023), f-divergence (Yu et al., 2020), and triangle divergence (Han et al., 2019, 2020). Recently, EBMs parameterized by modern neural networks have drawn much attention from the computer vision and machine learning communities. Successful applications with EBMs include image generation (Xie et al., 2016; Gao et al., 2018; Du and Mordatch, 2019; Zhao et al., 2021; Zheng et al., 2021), videos (Xie et al., 2017, 2021d), 3D volumetric shapes (Xie et al., 2018, 2022b), unordered point clouds (Xie et al., 2021a), texts (Deng et al., 2020), molecules (Ingraham et al., 2019; Du et al., 2020), as well as image-to-image translation (Xie et al., 2022a, 2021b), unpaired cross-domain image translation (Xie et al., 2021b; Song et al., 2023), out-of-distribution detection (Liu et al., 2020), inverse optimal control (Xu et al., 2022), deep regression (Gustafsson et al., 2020), salient object detection (Zhang et al., 2022) and latent space modeling (Pang et al., 2020; Zhang et al., 2021, 2023). Yet, unlike VAE (Kingma and Welling, 2014) or GAN (Goodfellow et al., 2014) EBMs enjoy from a single structure requiring training (versus several networks) resulting in more stability. The use of implicit sampling techniques, such as MCMC, as detailed in the sequel, allows more flexibility by trading off sample quality for computation time. Overall, the _implicit_ property of the EBM, seen as an energy function, makes it a tool of choice as opposed to _explicit_ generators that are limited to some design choices, such as the choice of the prior distribution for VAEs or both neural networks design in GANs.
**MCMC procedures.** Whether for sampling from a posterior distribution (Robert et al., 2010; Han et al., 2017; Xie et al., 2019; Zhang et al., 2020; An et al., 2021; Zhu et al., 2023; Xie et al., 2023), or in general intractable likelihoods scenario (Doucet et al., 2000), various inference methods are available. Approximate inference is a partial solution to the inference problem and include techniques such as Variational Inference (VI) (Wainwright and Jordan, 2008; de Freitas et al., 2001) or Laplace Approximation (Wolfinger, 1993; Rue et al., 2009). Those methods allow the simplification of the intractable quantities and result in the collection of good, yet approximate, samples. As seen in (3), training an EBM requires obtaining samples from the model itself. Given the nonconvexity of the structural model \(f_{\theta}(\cdot)\) with respect to the model parameter \(\theta\), direct sampling is not an option. Besides, in order to update the model parameter \(\theta\), usually through gradient descent type of methods (Bottou et al., 2007), exact samples from the EBM are needed in order to compute a good approximation of its (intractable) gradient, see (3). To do so, we generally have recourse to MCMC methods.
MCMC are a class of inference algorithms that provide a principled iterative approach to obtain samples from any intractable distribution. While being exact, the samples generally represent a larger computation burden than methods such as VI. Increasing the efficiency of MCMC methods, by obtaining exact samples, in other words constructing a chain that converges faster, in fewer transitions is thus of utmost importance in the context of optimizing EBMs. Several attempts have been proposed for the standalone task of posterior sampling through the use of Langevin diffusion, see the Unadjusted Langevin in Brosse et al. (2019), the MALA algorithm in Roberts and Rosenthal (1998); Roberts and Tweedie (1996); Durmus et al. (2017) or leveraging Hamiltonian Dynamics as in Girolami and Calderhead (2011). We propose in the next section, an improvement of the Langevin diffusion with the ultimate goal of speeding the EBM training procedure. Our method includes this latter improvement in an end-to-end learning algorithms for Energy-Based models.
## 3 Gradient Informed Langevin Diffusion
We now introduce the main algorithmic contribution of our paper, namely STANLEY. STANLEY is a learning algorithm for EBMs, comprising a novel MCMC method for drawing samples from the intractable model (1). We provide theoretical guarantees of our scheme in Section 4.
### Preliminaries on Langevin MCMC based EBM
The state-of-the-art MCMC sampling algorithm, particularly used during the training procedure of EBMs, is the discretized Langevin diffusion, cast as Stochastic Gradient Langevin Dynamics (SGLD), see Welling and Teh (2011). In particular, several applications using EBM and SGLD have thrived in image generation, natural language processing or even biology (Du et al., 2020). Yet, the choice of the proposal, generally Gaussian, is critical for improving the performance of both the sampling step (inner loop of the whole procedure) and the EBM training. We recall the vanilla discretized Langevin diffusion used in the related literature as follows:
\[z_{k}=z_{k-1}+\frac{\gamma}{2}\nabla\log\pi_{\theta}(z_{k})+\sqrt{\gamma}B_{k }\,\]
where \(\pi_{\theta}(\cdot):=p(\cdot,\theta)\) is the target potential one needs samples from and defined in (1), \(z_{k}\) represents the states of the chains at iteration \(k\), _i.e._, the generated samples in the context of EBM, \(k\) is the MCMC iteration index and \(B_{k}\) is the Brownian motion, usually set as a Gaussian noise and which can be written as \(B_{k}:=\epsilon\,\xi_{k}\) where \(\xi_{k}\) is a standard Gaussian random variable and \(\epsilon\) is a scaling factor for implementation purposes. This method directs the proposed moves towards areas of high probability of the stationary distribution \(\pi_{\theta}\), for any \(\theta\in\Theta\), using the gradient of \(\log\pi_{\theta}\) and has been the object of several studies (Girolami and Calderhead, 2011; Cotter et al., 2013). In high dimensional and highly nonlinear settings, the burden of computing this gradient for a certain number of MCMC transitions leads to a natural focus: improving of the sampling scheme by assimilating information about the landscape of the target distribution while keeping its ease of implementation.
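In code, one MCMC transition of this scheme amounts to a single gradient step plus Gaussian noise. A minimal PyTorch-style sketch, with the gradient evaluated at the current state and \(B_{k}=\epsilon\,\xi_{k}\) as defined above, could read:

```python
import torch

def langevin_step(f_theta, z, gamma, epsilon):
    # log pi_theta(z) = f_theta(z) - log Z(theta), so grad_z log pi_theta = grad_z f_theta
    z = z.clone().requires_grad_(True)
    grad = torch.autograd.grad(f_theta(z).sum(), z)[0]
    noise = torch.randn_like(z)                      # xi_k, standard Gaussian
    return (z + 0.5 * gamma * grad + gamma ** 0.5 * epsilon * noise).detach()
```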
### STANLEY, an Anisotropic Energy Based Modeling Approach
Given the drawbacks of current MCMC methods used for training EBMs, we introduce a new sampler based on the Langevin updates presented above in Step 4 of Algorithm 1.
**Intuitions behind the efficacy of STANLEY:** Some past modifications have been proposed in particular to optimize the covariance matrix of the proposal of the general MCMC procedure in order to better stride the support of the target distribution. Langevin Dynamics is one example of those improvements where the proposal is a Gaussian distribution where the mean depends on the gradient of the log target distribution
and the covariance depends on some Brownian motion. For instance, in Atchade (2006); Marshall and Roberts (2012), the authors propose adaptive and geometrically ergodic Langevin chains. Yet, one important characteristic of our EBM problem is that for each model parameter updated through the training iterations, the target distribution moves and the proposal should take that adjustment into account. The techniques in Atchade (2006); Marshall and Roberts (2012) do not take full advantage of changing the proposal using the target distribution. In particular, the covariance matrix of the proposal is given by a stochastic approximation of the empirical covariance matrix. This choice is completely relevant as soon as convergence towards the stationary distribution is reached; in other words, it would make sense towards the end of the EBM training, as the target distributions from one model parameter to the next are similar. However, it does not provide a good guess of the variability during the first iterations, since it is still very dependent on the initialization.
Moreover, in Girolami and Calderhead (2011), the authors consider a constant approximation of the metric. Even though this simplification leads to ease of implementation, the curvature metric chosen by the authors needs to be inverted, a step that can be a computational burden if not intractable, especially in the case we are considering in our paper, _i.e.,_ a ConvNet-based EBM, where the high nonlinearity would lead to intractable expectations. Therefore, in (4) and (5) of Algorithm 1, we propose a variant of Langevin Dynamics, in order to sample from a target distribution, using a full anisotropic covariance matrix based on the anisotropy and correlations of the target distribution, see the \(\sqrt{\gamma_{t}}\mathsf{B}_{k}\) term.
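Since Algorithm 1 itself is not reproduced in the text, the exact form of the anisotropic update in its Lines 4-5 is not available here; the sketch below is therefore only an assumption about how a gradient-informed, coordinate-wise stepsize with threshold th could be wired into the Langevin update, and the actual STANLEY rule may differ:

```python
import torch

def stanley_like_step(f_theta, z, gamma, epsilon, th):
    # Hypothetical anisotropic rule: damp coordinates with large gradient magnitude,
    # with th acting as a floor so that flat directions are not amplified without bound.
    z = z.clone().requires_grad_(True)
    grad = torch.autograd.grad(f_theta(z).sum(), z)[0]
    gamma_k = gamma / torch.clamp(grad.abs(), min=th)   # one stepsize per coordinate
    noise = torch.randn_like(z)
    return (z + 0.5 * gamma_k * grad + torch.sqrt(gamma_k) * epsilon * noise).detach()
```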
## 4 Geometric Ergodicity of STANLEY
We will present our theoretical analysis for the Markov Chain constructed using Lines 3-4 of Algorithm 1. Let \(\Theta\) be a subset of \(\mathbb{R}^{d}\) for some integer \(d>0\). We denote by \(\mathcal{Z}\) the measurable space of \(\mathbb{R}^{\ell}\) for some integer \(\ell>0\). We define a family of stationary distributions \((\pi_{\theta}(z))_{\theta\in\Theta}\), probability density functions with respect to the Lebesgue measure on the measurable space \(\mathcal{Z}\). This family of p.d.f. defines the stationary distributions of our newly introduced sampler.
### Notations and Assumptions
For any chain state \(z\in\mathcal{Z}\), we denote by \(\Pi_{\theta}(z,\cdot)\) the transition kernel as defined in the STANLEY update in Line 4 of Algorithm 1. The objective of this section is to rigorously show that each transition kernel \(\Pi_{\theta}\), for any parameter \(\theta\in\Theta\), is geometrically ergodic and that this result holds for any compact subset \(\mathcal{C}\subset\mathcal{Z}\). As a background note, a Markov chain, as constructed in Line 4, is said to be geometrically ergodic when \(k\) iterations of the same transition kernel converge to the stationary distribution of the chain with a geometric dependence on \(k\).
As in Allassonniere and Kuhn (2015), we state the assumptions required for our analysis. The first one is related to the continuity of the gradient of the log posterior and the unit vectors pointing in the direction of the sample \(z\) and in the direction of the gradient of the log posterior distribution at \(z\):
**H1**.: _For all \(\theta\in\Theta\), the structural model \(f_{\theta}(\cdot)\) satisfies:_
\[\lim_{|z|\to\infty}\frac{z}{|z|}\cdot\nabla f_{\theta}(z)=-\infty\,,\qquad\lim_{|z|\to\infty}\frac{z}{|z|}\cdot\frac{\nabla f_{\theta}(z)}{|\nabla f_{\theta}(z)|}<0\.\]
Besides, we assume some regularity conditions of the stationary distributions with respect to \(\theta\):
**H2**.: \(\theta\to\pi_{\theta}\) _and \(\theta\to\nabla\log\pi_{\theta}\) are continuous on \(\Theta\)._
For a positive and finite function noted \(V:\mathcal{Z}\mapsto\mathbb{R}\), we define the V-norm distance between two arbitrary transition kernels \(\Pi_{1}\) and \(\Pi_{2}\) as follows:
\[\|\Pi_{1}-\Pi_{2}\|_{V}:=\sup_{z\in\mathcal{Z}}\frac{\|\Pi_{1}(z,\cdot)-\Pi_{ 2}(z,\cdot)\|_{V}}{V(z)}\.\]
The definition of this norm allows us to establish a convergence rate for our sampling method by deriving an upper bound of \(\|\Pi_{\theta}^{k}-\pi_{\theta}\|_{V}\) where \(k>0\) denotes the number of MCMC transitions. We recall that \(\Pi_{\theta}\) is the transition kernel defined by Line 4 of Algorithm 1 and \(\pi_{\theta}\) is the stationary distribution of our Markov chain at a given EBM model \(\theta\). This quantity characterizes how close to the target distribution our chain is getting, after a finite time of iterations and will eventually formalize the _V-uniform ergodicity_ of our method. We specify that strictly speaking, \(\pi_{\theta}\) is a probability measure, and not a transition kernel. However \(\|\Pi_{\theta}^{k}-\pi_{\theta}\|_{V}\) is well-defined if we consider \(\pi_{\theta}\) as a kernel:
\[\pi(z,\mathcal{C}):=\pi(\mathcal{C})\quad\text{for}\quad\mathcal{C}\subset \mathcal{Z},\quad z\in\mathcal{Z}\.\]
Here, for some \(\beta\in]0,1[\) we define the \(V_{\theta}\) function, also know as the _drift_, for all \(z\in\mathcal{Z}\) as follows:
\[V_{\theta}(z):=c_{\theta}\pi_{\theta}(z)^{-\beta}\, \tag{6}\]
where \(c_{\theta}\) is a constant, with respect to the chain state \(z\), such that for all \(z\in\mathcal{Z}\), \(V_{\theta}(z)\geq 1\). Note that the V norm depends on the chain state noted \(z\)_and_ of the global model parameter \(\theta\) varying through the
optimization procedure. Yet, in both main results, the ergodicity and the convergence rate, including the underlying drift condition, are established uniformly on the parameter space \(\Theta\). We also define the auxiliary functions, independent of the parameter \(\theta\) as:
\[V_{1}(z):=\inf_{\theta\in\Theta}V_{\theta}(z)\quad\text{and}\quad V_{2}(z):=\sup _{\theta\in\Theta}V_{\theta}(z)\, \tag{7}\]
and assume the following:
**H3**.: _There exists a constant \(a_{0}>0\) such that for all \(\theta\in\Theta\) and \(z\in\mathcal{Z}\), the function \(V_{2}^{a_{0}}(z)\), defined in (7), is integrable against the kernel \(\Pi_{\theta}(z,\cdot)\) and we have_
\[\limsup_{a\to 0}\sup_{\theta\in\Theta,z\in\mathcal{Z}}\Pi_{\theta}V_{2}^{a}(z)=1\.\]
### Convergence Results
The result consists in showing V-uniform ergodicity of the chain, the irreducibility of the transition kernels and their aperiodicity, following Meyn and Tweedie (2012); Allassonniere and Kuhn (2015). We also prove a drift condition which states that the transition kernels tend to bring elements back into a small set. The V-uniform ergodicity of the transition kernels \((\Pi_{\theta})_{\theta\in\Theta}\) then follows from this drift condition.
**Important Note:** The stationary distributions depend on \(\theta\in\Theta\) as they vary at each model update during the EBM optimization phase. Thus uniform convergence of the chain is important in order to characterize the sampling phase _throughout the entire training phase_. Particularly at the beginning, the shape of the distributions one needs to sample from varies a lot from a parameter to another.
Theorem 1 shows two important convergence results for our sampling method. First, it establishes the existence of a small set \(\mathcal{O}\) leading to the crucially needed aperiodicity of the chain and ensuring that each transition moves towards a better state. Then, it provides a uniform ergodicity result of our sampling method in STANLEY, via the so-called _drift condition_ providing the guarantee that our transition kernels \((\Pi_{\theta})_{\theta\in\Theta}\) attract the states into the small set \(\mathcal{O}\). Moreover, the independence of \(V\) in (9) from the EBM model parameter \(\theta\) leads to _uniform_ ergodicity, as shown in Corollary 1.
**Theorem 1**.: _Assume H1-H3. For any \(\theta\in\Theta\), there exists a drift function \(V_{\theta}\), a set \(\mathcal{O}\subset\mathcal{Z}\), a constant \(0<\epsilon\leq 1\) such that_
\[\Pi_{\theta}(z,\mathcal{B})\geq\epsilon\int_{\mathcal{B}}\mathbbm{1}_{ \mathcal{X}}(z)\text{d}y. \tag{8}\]
_Moreover there exists \(0<\mu<1\), \(\delta>0\) and a drift function \(V\), independent of \(\theta\) such that for all \(z\in\mathcal{Z}\):_
\[\Pi_{\theta}V(z)\leq\mu V(z)+\delta\mathbbm{1}_{\mathcal{O}}(z). \tag{9}\]
**Corollary 1**.: _Assume H1-H3. A direct consequence of Theorem 1 is that the family of transition kernels \((\Pi_{\theta})_{\theta\in\Theta}\) is uniformly ergodic, i.e., for any compact \(\mathcal{C}\subset\mathcal{Z}\), there exist constants \(\rho\in]0,1[\) and \(e>0\) such that for any MCMC iteration \(k>0\), we have:_
\[\sup_{z\in\mathcal{C}}\|\Pi_{\theta}^{k}u(\cdot)-\pi_{\theta}u(\cdot)\|_{V} \leq e\rho^{k}\|u\|_{V}\, \tag{10}\]
_where \(V\) is the drift function in Theorem 1 and \(u(\cdot)\) is any bounded function we apply a transition to._
While Theorem 1 is critical for proving the aperiodicity and irreducibility of the chain, we establish the geometric convergence speed of the chain. We do not only show the importance of the _uniform_ ergodicity of the chain, which makes it appealing for the EBM training since the model parameter \(\theta\) is often updated, but we also derive a geometrical rate in Corollary 1.
We encourage the readers to read through the sketch of the main Theorem of our paper provided on the first page of the supplemental as we give the important details leading to the desired ergodicity results. Those various techniques are common in the MCMC literature and we refer the readers to several MCMC handbooks such as Neal (2011); Meyn and Tweedie (2012) for more understanding.
## 5 Numerical Experiments
We conduct a collection of experiments to show the effectiveness of our method, on both synthetic and real datasets. After verifying the advantage of STANLEY on a Gaussian Mixture Model (GMM) retrieving the synthetic data observations, we then investigate its performance when learning a distribution over high-dimensional natural images such as pictures of flowers, see the Flowers dataset in Nilsback and Zisserman (2008), or general concepts featured in CIFAR-10 (Krizhevsky and Hinton, 2009). For both methods, we use the Frechet Inception Distance (FID) as a reliable performance metric, as detailed in Heusel et al. (2017). In the sequel, we tune the learning rates over a fine grid and report the best result for all methods. For our method STANLEY, the threshold parameter th, crucial for the implementation of the stepsize (4), is tuned over a grid search as well. As mentioned above, we also define the Brownian motion as \(B_{k}:=\epsilon\,\xi_{k}\), and tune the scaling factor \(\epsilon\) for better performance.
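The FID compares the Gaussian statistics of Inception features of real and generated images. A standard computation from precomputed feature matrices, following Heusel et al. (2017) rather than any code from this paper, is:

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    # feats_*: arrays of shape [N, d] of Inception activations
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2).real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2.0 * covmean))
```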
### Toy Example: Gaussian Mixture Model
**Datasets.** We first demonstrate the outcomes of both methods including our newly proposed STANLEY for low-dimensional toy distributions. We generate synthetic 2D rings data and use an EBM to learn the true data distribution and put it to the test of generating new synthetic samples.
**Methods and Settings.** We consider two methods. Both are run with _nonconvergent_ MCMC, _i.e.,_ we do not require convergence to the stationary distribution of the Markov chains. The number of MCMC transitions is set to \(K=100\) per EBM iteration. We use a standard deviation of \(0.15\) as in Nijkamp et al. (2020). Both methods have a constant learning rate of \(0.14\). The value of the threshold th for our STANLEY method is set to \(\textsf{th}=0.01\). The total number of EBM iterations is set to \(T=10\,000\). The global learning rate \(\eta\) is set to a constant equal to \(0.0001\).
**Network architectures.** For the backbone of the EBM model, noted \(f_{\theta}(\cdot)\) in (1), we chose a CNN of \(5\) 2D convolutional layers and Leaky ReLU activation functions, with the leakage parameter set to \(0.05\). The number of hidden neurons varies between \(32\) and \(64\).
**Results.** We observe in Figure 1 the outputs of both methods on the toy dataset. While both achieve a good representation of the truth after a large number of iterations, we notice that STANLEY learns an energy that closely approximates the true density during the first thousands of iterations of the training process. The sharpness of the data generated by STANLEY in the first iterations shows an empirically better ability to sample from the 2D dataset.
### Image Generation
**Datasets.** We run our method and several baselines detailed below on the _CIFAR-10_ dataset (Krizhevsky and Hinton, 2009) and the _Oxford Flowers 102_ dataset (Nilsback and Zisserman, 2008). _CIFAR-10_ is a popular computer-vision dataset of \(50\,000\) training images and \(10\,000\) test images, of size \(32\times 32\). It is composed of tiny natural images representing a wide variety of objects and scenes, making the task of self supervision supposedly harder. The _Oxford Flowers 102_ dataset is composed of 102 flower categories. According to the dataset authors, the images have large scale, pose and light variations, making the task of generating new samples particularly challenging.
**Methods and Settings for the Flowers dataset.** Nonconvergent MCMC is also used in this experiment and the number of MCMC transitions is set to \(K=50\). The global learning rate of the gradient descent update is set to \(0.001\) for both methods. We run each method for \(T=100\,000\) iterations and plot the results using the final vector of fitted parameters.
**Methods and Settings for CIFAR-10.** We employ the same nonconvergent MCMC strategies for this experiment. The value of the threshold th for our STANLEY method is set to \(\mathsf{th}=0.0002\). The total number of EBM iterations is set to \(T=100\,000\). The global learning rate \(\eta\) is set to a constant equal to \(0.0001\). In this experiment, we slightly change the last step of our method in Algorithm 1. Indeed, Line 11 in Algorithm 1 is not a plain Stochastic Gradient Descent here but we rather use the Adam optimizer (Kingma and Ba, 2015). The scaling factor of the Brownian motion is \(0.01\).
**Network architectures for both.** The backbone of the energy function for this experiment is a vanilla ConvNet composed of \(3\times 3\) convolution layers with stride \(1\). Five convolutional layers using ReLU activation functions are stacked.
Figure 1: (Rings Toy Dataset) Top: our method, namely STANLEY. Bottom: vanilla Langevin Dynamics. Methods are used with the same backbone architecture. Generated samples are plotted every \(2\,000\) iterations.
**Results.** (_Flowers_) Visual results are provided in Figure 2, where we have used both methods to generate synthetic images of flowers. Every \(5\,000\) iterations, we sample \(10\,000\) synthetic images from the EBM model under the current vector of parameters and use the same number of data observations to compute the FID similarity score, as advocated in Heusel et al. (2017). The evolution of the FID values is reported in Figure 3 (Left) through the iterations. We note that our method outperforms the other baselines at every iteration threshold, including the vanilla Langevin (in blue), which is an ablated form of STANLEY (no adaptive stepsize).
Figure 3: (FID values per method against 100k iterations elapsed). Left: Oxford Flowers dataset. Right: CIFAR-10.
Figure 2: Left: Langevin Method. Right: STANLEY. After 100k iterations.
(_CIFAR-10_) Visual results are provided in Figure 4, where we have used both methods to generate synthetic images. The FID values are reported in Figure 3 (Right) and have been computed using \(10\,000\) synthetic images from each model. The similarity score is then evaluated every \(5\,000\) iterations. While the FID curves for the Flowers dataset exhibit a superior performance of our method throughout the training procedure, we notice that in the case of CIFAR-10, the vanilla method seems to be slightly better than STANLEY during the first iterations, _i.e.,_ when the model is still learning the representation of the images. Yet, after a certain number of iterations, we observe that STANLEY leads to more accurate synthetic images. _This behavior can be explained by the importance of incorporating curvature informed metrics into the training process when the parameter reaches a neighborhood of the solution._
### Image Inpainting
The image inpainting experiment aims to fill missing regions of a damaged image with synthesized content.
**Datasets.** We use the CelebA dataset (Liu et al., 2015) to evaluate our learning algorithm, which contains more than 200k RGB color facial images. We use 100k images for training and 100 images for testing.
**Methods and Settings.** Nonconvergent MCMC is also used in this experiment and the number of MCMC transitions is set to \(K=50\). The global learning rate of the gradient descent update is set to \(0.01\). We run each method for \(T=50\,000\) iterations and plot the results using the final vector of fitted parameters.
**Results.** Figure 5 displays the FID curves for all methods. We note that along the iterations, STANLEY outperforms the other baselines and is similar to HMC, while only requiring first-order information for the computation of the stepsize, whereas HMC computes second-order quantities. Even with second-order information, the HMC samples do not lead to a better FID.
Figure 4: 1: Langevin 2: STANLEY 3: MH 4: HMC 5: GD without noise. After 100k iterations.
Figure 5: FID values per method against 50k iterations elapsed.
Figure 6 shows a visual comparison of different samples between our method and its ablated form, _i.e.,_ the vanilla Langevin sampler based EBM. In addition to a quantitative metric comparison, visual checks provide empirical insight into the effectiveness of adding a curvature-informed stepsize in the sampler of our generative model, as in STANLEY.
## 6 Conclusion
We propose in this paper an improvement of MCMC-based Energy-Based models. In the particular case of a highly nonlinear structural model of the EBM, more precisely a Convolutional Neural Network in our paper, we tackle the complex task of sampling negative samples from the energy function. The multi-modal and highly curved landscape one must sample from inspires our technique, called STANLEY, based on Stochastic Gradient Anisotropic Langevin Dynamics, which updates the Markov Chain using an anisotropic stepsize in the vanilla Langevin update. We provide strong theoretical guarantees for our novel method, including uniform ergodicity and a geometric convergence rate of the transition kernels to the stationary distribution of the chain. Our method is tested on several benchmark image generation tasks, including both toy and real datasets.
|
2307.11979 | Magnetism and superconductivity in doped triangular-lattice Mott
insulators | Inspired by recent advances in the fabrication of surface superlattices, and
in particular the triangular lattice made of tin (Sn) atoms on silicon, we
study an extended Hubbard model on a triangular lattice. The observations of magnetism
magnetism in these systems justify the inclusion of a strong on-site repulsion
and the observation of superconductivity suggests including an effective,
nearest-neighbor attractive interaction. The attractive interaction mimics the
effect of strong on-site repulsion near half filling, which can be seen in
strong coupling vertex calculations such as the Eliashberg method. With this
extended Hubbard model on a triangular lattice with its geometrical
frustration, we find a rich phase diagram of various magnetic orders and
pairing functions, within the framework of self-consistent mean field theory.
We uncover the competition among magnetism and unconventional
superconductivity, and their coexistence for triplet pairings. We follow the
Fermi surface of the system as the system is doped away from half filling and
find nesting vectors and a Lifshitz transition which provide an intuitive
understanding of the phase transitions between the many orders we consider. | Kun Woo Kim, T. Pereg-Barnea | 2023-07-22T04:35:41Z | http://arxiv.org/abs/2307.11979v1 | # Magnetism and superconductivity in doped triangular-lattice Mott insulators
###### Abstract
Inspired by recent advances in the fabrication of surface superlattices, and in particular the triangular lattice made of tin (Sn) atoms on silicon, we study an extended Hubbard model on a triangular lattice. The observations of magnetism in these systems justify the inclusion of a strong on-site repulsion and the observation of superconductivity suggests including an effective, nearest-neighbor attractive interaction. The attractive interaction mimics the effect of strong on-site repulsion near half filling, which can be seen in strong coupling vertex calculations such as the Eliashberg method. With this extended Hubbard model on a triangular lattice with its geometrical frustration, we find a rich phase diagram of various magnetic orders and pairing functions, within the framework of self-consistent mean field theory. We uncover the competition among magnetism and unconventional superconductivity, and their coexistence for triplet pairings. We follow the Fermi surface of the system as the system is doped away from half filling and find nesting vectors and a Lifshitz transition which provide an intuitive understanding of the phase transitions between the many orders we consider.
## I Introduction
Recent theoretical studies of superconductivity in triangular lattice systems have been motivated by a series of experimental investigations [1; 2; 3; 4] where Sn adatoms placed on the surface of Si(111) form a two-dimensional triangular lattice, showing some evidence for unconventional chiral d-wave superconductivity. The magnetic ordering of the same superlattice system has been intensively studied as well, both theoretically and experimentally [5; 6; 7; 8]. These kinds of systems combine the geometric frustration of the triangular lattice with strong electronic correlations and can, at least in theory, bring about a variety of states of matter such as a spin liquid, collinear antiferromagnet and spiral magnetic order at half filling, as well as several unconventional superconducting orders away from half filling. Here, we explore the competition between magnetism and superconductivity in a wide range of doping and interaction strengths.
Previous studies address magnetic orders on the triangular lattice [9; 10; 11; 12], while others investigate the possibility of unconventional superconductivity [13; 14; 15; 16; 17] and point to the possibility of both singlet and triplet superconductors with and without topological numbers. In this manuscript we focus on the competition between many order parameters, both magnetic and superconducting. We do so in self-consistent mean field theory where the on-site Hubbard \(U\) interaction favors magnetic order, while pairing on bonds is favored by an attraction term on nearest neighbor sites. This attraction is an effective description of strong correlations: the result of Coulomb repulsion and Fermi surface nesting [14]. The tight-binding model on the triangular lattice includes long-range hopping, which sharpens the saddle points of the dispersion relation. We confirm not only the appearance of chiral d-wave superconductivity and collinear antiferromagnetic ordering as reported in experiments, but also find triplet pairing and spiral magnetic ordering at other fillings and interaction strengths. Crucially, we provide a direct and intuitive understanding of the magnetic phases from Fermi surface nesting, and can relate the favored superconducting state to a synergy between the Fermi surface and the pairing function such that nodes are avoided and gaps are maximized.
The organization of the manuscript is the following. In section II we introduce a model with on-site repulsion and nearest neighbor attraction, and then the mean field Hamiltonian with magnetism and superconductivity is constructed. In section III we construct the grand potential and derive the self consistency relations for multiple order parameters. In sections IV and V we present our results and discuss them.
## II Extended Hubbard model on a triangular lattice
The kinetic part of our Hamiltonian is composed of tight binding hopping parameters \(t_{l}\) between the \(l^{\text{th}}\) neighbors on the triangular lattice, proposed to match ARPES data in Ref. [8]. The parameters were chosen to fit the lowest energy band of the Sn/Si(111) surface:
\[\hat{H}_{0}=\sum_{l=1}^{6}t_{l}\sum_{\langle ij\rangle_{l}}\hat{c}_{i}^{ \dagger}\hat{c}_{j}, \tag{1}\]
where \(t_{1}=-52.7\)meV, and the longer-range hopping amplitudes \(t_{2}/t_{1},...,t_{6}/t_{1}\) are -0.3881, 0.1444, -0.0228, 0 and -0.0318, respectively. The indices \(\langle i,j\rangle_{l}\) run over all pairs of \(l\)th nearest neighbor sites. The Fermi surface is depicted as a function of filling in Fig. 1(b) along with the density of states (DoS) as a function of energy.
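As a concrete illustration of Eq. (1) (not part of the original analysis), the short Python sketch below evaluates the dispersion and a crude DoS on a momentum mesh. The lattice constant is set to \(a=1\), the \(l\)-th neighbor shells are generated by sorting lattice vectors by distance, and the hopping values are the ones quoted above; the mesh size and binning are arbitrary choices.

```python
import numpy as np

# Triangular-lattice primitive vectors (lattice constant a = 1, an assumption).
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])

# Group lattice vectors into the first six neighbor shells by distance.
pts = [n1 * a1 + n2 * a2 for n1 in range(-4, 5) for n2 in range(-4, 5)
       if (n1, n2) != (0, 0)]
dists = np.round([np.linalg.norm(p) for p in pts], 6)
shells = [np.array([p for p, d in zip(pts, dists) if d == r])
          for r in sorted(set(dists))[:6]]

# Hopping amplitudes quoted in the text: t1 in meV, t2..t6 relative to t1.
t1 = -52.7
t = t1 * np.array([1.0, -0.3881, 0.1444, -0.0228, 0.0, -0.0318])

def epsilon(kx, ky):
    """eps(k) = sum_l t_l sum_{delta in shell l} exp(i k . delta), Fourier transform of Eq. (1)."""
    k = np.stack([kx, ky], axis=-1)
    e = np.zeros(np.shape(kx), dtype=complex)
    for tl, shell in zip(t, shells):
        e = e + tl * np.exp(1j * k @ shell.T).sum(axis=-1)
    return e.real  # each shell is inversion symmetric, so eps(k) is real

# Sample one reciprocal unit cell and histogram the band energies (crude DoS).
b1 = 2 * np.pi * np.array([1.0, -1.0 / np.sqrt(3)])
b2 = 2 * np.pi * np.array([0.0,  2.0 / np.sqrt(3)])
u = np.linspace(0, 1, 200, endpoint=False)
U, V = np.meshgrid(u, u)
K = U[..., None] * b1 + V[..., None] * b2
E = epsilon(K[..., 0], K[..., 1])
dos, edges = np.histogram(E.ravel(), bins=120, density=True)
print("band bottom / top (meV):", E.min(), E.max())
```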
We include the onsite and extended Hubbard interaction terms:
\[\hat{H}_{\text{int}}=\sum_{i}U_{0}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}+\sum_{ \langle ij\rangle,\sigma\sigma^{\prime}}U_{1}\hat{n}_{i\sigma}\hat{n}_{j\sigma^ {\prime}}, \tag{2}\]
such that \(U_{0}>0\) is repulsive, and \(U_{1}<0\) is attractive. The on-site repulsive interaction favors magnetism. It is convenient to express the Hubbard \(U_{0}\) term as (Appendix A):
\[\hat{H}^{\text{(on)}}=\sum_{i}U_{0}\left[\frac{1}{4}\hat{n}_{i}\hat{n}_{i}- \hat{S}_{im}\hat{S}_{im}\right], \tag{3}\]
where \(\hat{n}_{i}=\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow}\) is the occupancy at site \(i\), with \(\langle\hat{n}_{i}\rangle\in[0,2]\). The spin operator at site \(i\) in direction \(\hat{m}\) is defined as \(\hat{S}_{im}=\frac{1}{2}\sum_{\sigma\sigma^{\prime}}\hat{c}^{\dagger}_{i \sigma}(\vec{\sigma}\cdot\hat{m})_{\sigma\sigma^{\prime}}\hat{c}_{i\sigma^{ \prime}}\), where \(\vec{\sigma}\) is the vector of Pauli matrices. We can therefore define a local spin order parameter as \(\vec{m}=\langle\hat{S}_{im}\rangle\), where the direction of \(\vec{m}\) may give either ferromagnetic order or antiferromagnetic (AF) order with collinear or spiral spin directions as depicted in Fig.2(b). The magnitude \(m=|\vec{m}|\) represents the strength of the order. The mean field Hamiltonian is then given by:
\[\hat{H}^{\text{(on)}}_{\text{MF}}= -U_{0}N_{\text{lat}}\left[\frac{1}{4}\langle\hat{n}_{i}\rangle^{2} -|\vec{m}|^{2}\right]\] \[+U_{0}\sum_{i}\left(\hat{c}^{\dagger}_{i\uparrow}\quad\hat{c}^{ \dagger}_{i\downarrow}\right)\left[\frac{1}{2}\langle\hat{n}_{i}\rangle\sigma _{0}-\vec{m}\cdot\vec{\sigma}\right]\begin{pmatrix}\hat{c}_{i,\uparrow}\\ \hat{c}_{i,\downarrow}\end{pmatrix}, \tag{4}\]
where the first term describes the onsite potential energy cost of having the magnetic order while the second term provides a possible energy benefit from a spin splitting. Note that the band energy shifts by \(\frac{1}{2}\langle\hat{n}_{i}\rangle\sigma_{0}\) due to the mean repulsive energy from the onsite interaction. To take into account the collinear and spiral AF magnetism we extended the unit cell to include 2 or 3 atoms with rotated spins in our calculations.
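As a quick check of the structure of Eq. (4) (an illustration only, with arbitrary example values for \(U_{0}\), \(\langle\hat{n}_{i}\rangle\) and \(\vec{m}\)), the on-site mean-field block splits the band by \(\pm U_{0}|\vec{m}|\) around the Hartree shift \(U_{0}\langle\hat{n}_{i}\rangle/2\), independently of the direction of \(\vec{m}\):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

U0, n_avg = 6.0, 1.0                       # example values (in units of |t1|)
m = np.array([0.2, -0.1, 0.3])             # local spin order parameter <S_i>

# On-site mean-field block of Eq. (4): U0 [ <n>/2 * s0 - m . sigma ].
h_on = U0 * (0.5 * n_avg * s0 - (m[0] * sx + m[1] * sy + m[2] * sz))
print(np.linalg.eigvalsh(h_on))            # equals U0*<n>/2 -+ U0*|m|
print(U0 * (0.5 * n_avg - np.linalg.norm(m)), U0 * (0.5 * n_avg + np.linalg.norm(m)))
```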
The attractive interaction \(U_{1}<0\) can induce superconductivity or a charge density wave. We focus on superconductivity using a BCS-like self-consistent mean field:
\[\hat{H}^{\text{(nn)}}_{\text{MF}}= -U_{1}\sum_{\langle ij\rangle,\sigma\sigma^{\prime}}|\Delta_{ji, \sigma^{\prime}\sigma}|^{2}\] \[+U_{1}\sum_{\langle ij\rangle,\sigma,\sigma^{\prime}}\left[c^{ \dagger}_{i,\sigma}c^{\dagger}_{j,\sigma^{\prime}}\Delta_{ji,\sigma^{\prime} \sigma}+\Delta^{\ast}_{ij,\sigma\sigma^{\prime}}c_{j\sigma^{\prime}}c_{i\sigma }\right]. \tag{5}\]
The superconducting order parameter \(\Delta_{ji,\sigma^{\prime}\sigma}=\langle c_{j\sigma^{\prime}}c_{i\sigma}\rangle\) may describe either singlet or triplet spin pairing with spatial symmetry of \(s\)-, \(p\)-, \(d\)- or \(f\)-wave. We consider superconductivity and magnetism together to determine which combination of order parameters yields the lowest grand potential. It is worth noting that in order to consider superconductivity and magnetism simultaneously, the Hamiltonian in Eq.(4) needs to be written in the Bogoliubov-de Gennes (BdG) form with particle-hole symmetry (see Appendix B). The mean field Hamiltonian
Figure 1: (a) A triangular lattice with six nearest neighbors of the central atom (in red) marked with different colors. (b) The dispersion relation \(E(k_{x},k_{y})\) and the filling \(\nu=\langle\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow}\rangle\equiv\langle \hat{n}_{i}\rangle\). Energies are in units of the nearest-neighbor hopping amplitude, \(|t_{1}|=52.8\) meV. (c) The relation between filling and energy, and between the DoS and energy.
is
\[\hat{H}_{\rm MF}=E_{0}(n_{i},\vec{m},\tilde{\Delta}_{ji,\sigma^{ \prime}\sigma}^{(s/t)})\] \[+\frac{1}{2}\sum_{k}^{BZ/2}\Psi^{\dagger}\begin{pmatrix}H_{kk}& &\tilde{\Delta}_{k,-k}\\ &H_{-k,-k}&\tilde{\Delta}_{-k,k}\\ &-\tilde{\Delta}_{k,-k}^{*}&-H_{kk}^{*}\\ -\tilde{\Delta}_{-k,k}^{*}&&-H_{-k-k}^{*}\end{pmatrix}\Psi, \tag{6}\]
where \(\tilde{\Delta}_{k^{\prime}k^{\prime\prime}}=U_{1}\Delta_{k^{\prime}k^{\prime \prime}}\) and \(H_{k^{\prime}k^{\prime\prime}}\) are 2 by 2 matrices in spin space. The BdG Hamiltonian is written in the basis \(\Psi^{\dagger}=\left(c_{k\uparrow}^{\dagger},c_{k\downarrow}^{\dagger},c_{- k\uparrow}^{\dagger},c_{-k\downarrow}^{\dagger},c_{k\uparrow},c_{k \downarrow},c_{-k\uparrow},c_{-k\downarrow}\right)\). The constant energy \(E_{0}(n_{i},\vec{m},\tilde{\Delta}_{ji,\sigma^{\prime}\sigma}^{(s/t)})\) contains the usual BCS ground state energy as well as terms resulting from the anti-commutation relation of the operators which compose the magnetic order parameters:
\[E_{0} =-U_{0}N_{\rm lat}\left[\frac{1}{4}\langle\hat{n}_{i}\rangle^{2}- |\vec{m}|^{2}\right]\] \[+\frac{1}{2}U_{0}N_{\rm lat}\langle\hat{n}_{i}\rangle-U_{1}\sum_{ \langle ij\rangle,\sigma\sigma^{\prime}}|\Delta_{ji,\sigma^{\prime}\sigma}|^ {2}, \tag{7}\]
The Hamiltonian written in this structure visibly satisfies the particle hole symmetry which is represented by the operator \(\mathcal{P}=K\tau_{x}\), where \(K\) is complex conjugation and the Pauli matrix \(\tau_{x}\) exchanges particles and holes, such that \(\mathcal{P}H_{BdG}\mathcal{P}^{-1}=-H_{BdG}\) (see Appendix B for the discussion of the PHS in the basis \(\Psi^{\dagger}\)). The block diagonal Hamiltonian in Eq. (6) is
\[H_{kk}=\left[\epsilon_{\vec{k}}+\frac{1}{2}U_{0}\langle\hat{n}_{i}\rangle \right]\sigma_{0}-\vec{m}\cdot\vec{\sigma}, \tag{8}\]
where \(\epsilon_{\vec{k}}\) is the Fourier transform of Eq. (1) (see [18] for its explicit expression). The summation over crystal momentum runs over half of the Brillouin zone because our basis contains both \(|k\sigma\rangle\) and \(|{-}k\sigma\rangle\) states. The 8 by 8 BdG Hamiltonian in the full basis is block diagonal, with two 4 by 4 blocks whose eigenvalues have opposite signs. Note that one could choose to either work with this \(8\times 8\) block-diagonal Hamiltonian and sum over half of the Brillouin zone or work with only one of the blocks and sum over the entire Brillouin zone.
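To make the block structure explicit, the following sketch (not from the paper; all numerical values are placeholders) assembles one \(4\times 4\) sub-block of Eq. (6) for a singlet pairing entry \(\tilde{\Delta}_{k,-k}=\Delta(k)\,i\sigma_{y}\) and a magnetization along \(z\), checks that it is Hermitian, and returns the quasiparticle energies \(\zeta_{k,\alpha}\); the companion sub-block carries the opposite-sign spectrum.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bdg_block(eps_k, eps_mk, delta_k, mu, U0, n_avg, mz):
    """One 4x4 sub-block of Eq. (6): [[H_kk - mu, D], [-D*, -(H_{-k,-k} - mu)*]],
    with a singlet pairing entry D = delta_k * (i sigma_y) and m along z (Eq. (4) convention)."""
    h_k  = (eps_k  + 0.5 * U0 * n_avg - mu) * s0 - U0 * mz * sz
    h_mk = (eps_mk + 0.5 * U0 * n_avg - mu) * s0 - U0 * mz * sz
    D = delta_k * (1j * sy)
    return np.block([[h_k,        D],
                     [-D.conj(), -h_mk.conj()]])

# Placeholder numbers in units of |t1|.
A = bdg_block(eps_k=-0.8, eps_mk=-0.8, delta_k=0.3, mu=0.1, U0=6.0, n_avg=1.0, mz=0.05)
assert np.allclose(A, A.conj().T)          # the sub-block is Hermitian
print(np.linalg.eigvalsh(A))               # quasiparticle energies zeta_{k,alpha}
```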
\[(\Delta_{k,-k})_{\sigma\sigma^{\prime}}=\sum_{\vec{\xi}_{ji}}\Delta_{ji, \sigma^{\prime}\sigma}e^{-i\vec{k}\cdot\vec{\xi}_{ji}},\]
where the sum is over the six nearest neighbor vectors, \(\vec{\xi}_{ji}=\vec{r}_{j}-\vec{r}_{i}\in\{(1,0),(\frac{1}{2},\frac{\sqrt{3}}{2}),(-\frac{1}{2},\frac{\sqrt{3}}{2}),(-1,0),(-\frac{1}{2},-\frac{\sqrt{3}}{2}),(\frac{1}{2},-\frac{\sqrt{3}}{2})\}\), with angles \(\theta_{ji}\in\{0,\frac{\pi}{3},\frac{2\pi}{3},\pi,\frac{4\pi}{3},\frac{5\pi}{3}\}\) relative to the \(x\)-axis. The symmetry of the pairing function in real space (odd or even with respect to exchanging the sites \(i\) and \(j\)) determines whether the spins of the Cooper pair are in the singlet or triplet configuration, such that \(\Delta_{ji,\sigma\sigma^{\prime}}\) is antisymmetric to the exchange of both position and spin.
\[\Delta_{ji,\sigma\sigma^{\prime}}=\chi_{\sigma\sigma^{\prime}}^{\rm(s/t)}\otimes \phi_{ji}^{\rm(even/odd)}. \tag{9}\]
and:
\[\chi^{(s)}=\begin{pmatrix}0&+\Delta_{s}\\ -\Delta_{s}&0\end{pmatrix},\ \chi^{(t)}=\begin{pmatrix}\Delta_{\uparrow\uparrow}& \Delta_{t}\\ \Delta_{t}&\Delta_{\downarrow\downarrow}\end{pmatrix}, \tag{10}\]
We label the spatial pairing functions by their angular momentum; the phase winding number around a central atom such that for s-wave \(\phi_{ji}^{(s)}=1\) and for angular momentum \(l\) a chiral pairing function winds \(l\) times and is given by \(\phi_{jj^{\prime}}^{(l)}\propto e^{il\theta_{jj^{\prime}}}\). However, since we do not want to impose chirality a priori, we minimize the mean field energy with two pairing functions for each angular momentum:
\[\phi_{ji}^{(p_{x})}=\cos\theta_{ji}, \phi_{ji}^{(p_{y})}=\sin\theta_{ji}, \tag{11}\] \[\phi_{ji}^{(d_{x^{2}-y^{2}})}=\cos 2\theta_{ji}, \phi_{ji}^{(d_{xy})}=\sin 2\theta_{ji},\] (12) \[\phi_{ji}^{(f_{x(x^{2}-3y^{2})})}=\cos 3\theta_{ji}, \phi_{ji}^{(f_{y(3x^{2}-y^{2})})}=\sin 3\theta_{ji}, \tag{13}\]
such that the chiral \(p\)-, \(d\)-, or \(f\)-wave is obtained if both pairing functions are non-zero and there's a \(\pi/2\) phase difference between them. Otherwise we end up with a non-chiral state. In the case of \(f\)-wave, the \(f_{y(3x^{2}-y^{2})}\) is zero for nearest neighbor links in the triangular lattice and we therefore end up with a non-chiral, real order parameter of the form of \(f_{x(x^{2}-3y^{2})}\). A chiral \(f\)-wave order would require longer range attraction.
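For concreteness, the bond form factors of Eqs. (11)-(13) and the resulting momentum-space pairing \(\Delta(k)=\sum_{\delta}\phi_{\delta}e^{-ik\cdot\delta}\) can be tabulated directly on the six nearest-neighbor bonds (a sketch only, with lattice constant \(a=1\) and an arbitrary test momentum). It also shows explicitly that \(\sin 3\theta_{ji}\) vanishes on these bonds, as stated above.

```python
import numpy as np

# Six nearest-neighbor bond angles on the triangular lattice.
theta = np.arange(6) * np.pi / 3
bonds = np.stack([np.cos(theta), np.sin(theta)], axis=1)     # lattice constant a = 1

form_factors = {
    "p_x": np.cos(theta),             "p_y": np.sin(theta),
    "d_x2-y2": np.cos(2 * theta),     "d_xy": np.sin(2 * theta),
    "f_x(x2-3y2)": np.cos(3 * theta), "f_y(3x2-y2)": np.sin(3 * theta),
}
print(np.allclose(form_factors["f_y(3x2-y2)"], 0))           # True: vanishes on n.n. bonds

def gap(k, phi):
    """Delta(k) = sum over the six bonds of phi_delta * exp(-i k . delta)."""
    return np.sum(phi * np.exp(-1j * bonds @ k))

# Chiral d+id combination: phi_delta = cos(2 theta) + i sin(2 theta) = exp(2 i theta).
phi_did = form_factors["d_x2-y2"] + 1j * form_factors["d_xy"]
k_test = np.array([0.4, 1.1])                                # arbitrary test momentum
print(abs(gap(k_test, phi_did)))                             # gap magnitude at k_test
```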
## III Self-consistency equations for order parameters
The grand potential is obtained from the grand canonical partition function \(Z={\rm Tr}\,e^{-\beta(\hat{H}_{\rm MF}-\mu\hat{N})}\) in the diagonal basis.
\[\hat{H}_{\rm MF}-\mu\hat{N}=E_{0}+\frac{1}{2}\sum_{k,\alpha}^{BZ/2}\left[\zeta_{ k,\alpha}\hat{\gamma}_{k\alpha}^{\dagger}\hat{\gamma}_{k\alpha}-\zeta_{k,\alpha} \hat{\gamma}_{k\alpha}\hat{\gamma}_{k\alpha}^{\dagger}\right], \tag{14}\]
where \(\alpha\) labels the eigenvalues and \(\zeta_{k,\alpha}\) is an eigenvalue of one sub-block of the BdG Hamiltonian (6). Due to particle-hole symmetry, the two sub-blocks of the Hamiltonian have pairs of eigenvalues of equal magnitude and opposite sign. Summing over all (single particle) eigenstates we obtain the grand potential:
\[\Omega=-\frac{1}{\beta}\ln Z=E_{0}-\frac{1}{\beta}\sum_{k,\alpha}^{\frac{1}{2}{ \rm BZ}}\ln\left[2\cosh\frac{\beta\zeta_{k,\alpha}}{2}\right], \tag{15}\]
where the temperature is set by \(k_{B}T=\beta^{-1}=0.1|t_{1}|\) and the sum above is only over positive \(\zeta_{k,\alpha}\) values. Note
that this temperature of about 61K is required for convergence at our momentum resolution. The minimum of the grand potential is the self consistent solution for the order parameters and we therefore set the grand potential derivatives with respect to the order parameters to zero. We write the derivatives as (see Appendix C):
\[\frac{\partial\Omega}{\partial m_{j}} =\frac{\partial E_{0}}{\partial m_{j}}+\frac{\partial}{\partial m _{j}}\left[\Omega-E_{0}\right]=0, \tag{16}\] \[\frac{\partial\Omega}{\partial\Delta_{\nu}^{*}} =\frac{\partial E_{0}}{\partial\Delta_{\nu}^{*}}+\frac{\partial }{\partial\Delta_{\nu}^{*}}\left[\Omega-E_{0}\right]=0, \tag{17}\]
where \(j\in\{x,y,z\}\) and \(\nu\in\{s,t,\uparrow\uparrow,\downarrow\downarrow\}\). We solve the above conditions iteratively, updating all order parameters at every step. Using the fact that the energy \(E_{0}\) contains \(\sum_{j=x,y,z}m_{j}^{2}\) and \(\Delta_{\nu}\Delta_{\nu}^{*}\), each order parameter can be updated as follows:
\[m_{j}^{\rm(new)} =m_{j}^{\rm(old)}\left[1-\eta\frac{\partial\Omega}{\partial m_{j }}\left(\frac{\partial E_{0}}{\partial m_{j}}\right)^{-1}\right]_{\rm old}, \tag{18}\] \[\Delta_{\nu}^{\rm(new)} =\Delta_{\nu}^{\rm(old)}\left[1-\eta\frac{\partial\Omega}{ \partial\Delta_{\nu}^{*}}\left(\frac{\partial E_{0}}{\partial\Delta_{\nu}^{*} }\right)^{-1}\right]_{\rm old}. \tag{19}\]
where the right hand side is computed using the current ('old') set of order parameters. \(\eta\left(\simeq 0.2\right)\) controls the rate of convergence of the iteration. When converged, \(m_{j}^{\rm(new)}=m_{j}^{\rm(old)}\) and \(\left.\partial_{m_{j}}\Omega\right|_{\rm old}=0\). The partial differentiation is numerically obtained by computing the difference of grand potentials, \(\partial_{m_{j}}\Omega\simeq\lim_{\Delta m_{j}\to 0}\Delta\Omega/\Delta m_{j}\). The mean field Hamiltonian is also a function of the filling \(\nu\), see Eq.(4). Along with the other order parameters, the filling is updated in each iteration by computing
\[\nu=-\frac{1}{N_{\rm lat}}\frac{\partial\Omega}{\partial\mu}=\langle\hat{n}_ {i\uparrow}+\hat{n}_{i\downarrow}\rangle. \tag{20}\]
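To illustrate the mechanics of this scheme, the following self-contained sketch (not the code used for the results in this paper) runs the update of Eqs. (18)-(20) for the simplest case of a single ferromagnetic order parameter \(m_{z}\) and no pairing, on a nearest-neighbor-only triangular band. The grand potential is evaluated in its pairing-free form, which differs from Eq. (15) only by a term that does not affect the \(m_{z}\) update; the filling update is damped for numerical stability, and all parameter values (\(t_{1}\), \(U_{0}\), \(\mu\), \(\beta\), \(\eta\), mesh size) are arbitrary choices.

```python
import numpy as np

t1, U0, mu, beta, eta = -1.0, 6.0, -1.0, 10.0, 0.2        # illustrative values, units of |t1|
b1 = 2 * np.pi * np.array([1.0, -1 / np.sqrt(3)])
b2 = 2 * np.pi * np.array([0.0,  2 / np.sqrt(3)])
u = np.linspace(0, 1, 60, endpoint=False)
U, V = np.meshgrid(u, u)
kx, ky = U * b1[0] + V * b2[0], U * b1[1] + V * b2[1]
eps = t1 * (2 * np.cos(kx) + 4 * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2))
Nk = eps.size                                              # one lattice site per k point

def grand_potential(m, n_avg):
    """Omega and band energies for a ferromagnet without pairing; E0 follows Eq. (7)."""
    E0 = -U0 * Nk * (0.25 * n_avg**2 - m**2) + 0.5 * U0 * Nk * n_avg
    xi = np.stack([eps + 0.5 * U0 * n_avg - mu - U0 * m,   # spin up  (Eq. (4) convention)
                   eps + 0.5 * U0 * n_avg - mu + U0 * m])  # spin down
    return E0 - np.sum(np.log1p(np.exp(-beta * xi))) / beta, xi

m, n_avg = 0.05, 1.0                                       # small seed: m = 0 is a fixed point
for it in range(300):
    _, xi = grand_potential(m, n_avg)
    dm = 1e-5                                              # finite-difference derivative of Omega
    dOm = (grand_potential(m + dm, n_avg)[0] - grand_potential(m - dm, n_avg)[0]) / (2 * dm)
    dE0 = 2 * U0 * Nk * m                                  # dE0/dm entering the update, Eq. (18)
    m_new = m * (1 - eta * dOm / dE0)
    n_new = np.sum(1.0 / (np.exp(beta * xi) + 1)) / Nk     # filling, Eq. (20)
    converged = abs(m_new - m) < 1e-9 and abs(n_new - n_avg) < 1e-7
    m, n_avg = m_new, 0.7 * n_avg + 0.3 * n_new            # damped filling mixing (practical choice)
    if converged:
        break
print("m_z, filling after iteration:", m, n_avg)
```

In the full calculation the same derivative-based update is applied simultaneously to all magnetic and pairing amplitudes.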
## IV Results: Topological Superconductivity and Magnetism
_Magnetic orders-_: The Stoner criterion gives a heuristic rule for when magnetic order might develop: if the density of states at the Fermi level exceeds a critical value set by the onsite repulsive interaction, magnetic order could develop to reduce the interaction energy. This intuition is indeed in line with our findings: Fig. 1(c) shows that the DoS is peaked at fillings near \(\nu\approx 0.77\) and \(\nu\approx 2\), and this is where we find ferromagnetism. Another important factor in determining whether or not magnetism will appear and the kind of magnetism that will develop is the shape of the Fermi surface. A magnetic order which reduces the periodicity is most effective if its ordering vector connects many states near the Fermi level, i.e., opens a gap through nesting.
Around \(\nu\sim 1\), collinear antiferromagnetism appears. This order reduces the periodicity by folding the Brillouin zone in one of three directions; one such spin configuration is shown in Fig. 2(b). The periodicity of the collinear antiferromagnet (CAF) in real space is \(\sqrt{3}a\), which doubles the size of the unit cell. Thus, the size of the nesting vector \(|\tilde{Q}_{\rm CAF}|=2\pi/\sqrt{3}a\) in momentum space is half of the shortest reciprocal lattice vector. The possible nesting Fermi surface lines are indicated in Fig. 2(c). However, since the CAF ordering appears only in one direction, the Fermi surface is not fully gapped by this order, and the system remains metallic.
Right next to the CAF order, the spiral antiferromagnet (SAF) is the most energetically favorable phase. The filling is a bit higher, and the corresponding Fermi surface is marked in green in Fig. 2(c). The size of the nesting vector is \(\frac{4\pi}{3a}\), which is larger than that of the CAF case, while the directions of the three vectors are rotated by \(\pi/2\). The spiral AF gaps the Fermi surface completely, lowering the grand potential even more than the CAF.
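The ordering-vector magnitudes quoted above follow directly from the reciprocal lattice of the triangular lattice; a short numerical check (with the lattice constant \(a\) kept explicit, an independent verification rather than part of the original analysis) is:

```python
import numpy as np

a = 1.0                                                     # lattice constant
a1, a2 = a * np.array([1.0, 0.0]), a * np.array([0.5, np.sqrt(3) / 2])

# Reciprocal vectors satisfying b_i . a_j = 2 pi delta_ij.
A = np.column_stack([a1, a2])
B = 2 * np.pi * np.linalg.inv(A).T
b1, b2 = B[:, 0], B[:, 1]

print(np.linalg.norm(b1), 4 * np.pi / (np.sqrt(3) * a))     # shortest reciprocal lattice vector
print(np.linalg.norm(b1) / 2, 2 * np.pi / (np.sqrt(3) * a)) # |Q_CAF|: half of it
K = (2 * b1 + b2) / 3                                       # Brillouin-zone corner K
print(np.linalg.norm(K), 4 * np.pi / (3 * a))               # matches the SAF value quoted above
```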
The development of the three magnetic orderings in a small doping range can be understood from the density of states (DoS) and the shape of the Fermi surface of the long-range-hopping tight-binding model on the triangular lattice for filling \(\nu\in[0.8,1.5]\). The result of the
Figure 2: (a) The phase diagram at \(U_{1}/U_{0}=-0.5\). (b) Three types of magnetic orderings. (c) The collinear AF and spiral AF appear from the Fermi surface nesting depicted. The ferromagnetism appears from the Stoner criterion.
self consistent mean field with multiple order parameters is shown in Fig. 2 (a) for \(U_{0}/|t_{1}|=6\) and \(U_{1}/|t_{1}|=-3\). The bandwidth of the hopping model in Eq. (1) (\(\sim 10|t_{1}|\)), as shown in Fig. 1, is larger than the employed interaction strengths. We note that previous authors [8] found that the CAF is preferred over the SAF at half filling \(\nu=1\) using a more sophisticated numerical method. When the nearest-neighbor attraction \(|U_{1}|\) increases further, topological superconductivity appears next to the antiferromagnetic orders.
_Superconductivity_:- The study of superconductivity in one-band triangular lattice models was ignited by the experimental observation of superconductivity in Sn/Si(111) [1; 2]. We therefore set out to find out what kind of superconductivity can emerge from our effective nearest neighbor attraction while on-site repulsion is present. This approach to interaction-based unconventional superconductivity provides an intuitive understanding of how order parameters compete to balance decreasing the interaction energy while increasing the kinetic energy through orderings. To this end, we minimize our grand potential considering both magnetism and superconductivity together. Figure 3(a) shows the phase diagram as a function of filling with \(U_{1}/U_{0}=-0.8\). The \(f\)-wave SC appears first and, as can be seen in Fig. 3(b), it has three pairs of line nodes. For filling \(\nu\in[0.2,0.7]\), the Fermi surface is located where the nodal lines are avoided and therefore the \(f\)-wave pairing fully gaps the system, reducing the interaction energy which results from \(U_{1}\).
As the filling is increased beyond \(0.7\), the Fermi surface undergoes a Lifshitz transition from six pockets to a single snow-flake-shaped surface at the center of the Brillouin zone. Once the transition has occurred, the \(f\)-wave pairing cannot fully gap the Fermi surface and the preferred pairing changes. The Fermi surface that favors the CAF order for reducing the on-site interaction energy favors the chiral \(d+id\) superconductor for reducing the \(U_{1}\) interaction energy. The competition between the two phases is studied by minimizing the grand potential with the two orders present, and this yields the result that the \(d+id\) superconductor appears before the collinear antiferromagnet.
Around \(\nu=3/4\) the spiral antiferromagnet is favored (despite the significant \(U_{1}\)) but at higher filling of \(\nu>1.5\) another topological superconductor with \(p+ip\) structure is developed. This order parameter has nodal points which are avoided by the small Fermi surface around the zone center as shown in Fig. 3(b)(bottom-left) such that the Fermi surface is fully gapped.
Lastly, when every site is nearly fully filled, \(\nu\sim 2\), the s-wave pairing potential, which has a circular line node away from \(\vec{k}=0\), is preferred. At high filling, there is a transition from the ferromagnetic ordering to the \(s\)-wave superconductivity with increasing attractive extended interaction strength \(|U_{1}|\), see Fig. 4(a).
We note that the order of superconducting phases (chiral p-wave and f-wave) we find is inconsistent with that of Wolf _et al._ in Refs. [14; 16] but it is consistent with that of Cheng _et al._ in Ref. [13]. While we rely on an effective interaction for our mean field calculation, we believe that the filling at which we find the various orders is very plausible, since it is compatible with the intuition provided by the Fermi surface shapes at the various fillings as discussed above.
_From magnetic ordering to superconductivity_:- The phase diagram is drawn in Fig. 4(a) for \(U_{1}/U_{0}\in[-1.2,-0.2]\). At small \(|U_{1}|\) only magnetic orders appear. For \(U_{1}/U_{0}<-0.4\) superconductivity begins to appear. While the singlet pairing does not coexist with the magnetic orderings, the triplet pairing may. We verify the coexistence and also plot the superconducting and the magnetic order parameters separately in Fig. 4(b) and (c), respectively. In particular, \(f\)-wave superconductivity coexists with ferromagnetism near \(\nu=0.75\), and the CAF and SAF orders give way to \(p+ip\) pairing when \(|U_{1}|\) is increased.
Figure 3: (a) The phase diagram at \(U_{1}/U_{0}=-0.8\). Chiral d-wave and p-wave superconductivity appear next to the anti-ferromagnetic orderings. At lower and higher filling, f-wave and s-wave superconductivity appear, respectively. (b) The magnitude of the pairing \(|\Delta^{(\mathrm{pairing})}_{k,-k,\sigma\sigma^{\prime}}|\) is plotted for the various orbital symmetries. The overlaid Fermi surfaces (blue lines) at fillings \(\nu=1/3,1,5/3\) show that the nodal lines of the pairing are avoided, maximizing the superconducting energy gap opening.
## V Conclusions
In this work we explored the possibility of magnetic and superconducting orders in the extended Hubbard model on a triangular lattice. Our model includes a repulsive on-site Hubbard interaction \(U_{0}\) and an effective nearest neighbor attractive interaction \(U_{1}\). We treat the model with variational, self-consistent mean field theory which considers a large set of magnetic and superconducting orders together. We map the phase diagram and find a ferromagnetic phase and two antiferromagnetic phases when the attractive interaction is weak. For higher values of the attractive interaction we find superconducting states with \(s\)-, \(d\)-, \(p\)- and \(f\)-wave symmetry. The \(p\)-wave and \(d\)-wave superconducting order parameters are found to be chiral/topological.
Near filling \(\nu=1-1.5\) the collinear antiferromagnetism and the spiral magnetism, consistent with previous studies [8], are found to coexist with \(p+ip\) triplet topological superconductivity when the attractive interaction is significant. Our finding of \(f\)-wave and \(p+ip\)-wave superconductivity at low and high filling is also consistent with a previous study [13], and we find a \(d+id\) topological superconductor to emerge when long range hopping is included in the kinetic energy.
Our study has been inspired by recent advances in surface manipulation and in particular the creation of a triangular superlattice of tin atoms on a silicon surface [9; 10; 11; 12]. However, we expect these results to hold for other similar compounds. In particular, our finding of chiral topological superconductivity could lead to the realization of Majorana zero modes at vortex cores in these compounds. The proximity of these phases to other, non-topological phases suggests that the existence of Majorana modes could be controlled through gating and external fields.
Lastly we would like to add a note about temperature. Due to our limited momentum resolution we could not perform calculations at a very low temperature and used \(k_{B}T=0.1|t_{1}|=5.28\) meV, which corresponds to \(\sim 61\) K. This meant that in order to see gaps open we needed to work with large interaction values. We therefore believe that at lower temperature one could see superconductivity with even weaker interactions.
_Acknowledgements -_ K.W.K. acknowledges that this research was supported by the Chung-Ang University Research Grants in 2021. TPB acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
## Appendix A Mean-Field Decomposition of The Hubbard Interaction
The onsite Hubbard interaction can be written in terms of the density and spin operators using \(\hat{n}_{i\uparrow}=\sum_{\alpha\beta}\frac{1}{2}(1+\sigma_{z})_{\alpha\beta} \hat{c}^{\dagger}_{i\alpha}\hat{c}_{i\beta}\). More generally, the interaction can be written in the basis of an arbitrary spin direction \(\sigma_{l}=\hat{l}\cdot\vec{\sigma}\). Dropping the site index,
\[\hat{n}_{\uparrow}\hat{n}_{\downarrow} =\sum_{\alpha\beta\gamma\delta}\frac{1}{4}(1+\sigma_{l})_{\alpha \beta}(1-\sigma_{l})_{\gamma\delta}\hat{c}^{\dagger}_{\alpha}\hat{c}_{\beta} \hat{c}^{\dagger}_{\gamma}\hat{c}_{\delta},\] \[=\sum_{\alpha\beta\gamma\delta}\frac{1}{4}\left(\delta_{\alpha \beta}\delta_{\gamma\delta}-(\sigma_{l})_{\alpha\beta}(\sigma_{l})_{\gamma \delta}\right)\hat{c}^{\dagger}_{\alpha}\hat{c}_{\beta}\hat{c}^{\dagger}_{ \gamma}\hat{c}_{\delta},\]
where from the first line to the second the following relation is used: \(\sum_{\alpha\beta\gamma\delta}(\sigma_{l})_{\gamma\delta}\delta_{\alpha\beta} \hat{c}^{\dagger}_{\alpha}\hat{c}_{\beta}\hat{c}^{\dagger}_{\gamma}\hat{c}_{ \delta}=\sum_{\alpha\beta\gamma\delta}(\sigma_{l})_{\alpha\beta}\delta_{\gamma \delta}\hat{c}^{\dagger}_{\gamma}\hat{c}_{\delta}\hat{c}^{\dagger}_{\alpha} \hat{c}_{\beta}\). As a result,
\[\hat{H}^{(\text{on})}=U_{0}\sum_{i}\left[\frac{1}{4}\hat{n}_{i} \hat{n}_{i}-\hat{S}_{il}\hat{S}_{il}\right], \tag{11}\]
where \(\hat{n}_{i\sigma}=\hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}\), \(\hat{S}_{il}=\frac{1}{2}\sum_{\alpha,\beta}\hat{c}^{\dagger}_{i\alpha}(\hat{l }\cdot\vec{\sigma})_{\alpha\beta}\hat{c}_{i\beta}\). Neglecting the fluctuating part of the interaction, the mean field approximation is
\[\hat{n}^{2}_{i} \simeq 2\hat{n}_{i}\langle\hat{n}_{i}\rangle-\langle\hat{n}_{i} \rangle^{2}, \tag{12}\] \[\hat{S}^{2}_{il} \simeq 2\hat{S}_{il}\langle\hat{S}_{il}\rangle-\langle\hat{S}_{il} \rangle^{2}. \tag{13}\]
Figure 4: (a) The phase diagram as a function of filling \(\nu\) and the attractive interaction strength \(U_{1}\) for \(U_{0}=6|t_{1}|\). (b,c) The corresponding superconducting and magnetic order parameters are plotted in the same domain. They show the regions where magnetic order parameters coexist with triplet-pairing superconductivity, p+ip and f-wave SC.
The mean field Hamiltonian (4) is obtained.
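The operator identity underlying Eq. (A1) can be verified explicitly on the four-dimensional single-site Fock space; the sketch below (an independent check, with an arbitrarily chosen quantization axis \(\hat{l}\)) builds the two fermionic modes via a Jordan-Wigner construction and confirms \(\hat{n}_{\uparrow}\hat{n}_{\downarrow}=\frac{1}{4}\hat{n}^{2}-\hat{S}_{l}^{2}\).

```python
import numpy as np

# Two fermionic modes (up, down) on one site, built with a Jordan-Wigner string.
c  = np.array([[0, 1], [0, 0]], dtype=complex)   # single-mode annihilation operator
P  = np.diag([1.0, -1.0]).astype(complex)        # fermion parity (-1)^n
I2 = np.eye(2, dtype=complex)
c_up, c_dn = np.kron(c, I2), np.kron(P, c)

n_up, n_dn = c_up.conj().T @ c_up, c_dn.conj().T @ c_dn
n = n_up + n_dn
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
cvec = [c_up, c_dn]
S = [0.5 * sum(s[a, b] * (cvec[a].conj().T @ cvec[b])
               for a in range(2) for b in range(2)) for s in pauli]

# Check n_up n_dn = n^2/4 - (l . S)^2 for an arbitrary quantization axis l.
l = np.array([0.3, -0.5, 0.8])
l = l / np.linalg.norm(l)
Sl = l[0] * S[0] + l[1] * S[1] + l[2] * S[2]
print(np.allclose(n_up @ n_dn, 0.25 * (n @ n) - Sl @ Sl))   # True
```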
## Appendix B Particle-Hole Symmetry and Grand Potential
The BdG Hamiltonian (6) can be written as the sum of two sub blocks:
\[\hat{H}-\mu\hat{N} =\frac{1}{2}\sum_{k}^{\text{BZ}/2}\begin{pmatrix}\hat{C}_{k}^{ \dagger}&\hat{C}_{-k}\end{pmatrix}\begin{pmatrix}\tilde{H}_{k,k}&\tilde{\Delta} _{k,-k}\\ -\tilde{\Delta}_{-k,k}^{*}&-\tilde{H}_{-k,-k}^{*}\end{pmatrix}\begin{pmatrix} \hat{C}_{k}\\ \hat{C}_{-k}^{\dagger}\end{pmatrix}\] \[+\frac{1}{2}\sum_{k}^{\text{BZ}/2}\begin{pmatrix}\hat{C}_{k}& \hat{C}_{-k}^{\dagger}\end{pmatrix}\begin{pmatrix}-\tilde{H}_{k,k}&-\tilde{ \Delta}_{k,-k}\\ \tilde{\Delta}_{-k,k}^{*}&\tilde{H}_{-k,-k}^{*}\end{pmatrix}^{*}\begin{pmatrix} \hat{C}_{k}\\ \hat{C}_{-k}^{\dagger}\end{pmatrix}, \tag{10}\]
where \(\hat{C}_{k}^{\dagger}=\begin{pmatrix}c_{k\uparrow}^{\dagger}&c_{k\downarrow}^{\dagger}\end{pmatrix}\) and \(\tilde{H}_{k,k}=H_{k,k}-\mu\). The hermiticity of the sub-block Hamiltonians is guaranteed because \(\Delta^{T}=-\Delta\). By diagonalization,
\[\begin{pmatrix}\tilde{H}_{k,k}&\tilde{\Delta}_{k,-k}\\ -\tilde{\Delta}_{-k,k}^{*}&-\tilde{H}_{-k,-k}^{*}\end{pmatrix} =U\begin{pmatrix}\zeta_{1}&0\\ 0&\zeta_{2}\end{pmatrix}U^{\dagger}, \tag{11}\] \[\begin{pmatrix}-\tilde{H}_{k,k}&-\tilde{\Delta}_{k,-k}\\ \tilde{\Delta}_{-k,k}^{*}&\tilde{H}_{-k,-k}^{*}\end{pmatrix}^{*} =U^{*}\begin{pmatrix}-\zeta_{1}&0\\ 0&-\zeta_{2}\end{pmatrix}U^{T}, \tag{12}\]
which verifies that the BdG Hamiltonian (6) contains pairs of eigenvalues with the same magnitude and the opposite sign. Note that in general \(\zeta_{1}\neq-\zeta_{2}\) when \(\tilde{H}_{k,k}\neq\tilde{H}_{-k,-k}\). Thus, within each sub-block Hamiltonian, there is no particle-hole symmetry, as is sometimes assumed in the literature. The full spin and particle-hole basis must be employed for a system without inversion symmetry. In the eigenstate basis, the Hamiltonian can be written as
\[\hat{H}-\mu\hat{N} =\frac{1}{2}\sum_{k}^{\text{BZ}/2}\begin{pmatrix}\hat{\gamma}_{k,1}^{\dagger}&\hat{\gamma}_{k,2}^{\dagger}\end{pmatrix}\begin{pmatrix}\zeta_ {k,1}&0\\ 0&\zeta_{k,2}\end{pmatrix}\begin{pmatrix}\hat{\gamma}_{k,1}\\ \hat{\gamma}_{k,2}\end{pmatrix}\] \[+\frac{1}{2}\sum_{k}^{\text{BZ}/2}\begin{pmatrix}\hat{\gamma}_{k,1}&\hat{\gamma}_{k,2}\end{pmatrix}\begin{pmatrix}-\zeta_{k,1}&0\\ 0&-\zeta_{k,2}\end{pmatrix}\begin{pmatrix}\hat{\gamma}_{k,1}^{\dagger}\\ \hat{\gamma}_{k,2}^{\dagger}\end{pmatrix}, \tag{13}\]
where \(\begin{pmatrix}\hat{\gamma}_{k,1}^{\dagger}&\hat{\gamma}_{k,2}^{\dagger}\end{pmatrix}= \begin{pmatrix}\hat{C}_{k}^{\dagger}&\hat{C}_{-k}\end{pmatrix}U\) and \(U^{*}=U^{\dagger T}\) are used. This is the mean field Hamiltonian Eq. (14) used for the construction of the grand canonical partition function.
Let us deduce the PHS relation that the block Hamiltonian matrix satisfies. The second term in Eq. (10) can be written as
\[\hat{H}-\mu\hat{N} =\frac{1}{2}\sum_{k}^{\text{BZ}/2}\begin{pmatrix}\hat{C}_{k}^{ \dagger}&\hat{C}_{-k}\end{pmatrix}\begin{pmatrix}\tilde{H}_{k,k}&\tilde{ \Delta}_{k,-k}\\ -\tilde{\Delta}_{-k,k}^{*}&-\tilde{H}_{-k,-k}^{*}\end{pmatrix}\begin{pmatrix} \hat{C}_{k}\\ \hat{C}_{-k}^{\dagger}\end{pmatrix}\] \[+\frac{1}{2}\sum_{k}^{\text{BZ}/2}\begin{pmatrix}\hat{C}_{-k}^{ \dagger}&\hat{C}_{k}\end{pmatrix}\tau_{x}\begin{pmatrix}-\tilde{H}_{k,k}&- \tilde{\Delta}_{k,-k}\\ \tilde{\Delta}_{-k,k}^{*}&\tilde{H}_{-k,-k}^{*}\end{pmatrix}^{*}\tau_{x} \begin{pmatrix}\hat{C}_{-k}^{\dagger}\\ \hat{C}_{k}^{\dagger}\end{pmatrix}. \tag{14}\]
The first term on the right hand side is the sum over momentum in one half of the Brillouin zone, and the second term is over the other half. The Hamiltonian in the second term is related to the first one by the following relation:
\[H_{\text{bdg}}(-k) =\tau_{x}\begin{pmatrix}-\tilde{H}_{k,k}&-\tilde{\Delta}_{k,-k}\\ \tilde{\Delta}_{-k,k}^{*}&\tilde{H}_{-k,-k}^{*}\end{pmatrix}^{*}\tau_{x}, \tag{15}\] \[=-\tau_{x}\mathcal{K}H_{\text{bdg}}(k)\tau_{x}\mathcal{K}, \tag{16}\]
where the subscript \({}_{\text{bdg}}\) indicates the Hamiltonian matrices in Eq. (14); \(H_{\text{bdg}}(k)\) is the Hamiltonian matrix in the first term on the right hand side.
Next, let us deduce the PHS relation when the Hamiltonian matrix is constructed in the extended basis (also see the pedagogical note in Ref. [19]) as in Eq. (6). There is a unitary transformation between creation operators in real and momentum space:
\[\Psi_{k}=\begin{pmatrix}\vec{C}_{k}\\ \vec{C}_{k}^{\dagger}\end{pmatrix}=\begin{pmatrix}V&\\ &V^{*}\end{pmatrix}\begin{pmatrix}\vec{C}_{r}\\ \vec{C}_{r}^{\dagger}\end{pmatrix}. \tag{17}\]
where the Fourier transformation \((V)_{ij}=\frac{1}{\sqrt{N_{\text{lat}}}}e^{-i\vec{k}_{i}\cdot\vec{r}_{j}}\) and
\[\vec{C}_{k} =(c_{k_{1}},\cdots,c_{k_{N}})^{T}, \tag{18}\] \[\vec{C}_{k}^{\dagger} =(c_{k_{1}}^{\dagger},\cdots,c_{k_{N}}^{\dagger})^{T},\] (19) \[\vec{C}_{r} =(c_{r_{1}},\cdots,c_{r_{N}})^{T},\] (20) \[\vec{C}_{r}^{\dagger} =(c_{r_{1}}^{\dagger},\cdots,c_{r_{N}}^{\dagger})^{T}, \tag{21}\]
where spin and sublattice degrees of freedom can be added to the operator vectors. Let \(\Psi_{r}=\begin{pmatrix}\vec{C}_{r}\\ \vec{C}_{r}^{\dagger}\end{pmatrix}\). Likewise,
\[\Psi_{k}^{\dagger}=\Psi_{r}^{\dagger}\begin{pmatrix}V^{\dagger}\\ &V^{*\dagger}\end{pmatrix}. \tag{22}\]
Note that \(\Psi_{k}^{\dagger}\) and \(\Psi_{k}\) take the same form as the ones used in the mean field Hamiltonian, Eq. (6). The Hamiltonian operator is
\[\hat{H} =\Psi_{k}^{\dagger}H_{k}\Psi_{k}=\Psi_{r}^{\dagger}\begin{pmatrix}V ^{\dagger}&\\ &V^{*\dagger}\end{pmatrix}H_{k}\begin{pmatrix}V&\\ V^{*}\end{pmatrix}\Psi_{r}, \tag{23}\] \[=-\Psi_{r}^{\dagger}\tau_{x}\mathcal{K}\begin{pmatrix}V^{\dagger} \\ &V^{*\dagger}\end{pmatrix}H_{k}\begin{pmatrix}V&\\ V^{*}\end{pmatrix}\tau_{x}\mathcal{K}\Psi_{r},\] (24) \[=-\Psi_{r}^{\dagger}\begin{pmatrix}V^{\dagger}&\\ &V^{*\dagger}\end{pmatrix}\tau_{x}\mathcal{K}H_{k}\tau_{x}\mathcal{K}\begin{pmatrix}V &\\ &V^{*}\end{pmatrix}\Psi_{r},\] (25) \[=\Psi_{k}^{\dagger}\tau_{x}(-H_{k}^{*})\tau_{x}\Psi_{k}, \tag{26}\]
where in the second line we used \(H_{r}=-\tau_{x}\mathcal{K}H_{r}\tau_{x}\mathcal{K}\). In the third line
\[\tau_{x}\mathcal{K}\begin{pmatrix}V&\\ &V^{*}\end{pmatrix}\tau_{x}\mathcal{K}=\begin{pmatrix}V&\\ &V^{*}\end{pmatrix} \tag{27}\]
is used. As shown in the last line, the PHS relation in the momentum space is \(H_{k}=-\tau_{x}\mathcal{K}H_{k}\mathcal{K}\tau_{x}\). This verifies that when Hamiltonian is written in the basis containing the creation and annihilation operators with the same set of indices as in \(\Psi_{r}\) and \(\Psi_{k}\), the particle-hole symmetry relation is equally applied in the form of \(\mathcal{P}H_{r}\mathcal{P}=-H_{r}\) and \(\mathcal{P}H_{k}\mathcal{P}=-H_{k}\). Note the sign difference in momentum compared to Eq. (21), which applies to block diagonal Hamiltonian matrices in \(H_{k}\).
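The relation \(\mathcal{P}H\mathcal{P}^{-1}=-H\) in the extended basis can be checked numerically for any BdG matrix of the form \(\begin{pmatrix}h&\Delta\\ -\Delta^{*}&-h^{*}\end{pmatrix}\) with \(h\) Hermitian and \(\Delta^{T}=-\Delta\); the following sketch (random matrices, purely illustrative) does so and also confirms the resulting \(\pm\) symmetry of the spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                             # number of single-particle states (arbitrary)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = A + A.conj().T                                # Hermitian normal part
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Delta = B - B.T                                   # antisymmetric pairing, Delta^T = -Delta

# BdG matrix in the extended basis (C, C^dagger).
H = np.block([[h, Delta], [-Delta.conj(), -h.conj()]])
tau_x = np.kron(np.array([[0, 1], [1, 0]]), np.eye(n))

print(np.allclose(tau_x @ H.conj() @ tau_x, -H))  # P H P^{-1} = tau_x K H K tau_x = -H
ev = np.linalg.eigvalsh(H)
print(np.allclose(np.sort(ev), np.sort(-ev)))     # spectrum symmetric about zero
```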
## Appendix C Self-Consistency Relations
The mean field Hamiltonian includes order parameters whose values are determined self consistently. We start from some initial value for each order parameter and update the values at every iteration step in the following way. With the 'old' set of order parameter values we calculate the mean field energies, which are then used to calculate the grand potential. The 'new' set of order parameter values is then calculated from the grand potential, and the process is continued until convergence, when the values no longer change between iterations.
Calculating the order parameter from the grand potential can be demonstrated for ferromagnetic order in the \(z\) direction, \(m_{z}\). Differentiating the grand potential by \(m_{z}\) gives,
\[\frac{d\Omega}{dm_{z}}=\frac{\partial(\Omega-E_{0})}{\partial m_{z}}+\frac{dE_ {0}}{dm_{z}}, \tag{22}\]
where \(\partial_{m_{z}}E_{0}=U_{0}N_{\text{lat}}\left(2m_{z}\right)\), and
\[\frac{\partial(\Omega-E_{0})}{\partial m_{z}} =-\beta^{-1}\frac{1}{Z}\frac{\partial Z}{\partial m_{z}}\] \[=-\frac{U_{0}}{Z}\text{Tr}\left[\left(\sum_{i}\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\uparrow}-\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{i\downarrow}\right)e^{-\beta(\hat{H}-\mu\hat{N})}\right]\] \[\equiv-U_{0}\langle\sum_{i}^{N_{\text{lat}}}\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\uparrow}-\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{i\downarrow}\rangle, \tag{23}\]
Therefore, when the minimum of the grand potential is found, \(\partial_{m_{z}}\Omega=0\),
\[m_{z}=\frac{1}{2}\langle\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\uparrow}-\hat {c}_{i\downarrow}^{\dagger}\hat{c}_{i\downarrow}\rangle, \tag{24}\]
which is consistent with the way the \(\hat{S}_{iz}\) operator is defined in Eq. (3). For the CAF and the SAF, we prepare the order parameter in an enlarged unit cell. The relative spin angle within the unit cell is arranged in such a way that it realizes the desired spin ordering, and the strength of the magnetism is the order parameter which is found self-consistently.
The self-consistent relation for the superconducting order parameter similarly follows. Let us begin from the real space expression Eq.(5).
\[\frac{\partial(\Omega-E_{0})}{\partial\Delta_{ji,\sigma^{\prime} \sigma}} =\frac{U_{1}}{Z}\text{Tr}\left[\left(\hat{c}_{i\sigma}^{\dagger} \hat{c}_{j\sigma^{\prime}}^{\dagger}\right)e^{-\beta(\hat{H}-\mu\hat{N})} \right], \tag{25}\] \[=U_{1}\langle\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma^{\prime} }^{\dagger}\rangle, \tag{26}\]
where we pick a specific site index \(i,j\) for the pairing order parameter, hence there is no summation. Since \(\partial_{\Delta_{ji,\sigma^{\prime}\sigma}}E_{0}=-U_{1}\Delta_{ji,\sigma^{ \prime}\sigma}^{*}\) we arrive at,
\[\Delta_{ji,\sigma^{\prime}\sigma}^{*}=\langle\hat{c}_{i\sigma}^{\dagger}\hat{ c}_{j\sigma^{\prime}}^{\dagger}\rangle. \tag{27}\]
Note the positions of indices, \((\Delta^{\dagger})_{ij,\sigma\sigma^{\prime}}=\Delta_{ji,\sigma^{\prime}\sigma}^ {*}\). For a general search of superconductivity with a certain pairing symmetry, we transform the Hamiltonian to momentum space using \(\hat{c}_{i\sigma}^{\dagger}=\frac{1}{\sqrt{N_{\text{lat}}}}\sum_{k}e^{ikr_{i }}\hat{c}_{k\sigma}^{\dagger}\):
\[H_{\text{sc}}=U_{1}\sum_{k,\sigma\sigma^{\prime}\delta}^{\text{BZ}}\Delta_{ \delta,\sigma^{\prime}\sigma}\hat{c}_{k\sigma}^{\dagger}c_{-k\sigma^{\prime}} ^{\dagger}e^{-ik\delta}+h.c. \tag{28}\]
where \(\delta\) runs over the six nearest neighbor vectors. The summation over momentum is then split into two halves, in preparation for the BdG Hamiltonian, whose basis \(\Psi^{(\dagger)}\) includes both \(k\) and \(-k\).
\[H_{\text{sc}} =U_{1}\sum_{k,\sigma\sigma^{\prime}}^{\text{BZ}/2}\left(\sum_{ \delta}\Delta_{\delta,\sigma^{\prime}\sigma}e^{-ik\delta}\right)\hat{c}_{k \sigma}^{\dagger}c_{-k\sigma^{\prime}}^{\dagger}\] \[+U_{1}\sum_{k,\sigma\sigma^{\prime}}^{\text{BZ}/2}\left(\sum_{ \delta}\Delta_{\delta,\sigma^{\prime}\sigma}e^{ik\delta}\right)\hat{c}_{-k \sigma}^{\dagger}c_{k\sigma^{\prime}}^{\dagger}+h.c. \tag{29}\]
where the brackets in the first and the second term are \(\Delta_{k,-k,\sigma\sigma^{\prime}}\) and \(\Delta_{-k,k,\sigma\sigma^{\prime}}\) in the BdG Hamiltonian Eq. (6). The order parameter is \(\Delta_{\delta,\sigma^{\prime}\sigma}=\chi_{\sigma^{\prime}\sigma}\phi_{\delta}\), where the spin configuration is contained in \(\chi_{\sigma\sigma^{\prime}}\) and the desired spatial structure in \(\phi_{\delta}\), which encodes a uniform order parameter whose magnitude and phase may depend on the bond direction. The magnitude of the order parameter is included in \(\chi_{\sigma^{\prime}\sigma}\), and we therefore find it iteratively by differentiating the grand potential,
\[\frac{\partial(\Omega-E_{0})}{\partial\chi_{\sigma^{\prime}\sigma}}\] \[=\frac{U_{1}}{Z}\text{Tr}\left[\left(\sum_{k\delta}^{\text{BZ}} \phi_{\delta}e^{-ik\delta}\hat{c}_{k\sigma}^{\dagger}\hat{c}_{-k\sigma^{ \prime}}^{\dagger}\right)e^{-\beta(\hat{H}-\mu\hat{N})}\right],\] \[=U_{1}\sum_{k\delta}^{\text{BZ}}\phi_{\delta}e^{-ik\delta} \langle\hat{c}_{k\sigma}^{\dagger}\hat{c}_{-k\sigma^{\prime}}^{\dagger}\rangle, \tag{30}\] \[=U_{1}\sum_{k\delta}^{\text{BZ}}\phi_{\delta}e^{-ik\delta}\frac{1} {N_{\text{lat}}}\langle\sum_{i}e^{-ikr_{i}}\hat{c}_{i\sigma}^{\dagger}\sum_{j}e^{ ikr_{j}}\hat{c}_{j\sigma^{\prime}}^{\dagger}\rangle, \tag{31}\]
where in the last line we transform back to the real space expression using \(\hat{c}_{k\sigma}^{\dagger}=\frac{1}{\sqrt{N_{\text{lat}}}}\sum_{i}e^{-ikr_{i}}\hat{c}_{i\sigma}^{\dagger}\). The summation
over momentum yields the delta function \(\delta(r_{j}-r_{i}-\delta)\). As a result,
\[\frac{\partial(\Omega-E_{0})}{\partial\chi_{\sigma^{\prime}\sigma}} =U_{1}\sum_{r_{i}\delta}\phi_{\delta}\langle\hat{c}^{\dagger}_{i\sigma}\hat{c}^{\dagger}_{i+\delta,\sigma^{\prime}}\rangle,\] \[=U_{1}N_{\text{lat}}\sum_{\delta}\phi_{\delta}\langle\hat{\Delta}^{\dagger}_{\delta,\sigma\sigma^{\prime}}\rangle,\] \[=U_{1}N_{\text{lat}}\langle\hat{\chi}^{\dagger}_{\sigma\sigma^{\prime}}\rangle\sum_{\delta}|\phi_{\delta}|^{2}, \tag{116}\]
where the summation over momentum is restored to the whole BZ by combining \(k\) and \(-k\) terms. Because the constant energy term provides
\[\partial_{\chi_{\sigma^{\prime}\sigma}}E_{0} =-U_{1}\sum_{ij}\partial_{\chi_{\sigma^{\prime}\sigma}}|\Delta_{ \delta,\sigma^{\prime}\sigma}|^{2},\] \[=-U_{1}\chi^{*}_{\sigma^{\prime}\sigma}N_{\text{lat}}\sum_{ \delta}|\phi_{\delta}|^{2}. \tag{117}\]
where \(\Delta_{\delta,\sigma^{\prime}\sigma}=\chi_{\sigma^{\prime}\sigma}\phi_{\delta}\) is used in the first line. Therefore, at the minimum of the grand potential, when \(\partial_{\chi_{\sigma^{\prime}\sigma}}\Omega=0\) we have,
\[\chi^{*}_{\sigma^{\prime}\sigma}=\langle\hat{\chi}^{\dagger}_{\sigma\sigma^{ \prime}}\rangle, \tag{118}\]
where \(\langle\hat{\chi}^{\dagger}_{\sigma\sigma^{\prime}}\rangle=\langle\hat{\chi}^ {*}_{\sigma^{\prime}\sigma}\rangle\). Therefore,
\[\Delta^{\dagger}_{\delta,\sigma\sigma^{\prime}}=\langle\hat{\Delta}^{\dagger }_{\delta,\sigma\sigma^{\prime}}\rangle, \tag{119}\]
This verifies the superconductivity self consistency relation.
|
2301.04744 | An Oxygen Target for (Anti)neutrinos | We discuss a method to obtain an effective oxygen target within a low-density
detector allowing an accurate characterization of the various event topologies
in $\nu (\bar \nu)$-oxygen interactions. Results can be of interest for
long-baseline neutrino oscillation experiments utilizing water targets. In
particular, the combination of both oxygen and hydrogen targets within the same
detector can provide in-situ measurements of nuclear effects and of the
(anti)neutrino flux, which are the leading sources of systematic uncertainties
in long-baseline oscillation analyses. These measurements can also provide
useful information about the nuclear modifications of bound nucleons, as well
as about the isospin symmetry in nucleons and nuclei. | R. Petti | 2023-01-11T22:37:35Z | http://arxiv.org/abs/2301.04744v1 | # An Oxygen Target for (Anti)neutrinos
###### Abstract
We discuss a method to obtain an effective oxygen target within a low-density detector allowing an accurate characterization of the various event topologies in \(\nu(\bar{\nu})\)-oxygen interactions. Results can be of interest for long-baseline neutrino oscillation experiments utilizing water targets. In particular, the combination of both oxygen and hydrogen targets within the same detector can provide in-situ measurements of nuclear effects and of the (anti)neutrino flux, which are the leading sources of systematic uncertainties in long-baseline oscillation analyses. These measurements can also provide useful information about the nuclear modifications of bound nucleons, as well as about the isospin symmetry in nucleons and nuclei.
Introduction
Measurements of high-energy neutrino interactions are challenging both at the source and at the detector sides. The high intensity of modern (anti)neutrino beams obviates the endemic lack of statistics of older neutrino experiments. However, the fact that the energy of the projectile (anti)neutrino is unknown on an event-by-event basis still represents an intrinsic limitation - even when its overall energy spectrum is known with high precision - making the detector itself the critical element in most cases. Besides factors like detector resolutions and energy scale uncertainties, the use of nuclei as (anti)neutrino targets appears ineluctably problematic. The initial momentum of the target nucleon within the nucleus is unknown and hadrons produced in the primary interaction can undergo an additional unknown modification as they can be absorbed or re-interact within the nucleus. Neutrino detectors have to infer the (anti)neutrino energy from the reconstructed final state particles emerging from the nucleus, which are affected by a substantial nuclear smearing and related systematic uncertainties.
The issues above are exacerbated in long-baseline (LBL) neutrino oscillation experiments, in which the need for a multi-kton mass imposes heavy nuclear targets combined with relatively coarse detector resolutions. Their observation of CP violation in the leptonic sector relies on the detection of tiny differences between neutrino and antineutrino Charged Current (CC) interactions. Nuclear effects can introduce asymmetries between neutrinos and antineutrinos potentially mimicking the effect of CP violation, since they are in general isospin and flavor dependent. The physics sensitivity achievable by modern LBL oscillation experiments is thus largely determined by their control of the various systematic uncertainties. In particular, the physics promise of next-generation projects like DUNE [1] and Hyper-Kamiokande [2] is accompanied by an impressive percent-level precision required in systematics.
Near detectors are the critical elements taking on the challenge of controlling systematic uncertainties in LBL experiments. To this end, they must fulfill two separate tasks characterized by conflicting requirements. On one side, they need the highest possible resolution - together with a precise calibration of the energy scales - in order to characterize in great details the various event topologies in \(\nu(\bar{\nu})\) interactions on the same nuclear target used in the far detectors. The main goal is to provide in-situ measurements of nuclear effects and of the (anti)neutrino flux, which are typically the leading sources of systematic uncertainties in the LBL oscillation analyses. On the other side, they must provide a calibration of the event reconstruction in the far detectors, requiring an identical detector technology and necessarily a coarser resolution. This second task is complicated by the impossibility of having identical detectors at the near and far sites, due to differences in rates, event containment, (anti)neutrino energy spectra, etc. In practice the two tasks can be factorized using two separate detector technologies. Reconstruction effects can also be controlled with dedicated test-beam exposures of the key detector elements, supplemented by appropriate calibration samples in the far detectors. In the following we will focus on the first task.
Different nuclear targets have been used by LBL experiments, including lead (A=207) in OPERA 1[3], iron (A=56) in MINOS [4], carbon-based liquid scintillator (\(<\)A\(>\)=15.9) in NOvA [5], and water (\(<\)A\(>\)=14.3) in T2K [6]. Future projects will be based on an argon target (A=40) in DUNE and a water target in Hyper-Kamiokande, as well as in the THEIA proposal [7]. In principle, the use of light isoscalar nuclei like carbon or oxygen can
benefit LBL measurements, although the corresponding detector technologies are usually characterized by somewhat coarser resolutions. In all cases the near detector measurements are the key factor in determining the ultimate physics sensitivity. In this paper we discuss a method to obtain an effective oxygen target based on a low-density detector allowing a precise characterization of nuclear effects and of the (anti)neutrino flux at the near detector sites [8; 9].
The paper is organized as follows. Sec. II briefly summarizes the detector technology designed to offer an accurate control of the neutrino targets. In Sec. III we discuss the "solid" oxygen concept, while in Sec IV we describe different ways to obtain a corresponding water target. Section V outlines the main features of those targets together with some of the physics measurements that they can enable.
## II Control of Targets
A detector technology designed to offer a control of the configuration, chemical composition, and mass of the neutrino targets similar to electron scattering experiments is a Straw Tube Tracker (STT), in which the targets are physically separated from the actual tracking system. A large number of thin planes - each typically 1-2% of radiation length \(X_{0}\) - of various passive materials with comparable thickness are alternated and dispersed throughout active layers - made of four straw planes - of negligible mass in order to guarantee the same acceptance to final state particles produced in (anti)neutrino interactions. The STT allows to minimize the thickness of individual active layers and to approximate the ideal case of a pure target detector - the targets constitute about 97% of the mass - while keeping the total thickness of the stack comparable to one radiation length. Each target plane can be removed or replaced with different materials during data taking, providing a flexible target configuration.
The low average density \(\rho\leq 0.17\) g/cm\({}^{3}\) and the overall dimensions comparable to one \(X_{0}\) allow an accurate reconstruction of the four-momenta of the visible final state particles, as well as of the event kinematics in a plane transverse to the beam direction. The lightness of the tracking straws and the chemical purity of the targets, together with the physical spacing among the individual target planes, make the vertex resolution less critical in associating the interactions to the correct target material. For events with a single reconstructed charged track the corresponding uncertainty is given by the ratio between the thickness of the straw walls (\(<20\mu m\)) and the one of a single target layer, typically below 0.5%. For events with at least two reconstructed charged tracks this uncertainty is reduced to less than 0.1%, thanks to a vertex resolution (\(\ll 1\) mm [10]) much smaller than the target thickness.
The detector must be placed inside a magnetic field for the momentum measurement and surrounded by an electromagnetic calorimeter for the detection of neutral particles. The use of a distributed target mass within a relatively large volume (\(\sim 40\) m\({}^{3}\)) and a high track sampling of 0.15-0.30% \(X_{0}\) reduce the impact of multiple scattering on the measurements. The detector is optimized for the "solid" hydrogen technique, in which \(\nu(\bar{\nu})\) interactions on free protons are obtained by subtracting measurements on dedicated graphite (C) targets from those on polypropylene (CH\({}_{2}\)) targets [8; 9]. This technique is conceived to be model-independent, as the data from the graphite targets automatically include all types of processes, as well as detector effects, relevant for the selection of interactions on H. For CC interactions the dilution factor with respect to a pure H\({}_{2}\) target can be reduced by a factor 5-7 with a kinematic analysis based on energy-momentum conservation [11]. The thickness
of the two default target materials, as well as the average density of the detector, depend on the value of the magnetic field available, in order to limit the multiple scattering contribution to the momentum and angular resolutions. For B=0.6 T we can use a thickness up to about 7 mm for the CH\({}_{2}\) targets and 4 mm for the C targets 2. Detector simulations with GEANT4 [12] indicate that a single hit resolution of 200 \(\mu m\) is sufficient for the various physics measurements. The average momentum resolution expected for muons is \(\delta p/p\sim 3.5\%\) and the average angular resolution better than 2 mrad with the default CH\({}_{2}\) and C targets. The momentum scale can be calibrated to about 0.2% using reconstructed \(K_{0}\rightarrow\pi^{+}\pi^{-}\) decays [13; 14].
Footnote 2: The C targets can be built from isotropic graphite, which is characterized by good mechanical properties, a density of about 1.8 g/cm\({}^{3}\), and a high purity.
## III Oxygen Target
Since a pure oxygen target in liquid or gaseous form is not feasible due to safety and practical considerations, we are restricted to the oxygen available within chemical compounds. The precise control of the targets offered by the STT (Sec. II) allows the implementation of a "solid" oxygen target from a subtraction between thin polyoxymethylene (CH\({}_{2}\)O) and polypropylene (CH\({}_{2}\)) targets. The former is an engineering thermoplastic (acetal, delrin) used for precision parts and characterized by high strength, hardness and rigidity, with \(X_{0}=27.28\) cm and \(\rho=1.41\) g/cm\({}^{3}\). Several CH\({}_{2}\)O planes can be easily integrated into the detector by replacing some of the default CH\({}_{2}\) targets. The distribution of the generic kinematic variables \(\vec{x}\) in \(\nu(\bar{\nu})\)-oxygen interactions can then be obtained as:
\[N_{\rm O}(\vec{x})\equiv N_{\rm CH_{2}O}(\vec{x})-\frac{M_{\rm CH_{2}/CH_{2}O }}{M_{\rm CH_{2}}}N_{\rm CH_{2}}(\vec{x}) \tag{1}\]
where \(N_{\rm CH_{2}O}\) and \(N_{\rm CH_{2}}\) are the numbers of events selected from the polyoxymethylene and polypropylene targets, respectively. The interactions from this latter are normalized by the ratio between the total fiducial masses of CH\({}_{2}\) within the polypropylene and the acetal targets, \(M_{\rm CH_{2}/CH_{2}O}/M_{\rm CH_{2}}\). Both targets must have comparable thickness in terms of radiation and nuclear interaction lengths and must be alternated throughout the detector volume to guarantee the same acceptance for final state particles. To this end, a solid acetal slab 4.5 mm thick can be used, corresponding to about 0.016 \(X_{0}\). The oxygen content by mass within acetal is dominant at 53.3%. We note that polypropylene is the main target material required for the "solid" hydrogen concept in STT. We therefore expect the statistical uncertainty on the measured CH\({}_{2}\) background to be much smaller compared to the one of the acetal target.
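In practice the subtraction of Eq. (1) is performed bin-by-bin in the reconstructed kinematic variables. The short sketch below (Python; all event counts and the mass ratio are hypothetical placeholders) illustrates the procedure together with the corresponding Poisson error propagation; the same function applies to the water subtraction of Eq. (2) with the graphite sample in place of the polypropylene one.

```python
import numpy as np

def subtract(N_compound, N_background, mass_ratio):
    """Bin-by-bin subtraction as in Eq. (1)/(2): N_target = N_compound - r * N_background,
    with Poisson uncertainties propagated in quadrature; r is the fiducial mass ratio."""
    N_target = N_compound - mass_ratio * N_background
    err = np.sqrt(N_compound + mass_ratio**2 * N_background)
    return N_target, err

# Hypothetical event counts in bins of some kinematic variable (illustration only).
N_CH2O = np.array([5200.0, 8100.0, 6400.0, 2900.0])   # selected events on the acetal targets
N_CH2  = np.array([9800.0, 15500.0, 12300.0, 5600.0]) # selected events on polypropylene
r = 0.30                                              # example value of M_CH2/CH2O / M_CH2

N_O, dN_O = subtract(N_CH2O, N_CH2, r)
for i, (n_evt, dn) in enumerate(zip(N_O, dN_O)):
    print(f"bin {i}: N_O = {n_evt:8.1f} +- {dn:5.1f}")
```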
## IV Water Targets
In addition to direct measurements on an oxygen target, it can be useful to have a complementary water target within the same detector. To this end, we can exploit the simultaneous presence of polyoxymethylene, polypropylene, and graphite targets in STT. The distribution of the generic kinematic variables \(\vec{x}\) in \(\nu(\bar{\nu})\)-water interactions can then be
simply obtained from a subtraction between CH\({}_{2}\)O and C targets:
\[N_{\rm H_{2}O}(\vec{x})\equiv N_{\rm CH_{2}O}(\vec{x})-\frac{M_{\rm C/CH_{2}O}}{M _{\rm C}}N_{\rm C}(\vec{x}) \tag{2}\]
where \(N_{\rm CH_{2}O}\) and \(N_{\rm C}\) are the numbers of events selected from the polyoxymethylene and graphite targets, respectively. The interactions from this latter are normalized by the ratio between the total fiducial masses of C within the graphite and CH\({}_{2}\)O targets, \(M_{\rm C/CH_{2}O}/M_{\rm C}\). The advantages of this minimal approach are that we do not need to introduce additional targets, we can design all targets to have the same acceptance, and we avoid extraneous materials achieving a high chemical purity. The water content by mass within acetal is 60%. Similarly to the case of the oxygen target discussed above, the available mass of the graphite target is expected to be significantly larger than the C content within acetal, as it is an essential component of the "solid" hydrogen technique. We note that the simultaneous presence of the three materials within STT would allow a complete characterization of the water target together with its separate constituent elements, O and H.
We can also explicitly integrate thin water targets within STT, replacing some of the main polypropylene ones. Such passive water targets must be contained within sealed plastic shells. In order to minimize the total thickness of individual targets in terms of radiation length, as well as the amount of spurious materials to be subtracted from the shell, we can use 12 mm water layers encapsulated inside acetal shells 1.5 mm thick. The total effective thickness of such targets would be equivalent to about 0.044 \(X_{0}\). The corresponding C content to be subtracted following Eq.(2) to obtain a pure water target is only about 10.4%. An interesting application of such water targets in STT is the measurement of \(\nu\) and \(\bar{\nu}\) interactions off the bound neutron in the deuteron (D), which can be obtained from a subtraction between heavy water (D\({}_{2}\)O) and ordinary water (H\({}_{2}\)O) targets [8]. To this end, both targets must be enclosed into identical acetal shells, which must be filled in such a way as to contain the same total mass of oxygen.
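The composition figures quoted in this and the previous section follow from standard atomic masses and the target geometry; a short numerical check (assuming the 12 mm water layer and the two 1.5 mm acetal walls described above, with densities of 1.00 and 1.41 g/cm\({}^{3}\)) is:

```python
# Standard atomic masses in g/mol.
mH, mC, mO = 1.008, 12.011, 15.999
m_CH2O = mC + 2 * mH + mO                   # polyoxymethylene (acetal) monomer

print("O fraction in CH2O      :", mO / m_CH2O)             # ~0.533
print("H2O-equivalent fraction :", (mO + 2 * mH) / m_CH2O)   # ~0.60

# Water target: 12 mm of water enclosed by two 1.5 mm acetal walls (areal densities, g/cm^2).
rho_water, rho_acetal = 1.00, 1.41
areal_water  = 1.2 * rho_water
areal_acetal = 2 * 0.15 * rho_acetal
areal_C = areal_acetal * mC / m_CH2O        # carbon content of the shell
print("C fraction to subtract  :", areal_C / (areal_water + areal_acetal))   # ~0.104
```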
## V Measuring Nuclear Effects
Nuclear effects and the (anti)neutrino flux are the leading sources of systematic uncertainties in high-energy neutrino scattering measurements [9; 15], as well as in modern long-baseline oscillation experiments [16; 17]. Both issues arise because in conventional (anti)neutrino beams the energy of the incoming neutrino is unknown on an event-by-event basis. The need to infer the neutrino energy from the detected final state particles constitutes an intrinsic limitation of high-energy neutrino experiments using nuclear targets, as
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Target material & Composition & Density & Thickness & Rad. length & Nucl. int. length \\ \hline \hline Polypropylene & CH\({}_{2}\) & 0.91 g/cm\({}^{3}\) & 7.0 mm & 0.015 \(X_{0}\) & 0.008 \(\lambda_{I}\) \\ Graphite & C & 1.80 g/cm\({}^{3}\) & 4.0 mm & 0.016 \(X_{0}\) & 0.008 \(\lambda_{I}\) \\ Polyoxymethylene & CH\({}_{2}\)O & 1.41 g/cm\({}^{3}\) & 4.5 mm & 0.016 \(X_{0}\) & 0.008 \(\lambda_{I}\) \\ \hline \end{tabular}
\end{table}
Table 1: Possible parameters of the individual targets to be alternated within STT (for B=0.6 T) in the “solid” oxygen and hydrogen techniques. The thickness can be fine-tuned depending on the specific detector configuration and application. See text for details.
the nuclear smearing introduces substantial systematic uncertainties in the process (Sec. I). The availability of both H and nuclear targets within the same detector can help to mitigate such problems in STT [8; 9]. The relative \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) fluxes as a function of energy can be determined in-situ with an accuracy around 1% using exclusive \(\nu_{\mu}p\to\mu^{-}p\pi^{+}\) and \(\bar{\nu}_{\mu}p\to\mu^{-}n\) processes on H at small energy transfer [14]. The combined use of \(\nu\)-H and \(\bar{\nu}\)-H CC interactions can provide a control sample free from nuclear effects to calibrate the neutrino energy scale in CC interactions from the nuclear targets [8].
The STT offers a tool to measure nuclear modifications of cross-sections and to constrain the systematic uncertainties associated to the nuclear smearing for the various integrated nuclear targets. Each individual target is designed to be transparent to final state particles (Tab. 1) allowing, together with the low average density of the detector, an accurate reconstruction and characterization of the various event topologies in \(\nu(\bar{\nu})\) interactions. Simulations of the detector response with GEANT4 [12] result in a rather uniform acceptance over the full \(4\pi\) angle, with values of 95-99% for \(\mu^{\pm},\pi^{\pm},K^{\pm},e^{\pm}\). A key requirement is to guarantee the same acceptance across all nuclear targets, which is achieved by the combined effect of their thinness (Tab. 1) and of their alternation throughout the detector volume. Detailed detector simulations indicate that in this way the acceptance difference between targets can be kept within \(10^{-3}\) for all particles. The subtraction procedure required to obtain interactions on H, O, and H\({}_{2}\)O can then be considered model-independent. Furthermore, the detector acceptance effectively cancels out in comparisons among the selected interactions on the H, C, and O targets.
The high intensity of modern (anti)neutrino beams complements well the relatively small mass of the various targets in STT. For illustration, a fiducial mass of one tonne of water at the future Long-Baseline Neutrino Facility (LBNF) [1; 18] will collect about \(1.4\times 10^{6}\)\(\nu_{\mu}\) CC events/year with the default low-energy spectrum (a factor of two higher with the planned PIP-II upgrade) and about \(6.6\times 10^{6}\)\(\nu_{\mu}\) CC events/year with the high-energy beam spectrum and the upgraded beam 3. With such high event rates a limited number of acetal and/or water targets in STT would suffice to obtain sensible physics measurements. Assuming as a reference a STT configuration with a "solid" hydrogen mass equivalent to about 10 m\({}^{3}\) of liquid H\({}_{2}\)4, about 20 modules equipped with the acetal targets described above would provide an O target mass similar to the graphite one. An overall water target mass close to one tonne is therefore relatively easy to achieve. We note that the statistical uncertainties expected from such a water target at LBNF would be roughly comparable with the systematics from the 0.2% energy scale uncertainty in STT, and smaller than the ones from the in-situ determination of the flux using exclusive processes on H [14].
Footnote 3: On-axis rates expected at the near detector site.
Footnote 4: A fiducial mass of “solid” hydrogen around 700 kg can be obtained from the combination of about 5 tons of polypropylene and about 600 kg of graphite.
Comparing measurements of the bound nucleon structure functions \(F_{2,3}^{O}\) from the "solid" oxygen with the ones of the free nucleons in H with similar acceptance can provide insights on the nuclear modifications of the nucleon properties [8; 19; 20; 21]. The oxygen target can also provide complementary measurements with respect to the C and Ca targets to test the isospin (charge) symmetry [8]. The isotopic content expected for a standard O target is 99.76% of \({}^{16}\)O, 0.2% of \({}^{18}\)O, and 0.04% of \({}^{17}\)O, resulting on average in the smallest isovector component among stable elements \(\beta=(2Z-A)/A=6\times 10^{-5}\). A comparison between \(\nu\) and \(\bar{\nu}\) interactions on oxygen through the ratios \({\cal R}_{2}^{\rm O}=F_{2}^{\bar{\nu}}/F_{2}^{\nu}-1\) and \({\cal R}_{3}^{\rm O}=xF_{3}^{\bar{\nu}}/xF_{3}^{\nu}-1\) for the structure functions \(F_{2}\) and \(xF_{3}\) can provide useful information about the isospin symmetry in nucleons and nuclei. |
2302.12476 | Asymptotic behaviour of the semidiscrete FE approximations to weakly
damped wave equations with minimal smoothness on initial data | Exponential decay estimates of a general linear weakly damped wave equation
are studied with decay rate lying in a range. Based on the $C^0$-conforming
finite element method to discretize spatial variables keeping temporal variable
continuous, a semidiscrete system is analysed, and uniform decay estimates are
derived with precisely the same decay rate as in the continuous case. Optimal
error estimates with minimal smoothness assumptions on the initial data are
established, which preserve exponential decay rate, and for a 2D problem, the
maximum error bound is also proved. The present analysis is then generalized to
include the problems with non-homogeneous forcing function, space-dependent
damping, and problems with compensator. It is observed that decay rates are
improved with large viscous damping and compensator. Finally, some numerical
experiments are performed to validate the theoretical results established in
this paper. | P. Danumjaya, Anil Kumar, Amiya K. Pani | 2023-02-24T06:32:03Z | http://arxiv.org/abs/2302.12476v2 | Asymptotic behaviour of the semidiscrete FE approximations to weakly damped wave equations with minimal smoothness on initial data
###### Abstract
Exponential decay estimates of a general linear weakly damped wave equation are studied with decay rate lying in a range. Based on the \(C^{0}\)-conforming finite element method to discretize spatial variables keeping temporal variable continuous, a semidiscrete system is analysed, and uniform decay estimates are derived with precisely the same decay rate as in the continuous case. Optimal error estimates with minimal smoothness assumptions on the initial data are established, which preserve exponential decay rate, and for a 2D problem, the maximum error bound is also proved. The present analysis is then generalized to include the problems with non-homogeneous forcing function, space-dependent damping, and problems with compensator. It is observed that decay rates are improved with large viscous damping and compensator. Finally, some numerical experiments are performed to validate the theoretical results established in this paper.
**Keywords.** Weakly damped wave equation, uniform decay estimates, Galerkin finite elements, optimal error estimates, numerical experiments.
**Mathematics Subject Classification.** 65M60, 65M15, 35L20.
## 1 Introduction
This paper deals with uniform exponential decay rates for the semidiscrete finite element solution of the following weakly damped wave equation:
\[u^{\prime\prime}+\alpha\,u^{\prime}+Au=0,\;x\in\Omega,\;t>0 \tag{1.1}\]
with initial conditions
\[u(x,0)=u_{0}(x),\quad u_{t}(x,0)=u_{1}(x),\;x\in\Omega \tag{1.2}\]
and the boundary condition
\[u=0,\quad(x,t)\in\partial\Omega\times(0,\infty), \tag{1.3}\]
where \(u^{\prime}=\frac{\partial u}{\partial t}\), \(\Omega\) is a convex polygonal or polyhedral domain in \(\mathbb{R}^{d}\) with boundary \(\partial\Omega\), and \(\alpha\) is a fixed positive constant. Here, \(A\) is a second order linear elliptic operator given by
\[A\phi=-\sum_{i,j=1}^{d}\frac{\partial}{\partial x_{i}}\big{(}a_{ ij}(x)\frac{\partial\phi}{\partial x_{j}}\big{)}+a_{0}(x)\phi, \tag{1.4}\]
where the coefficients \(a_{ij}\) and \(a_{0}\) are smooth, with \(a_{0}(x)\geq 0\) and the matrix \(\{a_{ij}(x)\}_{1\leq i,j\leq d}\) uniformly positive definite.
The equation (1.1) is known as the damped wave or telegrapher's equation [11, 12], which arises in many applications such as acoustics, linear elasticity, electro-magnetics, heat transfer, and particle transport. Owing to these applications, the damped wave equation has attracted significant interest in the literature. For the existence of a weak solution with regularity results, using the Bubnov-Galerkin method and weak compactness arguments, we may refer to [26, Theorems 4.1-4.2 of Chapter II] and [1, Section 1.8].
In the literature, explicit nonuniform decay rates have been established using a control-theoretic method for weakly damped linear systems in Hilbert space, see, [23]. The decay rate for the problem (1.1) is given in terms of the first positive eigenvalue of the operator \(A\) and the weak damping coefficient \(\alpha>0\) in [26, Proposition 1.2 of Chapter 4]. In this paper, we prove a better decay rate not only for the first energy, but also for higher order energies. When the damping coefficient \(\alpha=\alpha(x)>0\), the decay rate involves the minimum and maximum of this coefficient and the first positive eigenvalue of \(A\), see [20]. For related papers, see also [5]-[7] and references therein. In all the papers mentioned above, a large damping coefficient does not necessarily give rise to a large decay rate, as the rate also depends on the minimum positive eigenvalue of the associated elliptic eigenvalue problem. Subsequently, Chen [8] developed and analysed improved decay rates by a new stabilization scheme that combines viscous damping and compensation. We shall discuss it in section 4.3 under generalizations.
We use the standard notation for Sobolev spaces and their norms. In particular, let \(L^{2}(\Omega)\) denote the space of square integrable functions on \(\Omega\) with natural inner product \((\cdot,\cdot)\) and induced norm \(\|\cdot\|\). For a nonnegative integer \(k\), let \(H^{k}\) denote the Hilbert Sobolev space \(H^{k}(\Omega)\) with norm \(\|\cdot\|_{k}\). Let
\[H^{1}_{0}(\Omega)=\{\phi\in H^{1}(\Omega):\phi=0\;\;\mbox{on}\;\partial\Omega\}.\]
With \(A\) in (1.4) as a linear self-adjoint and positive definite operator on \(L^{2}=:L^{2}(\Omega)\) with dense domain \(D(A)=H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\), we define \(\dot{H}^{r}=\dot{H}^{r}(\Omega)=:D(A^{r/2})\) as a subspace of \(H^{r}\) with norm \(|v|_{r}=\|A^{r/2}v\|\). Essentially, for \(r\geq 0\) such that \(r/2-1/4\) is not an integer, the space \(\dot{H}^{r}=\{v\in H^{r}:A^{j}v=0\;\mbox{on}\;\partial\Omega,\;\mbox{for}\;j<r/2-1/4\}\) and its norm is equivalent to the \(H^{r}\)-norm. In particular, \(\dot{H}^{1}=D(A^{1/2})=H^{1}_{0}\) and \(\dot{H}^{2}=H^{2}\cap H^{1}_{0}\).
In general, the uniform decay property of the continuous problem (1.1) may not be preserved by the approximate solution when standard numerical schemes are applied. This may be due to the existence of high frequency modes which are only weakly damped. Therefore, several stabilized methods have been developed and analysed, which give rise to the uniform decay property of the semidiscrete-in-space schemes, keeping the time variable continuous, see, [19] and [28] and references therein. It is to be noted that mixed finite element methods are also employed to preserve the uniform exponential decay property, see, [12]. This paper follows a different strategy to discuss the uniform decay property of the semidiscrete scheme when the \(C^{0}\)-conforming finite element method is applied in the spatial direction. The key to the success of the present scheme is energy arguments combined with a bound based on a Poincare type inequality, which provide a decay estimate in a range similar to the decay rate predicted by the continuous problem. However, the decay rate given by the present analysis may not be optimal, and this is due to non-conservative bounds in our estimates. The main contributions of this paper are as follows.
* The first part of this paper focusses on the problem (1.1)-(1.3); higher order in time regularity results are derived along with exponential decay properties using the energy arguments of [26, Theorems 4.1-4.2 of Chapter II] and [1, Section 1.8]. It is observed that the decay rate is obtained in a range involving the damping parameter \(\alpha\) and the first positive eigenvalue of the operator \(A.\) When \(u_{0}\in D(A^{(k)})\) and \(u_{1}\in D(A^{(k-1/2)})\), the corresponding energy \(2\mathcal{E}_{A^{(k)}}(u)=\|A^{(k-1/2)}u^{\prime}\|^{2}+\|A^{(k)}u\|^{2}\) decays exponentially with the same decay rate. In fact, it is observed that a large damping parameter does not necessarily yield a larger decay rate.
* Based on a \(C^{0}\)-conforming finite element (FE) discretization in the spatial variables, keeping the time variable continuous, a semidiscrete scheme is proposed, and exponential decay estimates, uniform with respect to the discretizing parameter, are derived.
* Optimal error estimates are established with minimal smoothness assumptions on the initial data, that is, when \(u_{0}\in H^{3}\cap H^{1}_{0}\) and \(u_{1}\in H^{2}\cap H^{1}_{0}\), which have the same decay rate as observed for the semidiscrete solution. When \(d=2\), the maximum norm estimate is also obtained with exactly the same decay rate.
* The analysis is then extended to include the nonhomogeneous problem and the problem with the space dependent viscous damping.
* Decay rates for the semidiscrete solution are improved by the new stabilization method combining viscous damping and a compensator. Compared to the first three items, the decay rates can be made larger by choosing a large damping parameter and a large compensator.
* Finally, several numerical experiments are conducted to confirm our theoretical findings.
We now emphasise here that Rauch [21] earlier initiated the discussion on the optimal order of convergence of the \(C^{0}\)-conforming linear FE method applied to a second order linear wave equation with minimal initial data, that is, \(u_{0}\in H^{3}\cap H^{1}_{0}\) and \(u_{1}=0\), see also [24], [25] and [17] and references therein.
An outline of this paper is as follows. In Section 2, we discuss weak formulation, regularity, and decay estimates for the continuous problem. Section 3 deals with the semidiscrete scheme. We establish decay estimates and optimal error estimates for the semidiscrete scheme. Section 4 is devoted to some generalizations involving inhomogeneous problems, space dependent damping problems, and problems with damping and compensator. Section 5 discusses a completely discrete scheme along with its energy conservation properties. Finally, several numerical experiments are conducted, whose results confirm our theoretical findings.
## 2 Weak formulation, Regularity results and Decay properties
This section deals with the weak formulation, some regularity results, and also the decay properties for the continuous problem.
Note that the operator \(A\) in (1.4) is, infact, a linear self-adjoint and positive definite operator on \(L^{2}=:L^{2}(\Omega)\) with dense domain \(D(A)=H^{2}(\Omega)\cap H^{1}_{0}(\Omega).\) Then, the problem (1.1)-(1.3) in its abstract form in \(L^{2}\) is to seek \(u(t)\in D(A)\) for \(t>0\) satisfying
\[u^{\prime\prime}+\alpha u^{\prime}+Au=0,\ t>0,\ \ u(0)=u_{0},\ \ u_{t}(0)=u_{1}. \tag{2.1}\]
Let us recall here the following Poincare inequalities for our subsequent use. For \(v\in D(A^{1/2})\)
\[\|v\|\leq\frac{1}{\sqrt{\lambda_{1}}}\|A^{1/2}v\|, \tag{2.2}\]
and for \(v\in D(A)=:H^{2}\cap H^{1}_{0},\)
\[\|A^{1/2}v\|\leq\frac{1}{\sqrt{\lambda_{1}}}\|Av\|, \tag{2.3}\]
where \(\lambda_{1}\) is the minimum positive eigenvalue of \(A.\)
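For orientation (this particular operator and domain are only an illustration, not an assumption of the analysis), if \(A=-\Delta\) with homogeneous Dirichlet boundary conditions on the unit square \((0,1)^{2}\), the eigenvalues are \((m^{2}+n^{2})\pi^{2}\), \(m,n\geq 1\), so that
\[\lambda_{1}=2\pi^{2}\approx 19.74,\qquad\|v\|\leq\frac{1}{\sqrt{2}\,\pi}\,\|\nabla v\|\quad\text{for }v\in H^{1}_{0}(\Omega),\]
and the admissible decay rates in Theorem 2.2 below satisfy \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\pi^{2}}{\alpha}\right)\).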
We now state the following theorem without proof on the existence of a unique weak solution, whose proof can be found in [18, Theorem 1.1], [26, Theorem 4.1 of Chapter II].
**Theorem 2.1**.: _Assume that \(u_{0}\in D(A)\) and \(u_{1}\in D(A^{1/2}).\) Then, the problem (1.1)-(1.3) admits a unique strong solution \(u\) satisfying_
\[u\in L^{\infty}\left(0,T;D(A)\right),\quad u^{\prime}\in L^{\infty}\left(0,T ;D(A^{1/2})\right),\quad u^{\prime\prime}\in L^{\infty}\left(0,T;L^{2}( \Omega)\right),\]
_and_
\[u^{\prime\prime}+\alpha\,u^{\prime}+Au=0,\ \text{ a.e.\ }t>0\]
_with_
\[u(0)=u_{0},\quad u_{t}(0)=u_{1}.\]
Now, the bilinear form \(a(\cdot,\cdot)\) on \(V=D(A^{1/2})\) associated with \(A\) is defined for \(v,w\in D(A^{1/2})\) by
\[a(v,w)=:(A^{1/2}v,A^{1/2}w):=\sum_{i,j=1}^{d}\left(a_{ij}\frac{\partial v}{ \partial x_{j}},\frac{\partial w}{\partial x_{i}}\right)+\left(a_{0}v,w\right). \tag{2.4}\]
Then, rewrite (2.1) as
\[(u^{\prime\prime},\chi)+\alpha(u^{\prime},\chi)+a(u,\chi)=0,\ \chi \in D(A^{1/2}), \tag{2.5}\] \[u(0)=u_{0},\quad\text{and}\quad u^{\prime}(0)=u_{1}. \tag{2.6}\]
### Decay Property
This subsection focuses on the decay properties for the continuous problem (1.1)-(1.3). Now, define the energy functional
\[\mathcal{E}^{(1)}(t)=\frac{1}{2}\left(\|u^{\prime}\|^{2}+\|A^{1/2}u\|^{2}\right). \tag{2.7}\]
**Theorem 2.2**.: _The solution \(u\) of (1.1)-(1.3) satisfies for \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\)_
\[\mathcal{E}^{(1)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}^{(1)}(0),\;t\geq 0. \tag{2.8}\]
Proof.: Set \(\chi=u^{\prime}+\epsilon u\) in (2.5) to obtain
\[(u^{\prime\prime},u^{\prime}+\epsilon u)+\alpha(u^{\prime},u^{\prime}+\epsilon u )+a\left(u,u^{\prime}+\epsilon u\right)=0, \tag{2.9}\]
and hence, rewrite it as
\[\frac{1}{2}\frac{d}{dt}\left(\|u^{\prime}\|^{2}+\|A^{1/2}u\|^{2}+\epsilon\, \alpha\|u\|^{2}\right)+\epsilon(u^{\prime\prime},u)+\alpha\|u^{\prime}\|^{2} +\epsilon\|A^{1/2}u\|^{2}=0. \tag{2.10}\]
A use of the energy (2.7) shows
\[\frac{d}{dt}\mathcal{E}^{(1)}(t)+\frac{\epsilon\,\alpha}{2}\frac{d}{dt}\|u\|^ {2}+\epsilon(u^{\prime\prime},u)+\alpha\|u^{\prime}\|^{2}+\epsilon\|A^{1/2}u \|^{2}=0. \tag{2.11}\]
Note that
\[\epsilon(u^{\prime\prime},u)=\epsilon\frac{d}{dt}(u^{\prime},u)-\epsilon\|u^ {\prime}\|^{2}. \tag{2.12}\]
Using (2.12) in (2.11), we obtain
\[\frac{d}{dt}\left(\mathcal{E}^{(1)}(t)+\epsilon(u^{\prime},u)+\frac{\alpha \epsilon}{2}\|u\|^{2}\right)+\left((\alpha-\epsilon)\|u^{\prime}\|^{2}+ \epsilon\|A^{1/2}u\|^{2}\right)=0. \tag{2.13}\]
Define
\[\mathcal{E}^{(1)}_{\epsilon}(t)=\mathcal{E}^{(1)}(t)+\epsilon(u^{\prime},u)+ \frac{\alpha\epsilon}{2}\|u\|^{2},\]
and
\[F(t)=(\alpha-\epsilon)\|u^{\prime}\|^{2}+\epsilon\|A^{1/2}u\|^{2},\]
to rewrite (2.13) as
\[\frac{d}{dt}\mathcal{E}^{(1)}_{\epsilon}(t)+F(t)=0. \tag{2.14}\]
Observe that
\[\mathcal{E}^{(1)}_{\epsilon}(t) = \mathcal{E}^{(1)}(t)+\epsilon(u^{\prime},u)+\frac{\alpha\epsilon }{2}\|u\|^{2}\] \[\geq \mathcal{E}^{(1)}(t)-\epsilon\|u^{\prime}\|\|u\|+\frac{\alpha \epsilon}{2}\|u\|^{2}.\]
Using the Young's inequality, we obtain
\[\mathcal{E}^{(1)}_{\epsilon}(t)\geq\mathcal{E}^{(1)}(t)-\epsilon\left(\frac{ \|u^{\prime}\|^{2}}{2\alpha}+\frac{\alpha}{2}\|u\|^{2}\right)+\frac{\alpha \epsilon}{2}\|u\|^{2}=\mathcal{E}^{(1)}(t)-\frac{\epsilon}{2\alpha}\|u^{ \prime}\|^{2}, \tag{2.15}\]
and hence,
\[\mathcal{E}^{(1)}_{\epsilon}(t)\geq\left(1-\frac{\epsilon}{\alpha}\right) \mathcal{E}^{(1)}(t). \tag{2.16}\]
Now, choose \(\epsilon\) in a suitable manner with \(\frac{\epsilon}{\alpha}\leq\frac{1}{2}\), i.e., \(0<\epsilon\leq\frac{\alpha}{2}\) to arrive at
\[\mathcal{E}_{\epsilon}^{(1)}(t)\geq\frac{1}{2}\mathcal{E}^{(1)}(t). \tag{2.17}\]
Again, recall the definition of \(\mathcal{E}_{\epsilon}^{(1)}(t)\) and use the Cauchy-Schwarz and Young's inequalities to find that
\[\mathcal{E}_{\epsilon}^{(1)}(t)\leq\frac{3}{2}\mathcal{E}^{(1)}(t)+\left( \frac{\epsilon}{2\alpha}-\frac{1}{4}\right)\|u^{\prime}\|^{2}-\frac{1}{4}\|A ^{1/2}u\|^{2}+\epsilon\alpha\|u\|^{2}. \tag{2.18}\]
Using Poincare inequality (2.2), the inequality (2.18) becomes
\[\mathcal{E}_{\epsilon}^{(1)}(t)\leq\frac{3}{2}\mathcal{E}^{(1)}(t)+\left( \frac{\epsilon}{2\alpha}-\frac{1}{4}\right)\|u^{\prime}\|^{2}+\left(\frac{ \epsilon\alpha}{\lambda_{1}}-\frac{1}{4}\right)\|A^{1/2}u\|^{2}. \tag{2.19}\]
In order to derive an estimate of the form \(\mathcal{E}_{\epsilon}^{(1)}(t)\leq\frac{3}{2}\mathcal{E}^{(1)}(t)\), we must have
\[\frac{\epsilon}{2\alpha}-\frac{1}{4}\leq 0,\quad\text{and}\quad\frac{\epsilon \alpha}{\lambda_{1}}-\frac{1}{4}\leq 0, \tag{2.20}\]
that is, choose
\[0<\epsilon\leq\frac{1}{2}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right).\]
Therefore, combining (2.17) and (2.20), we obtain
\[\frac{1}{2}\mathcal{E}^{(1)}(t)\leq\mathcal{E}_{\epsilon}^{(1)}(t)\leq\frac{3 }{2}\mathcal{E}^{(1)}(t), \tag{2.21}\]
provided \(0<\epsilon\leq\frac{1}{2}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\). A use of the definition of \(F\) and \(\mathcal{E}^{(1)}\) shows
\[F(t)=(\alpha-2\epsilon)\|u^{\prime}\|^{2}+2\epsilon\mathcal{E}^{(1)}(t).\]
Note that for \(0<\epsilon\leq\min\left(\frac{\alpha}{2},\frac{\lambda_{1}}{4\alpha}\right)\), \(\alpha-2\epsilon\geq 0\), and hence there holds
\[F(t)\geq 2\epsilon\mathcal{E}^{(1)}(t)\geq\frac{4\epsilon}{3}\mathcal{E}_{ \epsilon}^{(1)}(t).\]
Thus, from (2.13), we obtain
\[\frac{d}{dt}\mathcal{E}_{\epsilon}^{(1)}(t)+\frac{4\epsilon}{3}\mathcal{E}_{ \epsilon}^{(1)}(t)\leq\frac{d}{dt}\mathcal{E}_{\epsilon}^{(1)}(t)+F(t)=0,\]
which implies that
\[\frac{d}{dt}\mathcal{E}_{\epsilon}^{(1)}(t)+\frac{4\epsilon}{3}\mathcal{E}_{ \epsilon}^{(1)}(t)\leq 0.\]
An integration with respect to \(t\) implies
\[\mathcal{E}_{\epsilon}^{(1)}(t)\leq\mathcal{E}_{\epsilon}^{(1)}(0)\,e^{-\frac{ 4}{3}\epsilon t}\leq\frac{3}{2}\,\mathcal{E}^{(1)}(0)\,e^{-\frac{4}{3} \epsilon t},\]
and a use of (2.21) yields
\[\mathcal{E}^{(1)}(t)\leq 3\,\mathcal{E}^{(1)}(0)\,e^{-2\delta t}, \tag{2.22}\]
where \(\delta\in\left(0,\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha} \right)\right)\). This completes the rest of the proof.
The next theorem is on time derivatives.
**Theorem 2.3**.: _The solution \(u\) of (1.1)-(1.3) satisfies for \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\)_
\[\mathcal{E}^{(j)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}^{(j)}(0),\;j=1,2,3,\ldots, \;t\geq 0, \tag{2.23}\]
_where_
\[\mathcal{E}^{(j)}(t)=\frac{1}{2}\left(\|u^{(j)}\|^{2}+\|A^{1/2}u^{(j-1)}\|^{2 }\right),\]
_and \(u^{(j)}\) stands for \(j^{\mbox{th}}\) time derivative of \(u\)._
Proof.: For proving the result for \(\mathcal{E}^{(j)}\), we apply the induction hypothesis. Assume that the result is true for \(j-1\), that is
\[\mathcal{E}^{(j-1)}(t)\leq 3\,\mathcal{E}^{(j-1)}(0)\,e^{-2\delta t},\]
we shall show that the result (2.23) holds for \(j\). Differentiating (2.5) \((j-1)\) times with respect to \(t\), we obtain
\[(u^{(j+1)},v)+\alpha(u^{(j)},v)+a(u^{(j-1)},v)=0. \tag{2.24}\]
Write \(w=u^{(j-1)}\) in (2.24) and set \(v=w^{\prime}+\epsilon\,w\) in the resulting equation to arrive at
\[\frac{d}{dt}\mathcal{E}^{(1)}_{\epsilon}(w)(t)+F(w)(t)=0, \tag{2.25}\]
where
\[\mathcal{E}^{(1)}_{\epsilon}(w)(t)=\mathcal{E}^{(1)}(w)(t)+\epsilon(w^{ \prime},w)+\frac{\epsilon\alpha}{2}\|w\|^{2},\;\;\mbox{and}\;\;F(w)(t)=(\alpha -\epsilon)\|w^{\prime}\|^{2}+\epsilon\|A^{1/2}w\|^{2}.\]
Since equation (2.25) is similar to the equation (2.14) when \(u\) is replaced by \(w\), on repeating the same arguments as earlier, we obtain
\[\mathcal{E}^{(1)}(w)(t)\leq 3\,\mathcal{E}^{(1)}(w)(0)\,e^{-2\delta t}. \tag{2.26}\]
Now replacing \(w=u^{(j-1)}\) in (2.26), we arrive at
\[\mathcal{E}^{(1)}(u^{(j-1)})(t)\leq 3\,\mathcal{E}^{(1)}(u^{(j-1)})(0)\,e^{-2\delta t},\]
which can be written as
\[\mathcal{E}^{(j)}(t)\leq 3\,\mathcal{E}^{(j)}(0)\,e^{-2\delta t}. \tag{2.27}\]
This completes the induction. Therefore, (2.23) holds true and this completes the rest of the proof of the theorem.
**Theorem 2.4**.: _The solution \(u\) of (1.1)-(1.3) satisfies for \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\)_
\[\mathcal{E}^{(1)}_{A}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}^{(1)}_{A}(0),\;t\geq 0, \tag{2.28}\]
_where_
\[\mathcal{E}^{(1)}_{A}(t)=\frac{1}{2}\left(\|A^{1/2}u^{\prime}(t)\|^{2}+\|Au(t)\|^{2}\right).\]
Proof.: The analysis closely follows the proof technique of Theorem 2.2. Forming an inner product between (1.1) and \(Au^{\prime}\), we arrive at
\[(u^{\prime\prime},Au^{\prime})+\alpha(u^{\prime},Au^{\prime})+(Au,Au^{\prime} )=0,\]
and hence,
\[\frac{d}{dt}\mathcal{E}^{(1)}_{A}(t)+\alpha\|A^{1/2}u^{\prime}(t)\|^{2}=0. \tag{2.29}\]
Next, take an inner product between (1.1) and \(\epsilon Au\) to obtain
\[\epsilon\frac{d}{dt}(A^{1/2}u^{\prime},A^{1/2}u)+\frac{\epsilon\alpha}{2}\frac{d}{dt}\|A^{1/2}u(t)\|^{2}-\epsilon\|A^{1/2}u^{\prime}(t)\|^{2}+\epsilon\|Au(t)\|^{2}=0. \tag{2.30}\]
Adding (2.29) with (2.30), we find that
\[\frac{d}{dt}\mathcal{E}^{(1)}_{A,\epsilon}+(\alpha-\epsilon)\|A^{1/2}u^{\prime}(t )\|^{2}+\epsilon\|Au(t)\|^{2}=0, \tag{2.31}\]
where
\[\mathcal{E}^{(1)}_{A,\epsilon}(t)=\mathcal{E}^{(1)}_{A}(t)+\epsilon\left(A^{1 /2}u^{\prime}(t),A^{1/2}u(t)\right)+\frac{\alpha\,\epsilon}{2}\|A^{1/2}u(t)\|^ {2}.\]
We now proceed exactly as in the proof of Theorem 2.2, replacing \(\mathcal{E}^{(1)}\) by \(\mathcal{E}^{(1)}_{A}\), \(\mathcal{E}^{(1)}_{\epsilon}\) by \(\mathcal{E}^{(1)}_{A,\epsilon}\), and using the Poincare inequality (2.3), to arrive at
\[\mathcal{E}^{(1)}_{A}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}^{(1)}_{A}(0),\]
whenever \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\). This completes the rest of the proof.
**Remark 2.1**.: _Since_
\[\|Au(t)\|^{2}\leq 6e^{-2\delta t}\mathcal{E}^{(1)}_{A}(0)=3e^{-2\delta t}\left(\|A^{1/2}u_{1}\|^{2}+\|Au_{0}\|^{2}\right)\leq Ce^{-2\delta t}\left(\|u_{1}\|_{1}^{2}+\|u_{0}\|_{2}^{2}\right).\]
_A use of elliptic regularity yields \(\|Au(t)\|\geq C_{R}\|u(t)\|_{2}\). Hence, using the Sobolev embedding result, we obtain_
\[\|u(t)\|_{L^{\infty}}\leq C\|u(t)\|_{2}\leq Ce^{-\delta t}\left(\|u_{1}\|_{1}+ \|u_{0}\|_{2}\right). \tag{2.32}\]
**Remark 2.2**.: _Following the proof technique of Theorem 2.4, the following result_
\[\mathcal{E}^{(j)}_{A}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}^{(j)}_{A}(0), \tag{2.33}\]
_where_
\[\mathcal{E}^{(j)}_{A}(t)=\frac{1}{2}\left(\|A^{1/2}u^{(j)}(t)\|^{2}+\|Au^{(j-1 )}(t)\|^{2}\right), \tag{2.34}\]
_can be proved by using induction hypothesis._
**Remark 2.3**.: _Assume that \(u_{0}\in D(A^{(k)})\) and \(u_{1}\in D(A^{(k-1/2)})\) for \(k>1.\) Then, following the arguments in Theorem 2.2-2.4 and using induction, there holds:_
\[\mathcal{E}^{(j)}_{A^{(k)}}\leq 3\,e^{-2\delta t}\,\mathcal{E}^{(j)}_{A^{(k)}}(0),\]
_where \(\mathcal{E}^{(j)}_{A^{(k)}}=\frac{1}{2}\left(\|A^{(k-1/2)}u^{(j)}(t)\|^{2}+ \|A^{(k)}u^{(j-1)}(t)\|^{2}\right).\)_
## 3 Semidiscrete scheme
This section analyses the semidiscrete method for the problem (1.1)-(1.3), see, [2], [10]; and discusses decay estimates along with optimal error estimates.
Let \(\{S^{0}_{h}\}_{h>0}\) be a family of subspaces of \(H^{1}_{0}\) with the following approximation property:
\[\inf_{\chi\in S^{0}_{h}}\left(\|v-\chi\|+h\|v-\chi\|_{1}\right)\leq C\,h^{r}\,\|v\|_{r},\ \ \text{for}\ v\in H^{r}\cap H^{1}_{0}. \tag{3.1}\]
The semidiscrete formulation is to find \(u_{h}:[0,\infty)\to S^{0}_{h}\) such that
\[(u^{\prime\prime}_{h}(t),\chi)+\alpha(u^{\prime}_{h}(t),\chi)+a(u _{h}(t),\chi)=0,\;\chi\in S^{0}_{h}, \tag{3.2}\] \[u_{h}(0)=u_{0,h},\;\text{and}\;u^{\prime}_{h}(0)=u_{1,h}, \tag{3.3}\]
where \(u_{0,h}\) and \(u_{1,h}\) are appropriate approximations of \(u_{0}\) and \(u_{1}\), respectively, in \(S^{0}_{h}\), to be defined later. Since \(S^{0}_{h}\) is finite dimensional, (3.2) gives rise to a system of linear ODEs. An application of Picard's theorem yields the existence of a unique discrete solution \(u_{h}(t)\in S^{0}_{h},\;t\in(0,\infty)\).
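In matrix form (a standard reformulation, stated here only for clarity), writing \(u_{h}(t)=\sum_{i=1}^{N}\xi_{i}(t)\phi_{i}\) for a basis \(\{\phi_{i}\}_{i=1}^{N}\) of \(S^{0}_{h}\), the scheme (3.2) reads
\[M\,\xi^{\prime\prime}(t)+\alpha M\,\xi^{\prime}(t)+K\,\xi(t)=0,\qquad M_{ij}=(\phi_{j},\phi_{i}),\quad K_{ij}=a(\phi_{j},\phi_{i}),\]
with \(\xi(0)\), \(\xi^{\prime}(0)\) the coefficient vectors of \(u_{0,h}\), \(u_{1,h}\); since the mass matrix \(M\) is symmetric positive definite, the system is uniquely solvable for all \(t>0\).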
Let us first define a discrete counterpart \(A_{h}:S^{0}_{h}\mapsto S^{0}_{h}\) of the operator \(A\) as
\[(A_{h}v_{h},\chi)=a(v_{h},\chi)\ \ \forall v_{h},\chi\in S^{0}_{h}. \tag{3.4}\]
Then, we rewrite (3.2) as
\[u^{\prime\prime}_{h}+\alpha u^{\prime}_{h}+A_{h}u_{h}=0,\ \ t>0 \tag{3.5}\]
### Decay Property
This subsection discusses the decay estimates for the solution of semidiscrete equation. Now, define the energy functional as
\[\mathcal{E}_{h}^{(1)}(t)=\frac{1}{2}\left(\|u_{h}^{\prime}\|^{2}+\|A_{h}^{1/2}u_{ h}\|^{2}\right),\]
where \(\|A_{h}^{1/2}u_{h}\|^{2}:=a(u_{h},u_{h}).\)
**Theorem 3.1**.: _For \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\), the solution \(u_{h}\) of (3.2)-(3.3) satisfies the following decay property_
\[\mathcal{E}_{h}^{(1)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}_{h}^{(1)}(0),\ t \geq 0.\]
Proof.: A use of \(\chi=u_{h}^{\prime}(t)+\epsilon\,u_{h}(t)\) in (3.2) yields
\[\frac{d}{dt}\left(\mathcal{E}_{h}^{(1)}(t)+\epsilon(u_{h}^{\prime},u_{h})+ \frac{\alpha\epsilon}{2}\|u_{h}(t)\|^{2}\right)+(\alpha-\epsilon)\|u_{h}^{ \prime}(t)\|^{2}+\epsilon\|A_{h}^{1/2}u_{h}(t)\|^{2}=0. \tag{3.6}\]
Since \(u_{h}(t)\in S_{h}^{0}\subset H_{0}^{1},\) then by Poincare inequality (2.2)
\[\|u_{h}(t)\|\leq\frac{1}{\sqrt{\lambda_{1}}}\|\nabla u_{h}(t)\|. \tag{3.7}\]
We then proceed exactly like the proof of the Theorem 2.2 replacing \(u\) by \(u_{h}\) to obtain
\[\mathcal{E}_{h}^{(1)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}_{h}^{(1)}(0),\ t \geq 0.\]
This completes the rest of the proof.
**Theorem 3.2**.: _For \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\), the solution \(u_{h}\) of (3.2)-(3.3) satisfies_
\[\mathcal{E}_{h}^{(j)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}_{h}^{(j)}(0),\ j=1,2, 3,\ldots,\ t\geq 0,\]
_where_
\[\mathcal{E}_{h}^{(j)}(t)=\frac{1}{2}\left(\|u_{h}^{(j)}\|^{2}+\|A_{h}^{1/2}u_ {h}^{(j-1)}\|^{2}\right).\]
Proof.: We prove the result for \(\mathcal{E}_{h}^{(j)}\) by induction. Assume that the result is true for \(j-1\), that is,
\[\mathcal{E}_{h}^{(j-1)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}_{h}^{(j-1)}(0).\]
We now consider
\[(u_{h}^{(j+1)},\chi)+\alpha(u_{h}^{(j)},\chi)+a(u_{h}^{(j-1)},\chi)=0.\]
Choose \(w_{h}=u_{h}^{(j-1)}\) and \(\chi=w_{h}^{\prime}+\epsilon\,w_{h}\) and follow similar steps like proof of Theorem 2.3 replacing \(\mathcal{E}\) by \(\mathcal{E}_{h}\) to obtain
\[\mathcal{E}_{h}^{(j)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}_{h}^{(j)}(0),\ j=1,2,3, \ldots,\ t\geq 0.\]
This completes the rest of the proof.
**Remark 3.1**.: _If \(\|A_{h}^{1/2}u_{0h}\|\leq C\|u_{0}\|_{1}\) and \(\|u_{1h}\|\leq C\|u_{1}\|\) then_
\[\|A_{h}^{1/2}u_{h}(t)\|\leq Ce^{-\delta t}\left(\|u_{1h}\|+\|A_{h}^{1/2}u_{0h} \|\right)\leq Ce^{-\delta t}\left(\|u_{1}\|+\|u_{0}\|_{1}\right).\]
_Observe that using coercivity property of the bilinear form \(\|A_{h}^{1/2}u_{h}\|^{2}=a(u_{h},u_{h})\geq\alpha_{0}\|\nabla u_{h}\|^{2},\) we arrive at_
\[\|\nabla u_{h}(t)\|\leq Ce^{-\delta t}\left(\|u_{1}\|+\|u_{0}\|_{1}\right).\]
_As a consequence of the Sobolev embedding for \(d=2\), see, [27]_
\[\|u_{h}(t)\|_{L^{\infty}}\leq C\left(\log\left(\frac{1}{h}\right)\right)\| \nabla u_{h}(t)\|\leq C\left(\log\left(\frac{1}{h}\right)\right)e^{-\delta t }\left(\|u_{1}\|+\|\nabla u_{0}\|\right).\]
**Theorem 3.3**.: _For the energy_
\[\mathcal{E}_{h}^{(0)}(t)=\frac{1}{2}\left(\|u_{h}(t)\|^{2}+\|A_{h}^{1/2}\hat{u}_{ h}(t)\|^{2}\right), \tag{3.8}\]
_where \(\hat{u}_{h}(t)=\int_{0}^{t}u_{h}(s)\,ds,\) the following decay estimate holds_
\[\mathcal{E}_{h}^{(0)}(t)\leq 3\,e^{-2\,\delta\,t}\,\mathcal{E}_{h}^{(0)}(0),\, \,\delta\in\left(0,\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha} \right)\right).\]
Proof.: Integrate the equation (3.2) from \(0\) to \(t\) on both sides to obtain
\[(u_{h}^{\prime},\chi)+\alpha(u_{h},\chi)+a(\hat{u}_{h},\chi)=(u_{1,h},\chi)+ \alpha(u_{0,h},\chi). \tag{3.9}\]
Choose \(\chi=u_{h}+\epsilon\hat{u}_{h}\) in (3.9) to obtain
\[\frac{d}{dt}\mathcal{E}_{h}^{(0)}(t)+\alpha\|u_{h}\|^{2}+ \epsilon\|A_{h}^{1/2}\hat{u}_{h}\|^{2} + \epsilon(u_{h}^{\prime},\hat{u}_{h})+\frac{\alpha\epsilon}{2} \frac{d}{dt}\|\hat{u}_{h}\|^{2}=(u_{1,h},u_{h})+\alpha(u_{0,h},u_{h}) \tag{3.10}\] \[+\epsilon(u_{1,h},\hat{u}_{h})+\epsilon\alpha(u_{0,h},\hat{u}_{h}).\]
Using the fact \(\epsilon(u_{h}^{\prime},\hat{u}_{h})=\epsilon\frac{d}{dt}(u_{h},\hat{u}_{h})- \epsilon\|u_{h}\|^{2},\) we arrive at
\[\frac{d}{dt}\mathcal{E}_{h}^{(0)}(t)+(\alpha-\epsilon)\|u_{h}\|^ {2} + \epsilon\|A_{h}^{1/2}\hat{u}_{h}\|^{2}+\epsilon\frac{d}{dt}(u_{h}, \hat{u}_{h})+\frac{\alpha\epsilon}{2}\frac{d}{dt}\|\hat{u}_{h}\|^{2}\] \[=(u_{1,h},u_{h})+\alpha(u_{0,h},u_{h})+\epsilon(u_{1,h},\hat{u}_{ h})+\epsilon\alpha(u_{0,h},\hat{u}_{h}).\]
We rewrite the above equation as
\[\frac{d}{dt}\left(\mathcal{E}_{h}^{(0)}(t)+\epsilon(u_{h},\hat{u }_{h})+\frac{\alpha\epsilon}{2}\|\hat{u}_{h}\|^{2}\right)+(\alpha-\epsilon)\| u_{h}\|^{2}+\epsilon\|A_{h}^{1/2}\hat{u}_{h}\|^{2}\] \[=(u_{1,h},u_{h})+\alpha(u_{0,h},u_{h})+\epsilon(u_{1,h},\hat{u}_{ h})+\epsilon\alpha(u_{0,h},\hat{u}_{h}). \tag{3.11}\]
Define
\[\mathcal{E}_{h\epsilon}^{(0)}(t)=\left(\mathcal{E}_{h}^{(0)}(t)+\epsilon(u_{h },\hat{u}_{h})+\frac{\alpha\epsilon}{2}\|\hat{u}_{h}\|^{2}\right),\]
and then proceed in a similar manner exactly like the proof of Theorem 2.2 to obtain the required result. This concludes the rest of the proof.
**Theorem 3.4**.: _For \(0<\delta\leq\frac{1}{3}\min\left(\alpha,\frac{\lambda_{1}}{2\alpha}\right)\), the solution \(u_{h}\) of (3.2)-(3.3) satisfies_
\[\mathcal{E}_{A_{h}}^{(1)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}_{A_{h}}^{(1)}(0), \,\,t\geq 0,\]
_where_
\[\mathcal{E}_{A_{h}}^{(1)}(t)=\frac{1}{2}\left(\|A_{h}^{1/2}u_{h}^{\prime}(t)\|^{2}+\|A_{h}u_{h}(t)\|^{2}\right).\]
Proof.: Taking the inner product of equation (3.5) with \(A_{h}u_{h}^{\prime}+\epsilon A_{h}u_{h}\), we obtain
\[\frac{d}{dt}\left(\mathcal{E}_{A_{h}}^{(1)}+\epsilon(A_{h}^{1/2}u_{h}^{\prime},A_{h}^{1/2}u_{h})+\frac{\alpha\,\epsilon}{2}\|A_{h}^{1/2}u_{h}\|^{2}\right)+( \alpha-\epsilon)\|A_{h}^{1/2}u_{h}^{\prime}\|^{2}+\epsilon\|A_{h}u_{h}\|^{2}=0.\]
We then proceed exactly as in the proof of Theorem 2.2, noting that for \(v_{h}\in S_{h}^{0}\), the Poincare inequality (3.7) yields
\[\|A_{h}^{1/2}v_{h}\|^{2}=(A_{h}v_{h},v_{h}) \leq \|A_{h}v_{h}\|\,\|v_{h}\|\] \[\leq \frac{1}{\sqrt{\lambda_{1}}}\|A_{h}v_{h}\|\,\|A_{h}^{1/2}v_{h}\|,\]
that is, \(\|A_{h}^{1/2}v_{h}\|\leq\frac{1}{\sqrt{\lambda_{1}}}\|A_{h}v_{h}\|\) and obtain
\[\mathcal{E}_{A_{h}}^{(1)}(t)\leq 3\,e^{-2\delta t}\,\mathcal{E}_{A_{h}}^{(1)}(0),\,\,t\geq 0.\]
This completes the rest of the proof.
### Error estimates.
This subsection deals with optimal error estimates for the semidiscrete scheme. Throughout this subsection, we shall use \(r=2\), that is, \(S^{0}_{h}\) consisting of \(C^{0}\)-conforming piecewise linear elements; and for general \(r>2\), all the ensuing results hold under assumptions of higher regularity on the exact solution.
Let \(R_{h}u\) be the elliptic projection of \(u\) defined by
\[a(u-R_{h}u,\chi)=0,\;\forall\;\chi\in S^{0}_{h}. \tag{3.12}\]
We split the error as
\[e:=u-u_{h}=(u-R_{h}u)+(R_{h}u-u_{h}):=\eta+\theta. \tag{3.13}\]
Note that \(a(\cdot,\cdot)\) satisfies the boundedness and coercivity properties. Setting \(\eta=u-R_{h}u\), then the following estimates are easy to obtain
\[\|\eta\|_{j}+\|\eta_{t}\|_{j} \leq Ch^{r+1-j}\left(\sum_{m=0}^{1}\left\|\frac{\partial^{m}u}{ \partial t^{m}}\right\|_{r+1}\right),\;j=0,1. \tag{3.14}\]
For details, see, [4].
We subtract the equation (3.2) from (2.5), and using the elliptic projection (3.12), we obtain the error equation in \(\theta\) as
\[(\theta^{\prime\prime},\chi)+\alpha(\theta^{\prime},\chi)+a(\theta,\chi)=-( \eta^{\prime\prime},\chi)-\alpha(\eta^{\prime},\chi),\forall\,\chi\in S^{0}_{ h}. \tag{3.15}\]
**Lemma 3.1**.: _Let \(\theta\) satisfies (3.15). Then, there holds_
\[\mathcal{E}^{1}(\theta)(t)\leq 3e^{-2\delta t}\mathcal{E}^{1}(\theta)(0)+ \left(\frac{1}{\alpha}+\frac{\epsilon}{\lambda_{1}}\right)\int_{0}^{t}e^{-2 \delta(t-s)}\left(\|\eta^{\prime\prime}\|^{2}+\alpha\|\eta^{\prime}\|^{2} \right)\,ds.\]
**Proof.** Setting \(\chi=\theta^{\prime}\) in (3.15), we obtain
\[\frac{d}{dt}\mathcal{E}^{1}(\theta)(t)+\alpha\|\theta^{\prime}(t)\|^{2}=-( \eta^{\prime\prime},\theta^{\prime})-\alpha(\eta^{\prime},\theta^{\prime}). \tag{3.16}\]
A use of the Cauchy-Schwarz inequality with the Young's inequality in (3.16) shows
\[\frac{d}{dt}\mathcal{E}^{1}(\theta)(t)+\alpha\|\theta^{\prime}(t)\|^{2}\leq \left(\frac{1}{\alpha}\|\eta^{\prime\prime}(t)\|^{2}+\alpha\|\eta^{\prime}(t) \|^{2}\right)+\frac{\alpha}{2}\|\theta^{\prime}(t)\|^{2}, \tag{3.17}\]
and hence,
\[\frac{d}{dt}\mathcal{E}^{1}(\theta)(t)+\frac{\alpha}{2}\|\theta^{\prime}(t)\| ^{2}\leq\left(\frac{1}{\alpha}\|\eta^{\prime\prime}(t)\|^{2}+\alpha\|\eta^{ \prime}(t)\|^{2}\right). \tag{3.18}\]
Substituting \(\chi=\epsilon\,\theta\) in (3.15), we obtain
\[\epsilon(\theta^{\prime\prime},\theta)+\epsilon\alpha(\theta^{\prime},\theta) +\epsilon a(\theta,\theta)=-\epsilon(\eta^{\prime\prime},\theta)-\epsilon \alpha(\eta^{\prime},\theta). \tag{3.19}\]
Apply again the Cauchy-Schwarz inequality in (3.19) to arrive at
\[\epsilon(\theta^{\prime\prime},\theta)+\frac{\epsilon\alpha}{2}\frac{d}{dt}\|\theta\|^{2}+\epsilon\|A_{h}^{1/2}\theta\|^{2}\leq\epsilon\left(\|\eta^{\prime\prime}\|+\alpha\|\eta^{\prime}\|\right)\|\theta\|. \tag{3.20}\]
A use of \(\epsilon(\theta^{\prime\prime},\theta)=\epsilon\frac{d}{dt}(\theta^{\prime}, \theta)-\epsilon\|\theta^{\prime}\|^{2}\) with the Poincare inequality and the Young's inequality yields
\[\epsilon\frac{d}{dt}\left((\theta^{\prime},\theta)+\frac{\alpha}{2}\,\|\theta\|^{2}\right)+\epsilon\|A_{h}^{1/2}\theta\|^{2}-\epsilon\,\|\theta^{\prime}\|^{2}\leq\frac{\epsilon}{\lambda_{1}}\left(\|\eta^{\prime\prime}\|^{2}+\alpha\|\eta^{\prime}\|^{2}\right)+\frac{\epsilon}{2}\|A_{h}^{1/2}\theta\|^{2}. \tag{3.21}\]
Set
\[\mathcal{E}^{1}_{\epsilon}(\theta)=\mathcal{E}^{1}(\theta)+\epsilon(\theta^{ \prime},\theta)+\frac{1}{2}\alpha\,\epsilon\|\theta(t)\|^{2},\]
to arrive at
\[\frac{d}{dt}\mathcal{E}^{1}_{\epsilon}(\theta)(t)+\left(\frac{\alpha}{2}-\epsilon \right)\|\theta^{\prime}(t)\|^{2}+\frac{\epsilon}{2}\|A_{h}^{1/2}\theta\|^{2} \leq\left(\frac{1}{\alpha}+\frac{\epsilon}{\lambda_{1}}\right)\|\eta^{\prime \prime}(t)\|^{2}+\alpha\left(1+\frac{\epsilon}{\lambda_{1}}\right)\|\eta^{ \prime}(t)\|^{2}. \tag{3.22}\]
Denote \(F(t)=\left(\frac{\alpha}{2}-\epsilon\right)\|\theta^{\prime}(t)\|^{2}+\frac{\epsilon}{2}\|A_{h}^{1/2}\theta\|^{2}\), and obtain
\[F(t)=\epsilon\mathcal{E}^{1}(\theta)(t)+\frac{1}{2}\left(\alpha-3\epsilon\right)\|\theta^{\prime}\|^{2}.\]
With
\[0<\epsilon\leq\min\left(\frac{\alpha}{3},\frac{\lambda_{1}}{4\alpha}\right),\quad\frac{1}{2}(\alpha-3\epsilon)\geq 0,\]
it follows that
\[F(t)\geq\epsilon\,\mathcal{E}^{1}(\theta)(t)\geq\frac{2}{3}\epsilon\,\mathcal{E}^{1}_{\epsilon}(\theta)(t).\]
On substitution, we arrive at
\[\frac{d}{dt}\mathcal{E}^{1}_{\epsilon}(\theta)(t)+\frac{2}{3}\epsilon\mathcal{E}^{1}_{\epsilon}(\theta)(t)\leq\left(\frac{1}{\alpha}+\frac{\epsilon}{\lambda_{1}}\right)\|\eta^{\prime\prime}(t)\|^{2}+\alpha\left(1+\frac{\epsilon}{\lambda_{1}}\right)\|\eta^{\prime}(t)\|^{2}. \tag{3.23}\]
We rewrite the above equation (3.23) as
\[\frac{d}{dt}\left(e^{\frac{2}{3}\epsilon t}\,\mathcal{E}^{1}_{\epsilon}(\theta)(t)\right)\leq\left(\frac{1}{\alpha}+\frac{\epsilon}{\lambda_{1}}\right)e^{\frac{2}{3}\epsilon t}\|\eta^{\prime\prime}(t)\|^{2}+\alpha\left(1+\frac{\epsilon}{\lambda_{1}}\right)e^{\frac{2}{3}\epsilon t}\|\eta^{\prime}(t)\|^{2}. \tag{3.24}\]
On integration from \(0\) to \(t\), it follows that
\[\mathcal{E}^{1}_{\epsilon}(\theta)(t)\leq e^{-\frac{2}{3}\epsilon t}\mathcal{E}^{1}_{\epsilon}(\theta)(0)+\left(\frac{1}{\alpha}+\frac{\epsilon}{\lambda_{1}}\right)\int_{0}^{t}e^{-\frac{2}{3}\epsilon(t-s)}\|\eta^{\prime\prime}(s)\|^{2}\,ds+\alpha\left(1+\frac{\epsilon}{\lambda_{1}}\right)\int_{0}^{t}e^{-\frac{2}{3}\epsilon(t-s)}\|\eta^{\prime}(s)\|^{2}\,ds. \tag{3.25}\]
With \(2\delta=\frac{2}{3}\epsilon\), that is, \(\delta=\frac{1}{3}\epsilon\), and using the equivalence of \(\mathcal{E}^{1}_{\epsilon}\) and \(\mathcal{E}^{1}\), we complete the rest of the proof. \(\blacksquare\)
**Remark 3.2**.: _When \(u_{0h}=R_{h}u_{0},\) then \(\theta(0)=0\) and therefore,_
\[\mathcal{E}^{1}(\theta)(0)=\frac{1}{2}\|\theta^{\prime}(0)\|^{2}.\]
_With \(u_{1h}\) either \(L^{2}\) projection or interpolant of \(u_{1}\) in \(S^{0}_{h}\), we obtain_
\[\mathcal{E}^{1}(\theta)(0)\leq Ch^{4}\|u_{1}\|_{2}^{2}.\]
_Therefore, we arrive at the following superconvergent result for \(\|A_{h}^{1/2}\theta(t)\|\)_
\[\|\theta^{\prime}(t)\|^{2}+\|A_{h}^{1/2}\theta(t)\|^{2}\leq Ch^{4}e^{-2\delta t }\left(\|u_{1}\|_{2}^{2}+\int_{0}^{t}e^{2\delta s}\left(\|u^{\prime\prime}(s) \|_{2}^{2}+\|u^{\prime}(s)\|_{2}^{2}\right)\,ds\right).\]
_Since from Remark 2.3 with \(k=1\) and \(j=3,\) there holds_
\[\|u^{\prime\prime}(s)\|_{2}^{2} \leq 6\,e^{-2\delta t}\,\mathcal{E}^{(3)}_{A^{(1)}}(0) \tag{3.26}\] \[\leq 3\,e^{-2\delta t}\left(\|A^{1/2}u^{(3)}(0)\|^{2}+\|Au^{(2)}(0)\|^ {2}\right)\]
_and_
\[\|u^{\prime}(s)\|_{2}^{2} \leq 6\,e^{-2\delta t}\,\mathcal{E}^{(2)}_{A^{(1)}}(0)\]
\[\|\theta(t)\|_{L^{\infty}}\leq Ch^{2}\left(\log\left(\frac{1}{h}\right)\right)\|u(t )\|_{W^{2,\infty}}\leq Ch^{2}\left(\log\left(\frac{1}{h}\right)\right)\big{(}\|u_ {0}\|_{W^{2,\infty}}+\|u_{1}\|_{W^{1,\infty}}\big{)},\]
then, when \(d=2\)
\[\|u(t)-u_{h}(t)\|_{L^{\infty}}\leq Ch^{2}\left(\log\left(\frac{1}{h}\right) \right)(1+t)^{1/2}e^{-\delta t}\big{(}\|u_{0}\|_{4}+\|u_{1}\|_{3}\big{)}, \tag{3.32}\]
provided \(\|u(t)\|_{W^{2,\infty}}=O\left(e^{-\delta t}\right)\).
From the superconvergence result for \(\|\nabla\theta(t)\|\) in (3.30), one obtains an estimate of \(\|\theta(t)\|\), but under the assumption of higher regularity, that is, \(u_{0}\in H^{4}\cap H^{1}_{0}\) and \(u_{1}\in H^{3}\cap H^{1}_{0}\), and only with \(u_{0h}=R_{h}u_{0}\).
Below, we directly deduce, using a modified version of Baker's arguments [2], an optimal error estimate of \(\|u(t)-u_{h}(t)\|\), requiring \(\|u^{\prime}(t)\|=O\left(e^{-2\delta t}\right)\) and \(u_{0h}\) taken as the \(L^{2}\)-projection or interpolant of \(u_{0}\) onto \(S^{0}_{h}\).
**Theorem 3.6**.: _Let \(u\) and \(u_{h}\) be a solution of (2.5) and (3.2), respectively. Then, there exists a positive constant \(C\) independent of \(h\) such that_
\[\|u(t)-u_{h}(t)\|\leq Ch^{2}\left(1+t\right)^{1/2}e^{-\delta t}\big{(}\|u_{0}\|_ {3}+\|u_{1}\|_{2}\big{)}.\]
**Proof.** Integrate (3.15) with respect to \(t\) and obtain
\[(\theta^{\prime}(t),\chi)+\alpha\left(\theta(t),\chi\right)+a(\hat{\theta}(t ),\chi)=(e^{\prime}(0),\chi)+\alpha(e(0),\chi)-(\eta^{\prime},\chi)-\alpha \left(\eta,\chi\right). \tag{3.33}\]
With a choice of \(u_{0h}\) and \(u_{1h}\) as \(L^{2}\)-projection of \(u_{0}\) and \(u_{1}\), respectively, i.e.,
\[(e^{\prime}(0),\chi)=0,\quad\text{and}\quad(e(0),\chi)=0.\]
Set \(\chi=\theta\) in (3.33) to arrive at
\[\frac{1}{2}\frac{d}{dt}\|\theta(t)\|^{2}+\alpha\|\theta(t)\|^{2}+\frac{1}{2} \frac{d}{dt}\|A_{h}^{1/2}\hat{\theta}(t)\|^{2}=-(\eta^{\prime},\theta)-\alpha \left(\eta,\theta\right). \tag{3.34}\]
A use of the Cauchy-Schwarz inequality with the Young's inequality in (3.34), we obtain
\[\frac{1}{2}\frac{d}{dt}\|\theta(t)\|^{2}+\alpha\|\theta(t)\|^{2}+\frac{1}{2} \frac{d}{dt}\|A_{h}^{1/2}\hat{\theta}(t)\|^{2}\leq C\left(\|\eta\|^{2}+\|\eta^ {\prime}\|^{2}\right)+\alpha\|\theta(t)\|^{2}. \tag{3.35}\]
Integrating on both sides of the equation (3.35) from \(0\) to \(t\) and substituting the estimates of \(\|\eta\|\) and \(\|\eta^{\prime}\|\), we obtain the estimate for \(\theta(t)\). Substituting (3.14) in (3.13), and a use of triangle inequality completes the rest of the proof. \(\blacksquare\)
## 4 Some Generalizations
In the previous sections, we have discussed exponential decay estimates of a general linear weakly damped wave equation. We now present generalizations of our results to the weakly damped wave equation with a nonhomogeneous forcing function, a space-dependent damping coefficient, viscous damping with compensation, and to weakly damped beam equations.
### Inhomogeneous equations
This subsection is on the weakly damped wave equation with nonhomogeneous forcing function:
\[u^{\prime\prime}+\alpha\,u^{\prime}+Au=f,\;(x,y)\in\Omega,\;t>0, \tag{4.1}\]
with initial conditions
\[u(x,y,0)=u_{0}(x,y),\quad u_{t}(x,y,0)=u_{1}(x,y),\;(x,y)\in\Omega, \tag{4.2}\]
and the boundary condition
\[u=0,\quad(x,y,t)\in\partial\Omega\times(0,\infty). \tag{4.3}\]
Here, \(f=f(x,y)\) is a given function of \((x,y)\) only.
**Theorem 4.1**.: _If \(u_{\infty}\) is the solution of_
\[Au_{\infty}=f,\quad\text{with}\quad u_{\infty}=0\quad\text{on}\quad\partial\Omega, \tag{4.4}\]
_then with \(w(t)=u(t)-u_{\infty},\) there holds_
\[\mathcal{E}^{(j)}(w)(t)\leq 3e^{-2\delta t}\mathcal{E}^{(j)}(w)(0)=\frac{3}{2}e^{-2\delta t}\left(\|w^{(j)}(0)\|^{2}+\|A^{1/2}w^{(j-1)}(0)\|^{2}\right). \tag{4.5}\]
_Here, for \(j=1,\) there holds \(w^{(0)}(0)=w(0)=u_{0}-u_{\infty}\) and \(w^{(1)}(0)=u_{1}\), while for \(j>1\), it follows that \(w^{(j)}(0)=u^{(j)}(0).\)_
**Proof.** Now \(w(t)\) in its abstract form satisfies
\[w^{\prime\prime}+\alpha\,w^{\prime}+Aw = 0,\;t\in(0,\infty), \tag{4.6}\] \[w(0) = u_{0}-u_{\infty},\quad w^{\prime}(0)=u_{1}.\]
On following the decay properties in Theorem 2.2 and Theorem 2.3, we complete the rest of the proof. \(\blacksquare\)
**Remark 4.1**.: _Following Theorem 2.4 and Theorem 2.3, we again arrive at_
\[\mathcal{E}_{A}^{(j)}(w)(t)\leq 3e^{-2\delta t}\,\mathcal{E}_{A}^{(j)}(w)(0). \tag{4.7}\]
Thus, as in Remark 2.1, we find for \(d=2\)
\[\|w(t)\|_{L^{\infty}}\leq Ce^{-\delta t}\left(\|u_{1}\|_{1}+\|u_{0}-u_{\infty} \|_{2}\right). \tag{4.8}\]
This implies \(u(t)\to u_{\infty}\) in \(L^{\infty}(\Omega)\) as \(t\to\infty\).
For the semidiscrete scheme, similar results hold for the semidiscrete solution \(w_{h}(t)=u_{h}(t)-u_{\infty,h}\), and we derive \(\|u_{h}(t)-u_{\infty,h}\|_{\infty}=O\left(e^{-\delta t}\right)\).
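As a simple illustration (this particular choice of \(A\) and \(f\) is ours, for exposition only), take \(A=-\Delta\) on the unit square and \(f(x,y)=2\pi^{2}\sin(\pi x)\sin(\pi y)\); then
\[u_{\infty}(x,y)=\sin(\pi x)\sin(\pi y),\qquad Au_{\infty}=-\Delta u_{\infty}=2\pi^{2}\sin(\pi x)\sin(\pi y)=f,\qquad u_{\infty}|_{\partial\Omega}=0,\]
and both \(u(t)\) and \(u_{h}(t)\) converge exponentially to this steady state (respectively, its discrete counterpart \(u_{\infty,h}\)) as \(t\to\infty\).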
**Remark 4.2**.: _In case \(f(t)=O\left(e^{-\delta_{0}t}\right)\), the solution also decays exponentially, with decay rate \(\delta^{*}=\min\,(\delta_{0},\delta)\)._
### On space dependent damping term
In this subsection, we briefly discuss the weakly damped wave equation with a space-dependent damping coefficient (see, [5], [20] and [9]) of the form
\[u^{\prime\prime}+\alpha(x)\,u^{\prime}+Au=0,\;(x,y)\in\Omega,\;t>0 \tag{4.9}\]
with initial conditions
\[u(x,y,0)=u_{0}(x,y),\quad u_{t}(x,y,0)=u_{1}(x,y),\;(x,y)\in\Omega, \tag{4.10}\]
and the boundary condition
\[u=0,\quad(x,y,t)\in\partial\Omega\times(0,\infty). \tag{4.11}\]
Here, the space dependent damping coefficient \(\alpha(x)\), \(x\in\bar{\Omega}\) satisfies
\[0<\min_{x\in\bar{\Omega}}\alpha(x)=\alpha_{1}\leq\alpha(x)\leq\alpha_{2}=\max _{x\in\bar{\Omega}}\,\alpha(x).\]
To indicate the decay property, for simplicity, assume that \(\alpha_{2}\alpha_{1}\leq\lambda_{1}\), where \(\lambda_{1}\) is first positive eigenvalue of the operator \(A\). Now, an appropriate modification of the analysis of Rauch [20] shows that the continuous energy
\[\mathcal{E}^{(1)}(t)=\frac{1}{2}\left(\|u^{\prime}(t)\|^{2}+\|A^{1/2}u(t)\|^ {2}\right),\]
decays like
\[\mathcal{E}^{(1)}(t)\leq\max\left(4,\frac{\alpha_{1}^{2}}{2\lambda_{1}}\right) e^{-\alpha_{1}t}\mathcal{E}^{(1)}(0). \tag{4.12}\]
Similarly, by differentiating \(j\) times in the temporal variable, it easily follows that
\[\mathcal{E}^{(j)}(t)\leq\max\left(4,\frac{\alpha_{1}^{2}}{2\lambda_{1}}\right) e^{-\alpha_{1}t}\mathcal{E}^{(j)}(0). \tag{4.13}\]
For the corresponding semidiscrete system: Find \(u_{h}(t)\in S^{0}_{h}\) such that
\[(u_{h}^{\prime\prime},\chi_{h})+(\alpha u_{h}^{\prime},\chi_{h})+a(u_{h},\chi _{h})=0\;\;\;\forall\chi_{h}\in S^{0}_{h}. \tag{4.14}\]
Setting \(w_{h}=e^{\frac{\alpha_{1}}{2}t}u_{h}(t)\), we now rewrite (4.14) in terms of \(w_{h}\) as
\[(w_{h}^{\prime\prime},\chi_{h})+a(w_{h},\chi_{h})+((\frac{\alpha_{1}^{2}}{4}- \frac{\alpha\alpha_{1}}{2})w_{h},\chi_{h})+((\alpha-\alpha_{1})w_{h}^{\prime}, \chi_{h})=0\ \ \ \forall\chi_{h}\in S_{h}^{0}. \tag{4.15}\]
Now choose \(\chi_{h}=w_{h}^{\prime}\) in (4.15) and define
\[\mathcal{I}_{h}(w_{h})(t)=\mathcal{E}_{h}^{(1)}(w_{h})(t)+\frac{1}{2}\int_{ \Omega}\left(\frac{\alpha_{1}^{2}}{4}-\frac{\alpha\alpha_{1}}{2}\right)\,|w_{ h}|^{2}\,dx.\]
Then, as \((\alpha-\alpha_{1})\geq 0\), there holds
\[\frac{d}{dt}\mathcal{I}_{h}(w_{h})(t)=-((\alpha-\alpha_{1})w_{h}^{\prime},w_{ h}^{\prime})\leq 0, \tag{4.16}\]
and an integration with respect to time shows
\[\mathcal{I}_{h}(w_{h})(t)\leq\mathcal{I}_{h}(w_{h})(0). \tag{4.17}\]
Note that \(\frac{\alpha_{1}^{2}}{4}-\frac{\alpha\alpha_{1}}{2}\leq-\frac{\alpha_{1}^{2}} {4}<0.\) Since \(e^{\frac{\alpha_{1}}{2}t}u_{h}^{\prime}=w_{h}^{\prime}-\frac{\alpha_{1}}{2}w_ {h}(t)\), it follows using \((a-b)^{2}\leq 2(a^{2}+b^{2})\) that
\[e^{\alpha_{1}t}\mathcal{E}_{h}^{(1)}(u_{h}(t)) = \frac{1}{2}\left(\|\left(w_{h}^{\prime}-\frac{\alpha_{1}}{2}w_{h} \right)(t)\|^{2}+\|A_{h}^{1/2}w_{h}(t)\|^{2}\right)\] \[\leq \left(\|w_{h}^{\prime}(t)\|^{2}+\frac{\alpha_{1}^{2}}{4}\|w_{h}(t )\|^{2}+\frac{1}{2}\|A_{h}^{1/2}w_{h}(t)\|^{2}\right)\] \[\leq 2\mathcal{E}_{h}^{(1)}(w_{h})(t)+\frac{\alpha_{1}^{2}}{4}\|w_{h} (t)\|^{2}-\frac{1}{2}\|A_{h}^{1/2}w_{h}(t)\|^{2}.\]
Since \(2\mathcal{E}_{h}^{(1)}(w_{h})(t)=2\,\mathcal{I}_{h}(w_{h})(t)-\int_{\Omega}\left(\frac{\alpha_{1}^{2}}{4}-\frac{\alpha\alpha_{1}}{2}\right)|w_{h}|^{2}\,dx\) and \(\frac{\alpha\alpha_{1}}{2}\leq\frac{\alpha_{1}\alpha_{2}}{2}\), we obtain
\[e^{\alpha_{1}t}\mathcal{E}_{h}^{(1)}(u_{h}(t))\leq 2\,\mathcal{I}_{h}(w_{h})(t)+ \frac{1}{2}\alpha_{2}\alpha_{1}\|w_{h}(t)\|^{2}-\frac{1}{2}\|A_{h}^{1/2}w_{h}( t)\|^{2}.\]
A use of the Poincare inequality \(\|w_{h}(t)\|^{2}\leq\frac{1}{\lambda_{1}}\|A_{h}^{1/2}w_{h}(t)\|^{2}\) shows
\[e^{\alpha_{1}t}\mathcal{E}_{h}^{(1)}(u_{h}(t))\leq 2\,\mathcal{I}_{h}(w_{h}(t))+ \frac{1}{2}\left(\alpha_{1}\alpha_{2}-\lambda_{1}\right)\|w_{h}(t)\|^{2}.\]
If \(\alpha_{1}^{2}<\alpha_{1}\alpha_{2}\leq\frac{\lambda_{1}}{2}\), then \(\alpha_{2}\alpha_{1}-\lambda_{1}\leq 0\). Thus, a use of (4.17) shows
\[\mathcal{E}_{h}^{(1)}(u_{h}(t))\leq 2\,e^{-\alpha_{1}t}\,\mathcal{I}_{h}(w_{h}(t)) \leq 2\,e^{-\alpha_{1}t}\,\mathcal{I}_{h}(w_{h}(0)). \tag{4.18}\]
Since \(\left(\frac{\alpha_{1}^{2}}{4}-\frac{\alpha\alpha_{1}}{2}\right)\leq\frac{ \alpha_{2}^{2}}{4}\), we note with \(e^{-\frac{\alpha_{1}}{2}t}w_{h}^{\prime}(t)=u_{h}^{\prime}(t)+\frac{\alpha_{1}} {2}u_{h}(t)\) and Poincare inequality that
\[2\mathcal{I}_{h}(w_{h}(0)) = \|u_{h}^{\prime}(0)+\alpha u_{h}(0)\|^{2}+\frac{\alpha_{1}}{2}\|u _{h}(0)\|^{2}+\|A_{h}^{1/2}u_{h}(0)\|^{2}+\int_{\Omega}\left(\frac{\alpha_{1}^ {2}}{4}-\frac{\alpha\alpha_{1}}{2}\right)|u_{h}(0)|^{2}\ dx \tag{4.19}\] \[\leq 2\,\mathcal{E}_{h}^{(1)}(u_{h}(0))+\frac{1}{4}\alpha_{1}^{2}\|u _{h}(0)\|^{2}\ dx\] \[\leq \max\left(2,\frac{\alpha_{1}^{2}}{4\lambda_{1}}\right)\mathcal{E}_ {h}^{(1)}(u_{h})(0).\]
On substitution of (4.19) in (4.18), we arrive at
\[\mathcal{E}_{h}^{(1)}(u_{h}(t))\leq\max\left(2,\frac{\alpha_{1}^{2}}{4\lambda_ {1}}\right)e^{-\alpha_{1}t}\mathcal{E}_{h}^{(1)}(u_{h}(0)).\]
Similarly,
\[\mathcal{E}_{h}^{(j)}(u_{h}(t))\leq\max\left(2,\frac{\alpha_{1}^{2}}{4\lambda_ {1}}\right)e^{-\alpha_{1}t}\mathcal{E}_{h}^{(j)}(u_{h}(0)).\]
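For example (an illustrative choice of domain and damping, not one tied to the experiments below), on the unit square with \(A=-\Delta\) one has \(\lambda_{1}=2\pi^{2}\approx 19.74\), so a variable damping with \(1\leq\alpha(x)\leq 2\) satisfies \(\alpha_{1}\alpha_{2}=2\leq\lambda_{1}/2\), and the semidiscrete energy then decays at least like \(e^{-\alpha_{1}t}=e^{-t}\).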
Moreover, we derive all the error estimates as in Section 3. In particular, when \(d=2\)
\[\|(u-u_{h})(t)\|_{\infty}\leq C\left(\log\frac{1}{h}\right)h^{2}\sqrt{t}\,e^{-\frac{\alpha_{1}}{2}t}.\]
**Remark 4.3**.: _When \(\alpha\) is a constant, as in Section 3, the present analysis yields the decay rate \(O\left(e^{-\frac{\alpha}{2}t}\right)\), provided \(\alpha_{1}\alpha_{2}=\alpha^{2}\leq\lambda_{1}\). In this sense, the analysis of this subsection improves the decay rate compared to the one obtained in Section 3._
### On viscous damping and compensation
This subsection focuses on improved decay rates due to both viscous damping and compensation, which is influenced by the paper of Chen [8].
Now, consider the wave equation with viscous damping and compensation: Given positive constants \(\alpha\) and \(\beta\), find \(u\) in \(\Omega\times[0,\infty)\) such that
\[u^{\prime\prime}+\alpha\,u^{\prime}+\beta\ u+Au=0,\;x\in\Omega,\;t>0, \tag{4.20}\]
with initial conditions
\[u(x,0)=u_{0}(x),\quad u_{t}(x,0)=u_{1}(x),\;x\in\Omega, \tag{4.21}\]
and the boundary condition
\[u=0,\quad(x,t)\in\partial\Omega\times(0,\infty). \tag{4.22}\]
Here, \(\alpha\) and \(\beta\) are called the viscous damping and compensation coefficients, respectively. When \(A=-\Delta\), this problem is discussed in [5], and improved exponential decay rates are proved. For a general second order linear self-adjoint positive definite elliptic operator, appropriate modifications provide the following improved decay estimates for the energy, stated in the next theorem.
**Theorem 4.2**.: _For any \(\delta>0\) with_
\[\alpha=\delta(3+\delta),\;\;\text{and}\;\;\beta=\delta(2+3\delta+2\delta^{2}), \tag{4.23}\]
_the energy_
\[\mathcal{E}^{(1)}(t)=\frac{1}{2}\left(\|u^{\prime}(t)\|^{2}+\|A^{1/2}u(t)\|^{ 2}\right),\]
_decays exponentially, that is,_
\[\mathcal{E}^{(1)}(t)\leq C(\lambda_{1},\delta)\;e^{-\delta t}\mathcal{E}^{(1) }(0), \tag{4.24}\]
_where the positive constant \(C=O(\delta^{3})\)._
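As a quick illustration of (4.23) (a worked-out instance, not an additional result): the target rate \(\delta=1\) corresponds to \(\alpha=1\cdot(3+1)=4\) and \(\beta=1\cdot(2+3+2)=7\), while \(\delta=2\) corresponds to \(\alpha=2\cdot 5=10\) and \(\beta=2\,(2+6+8)=32\).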
Note that for large \(\alpha\) and \(\beta\), it is possible to achieve a decay rate \(\delta>0\) which remains large. Moreover, for a given \(\delta>0\) with (4.23) and \(u_{0}\in D(A^{(k)})\) and \(u_{1}\in D(A^{(k-1/2)})\) for \(k>1\), there holds, using the arguments leading to (4.24) and induction,
\[\mathcal{E}^{(j)}_{A^{(k)}}\leq C(\lambda_{1};\delta)\,e^{-\delta t}\,\mathcal{E}^{(j)}_{A^{(k)}}(0),\]
where \(\mathcal{E}^{(j)}_{A^{(k)}}=\frac{1}{2}\left(\|A^{(k-1/2)}u^{(j)}(t)\|^{2}+\|A^{(k)}u^{(j-1)}(t)\|^{2}\right).\)
Now, the corresponding semidiscrete system is to seek \(u_{h}(t)\in S^{0}_{h}\) such that
\[(u^{\prime\prime}_{h},\chi_{h})+\alpha(u^{\prime}_{h},\chi_{h})+a(u_{h},\chi_ {h})+\beta(u_{h},\chi_{h})=0\;\;\;\forall\chi_{h}\in S^{0}_{h}. \tag{4.25}\]
With a choice of \(\chi_{h}=u^{\prime}_{h}+\delta u_{h}\) in (4.25), it follows using definition \(A_{h}\) as in (3.4) with the energy
\[\mathcal{E}^{(1)}_{h}(t)=\frac{1}{2}\left(\|u^{\prime}_{h}(t)\|^{2}+\|A^{1/2} _{h}u_{h}(t)\|^{2}\right),\]
and extended energy
\[\mathcal{E}^{(1)}_{\delta,h}(t)=\mathcal{E}^{(1)}_{h}(t)+\frac{1}{2}(\beta+ \delta\alpha)\|u_{h}(t)\|^{2}+\delta(u^{\prime}_{h},u_{h}),\]
that
\[\frac{d}{dt}\mathcal{E}^{(1)}_{\delta,h}(t)+F_{h}(t)=0, \tag{4.26}\]
where
\[F_{h}(t):=(\alpha-\delta)\|u^{\prime}_{h}(t)\|^{2}+\delta\|A^{1/2}_{h}u_{h}(t)\|^{ 2}+\beta\delta\|u_{h}(t)\|^{2}.\]
Since from (4.23), the condition
\[\beta\delta\geq\frac{\delta}{2}(\delta+\beta+\alpha\delta), \tag{4.27}\]
shows using \(-\delta^{2}(u^{\prime}_{h},u_{h})\geq-\big{(}(\delta^{2}/2)\|u^{\prime}_{h}\|^ {2}+(\delta^{2}/2)\|u_{h}\|^{2}\big{)}\) that
\[F_{h}(t)\geq\delta{\cal E}^{(1)}_{\delta,h}(t)+\frac{\delta}{2}(3+\delta)\|u_ {h}\|^{2}\geq\delta{\cal E}^{(1)}_{\delta,h}(t). \tag{4.28}\]
On substitution of (4.28) in (4.26), we arrive at
\[\frac{d}{dt}{\cal E}^{(1)}_{\delta,h}(t)+\delta{\cal E}^{(1)}_{\delta,h}(t) \leq\frac{d}{dt}{\cal E}^{(1)}_{\delta,h}(t)+F_{h}(t)=0,\]
and hence, an integration with respect to time yields
\[{\cal E}^{(1)}_{\delta,h}(t)\leq e^{-\delta t}\;{\cal E}^{(1)}_{\delta,h}(0). \tag{4.29}\]
Again a use of (4.23) shows
\[{\cal E}^{(1)}_{\delta,h}(t) \geq \frac{1}{2}\Big{(}\|u^{\prime}_{h}(t)\|^{2}+\|A^{1/2}_{h}u_{h}(t) \|^{2}+(\beta+\delta\alpha)\|u_{h}(t)\|^{2}\Big{)}-\frac{1}{4}\|u^{\prime}_{h }(t)\|^{2}-\delta^{2}\|u_{h}(t)\|^{2} \tag{4.30}\] \[= \frac{1}{4}\Big{(}\|u^{\prime}_{h}(t)\|^{2}+2\|A^{1/2}_{h}u_{h}(t )\|^{2}\Big{)}+\big{(}\frac{1}{2}(\beta+\delta\alpha)-\delta^{2}\big{)}\|u_{h} (t)\|^{2}\] \[\geq \frac{1}{4}\Big{(}\|u^{\prime}_{h}(t)\|^{2}+\|A^{1/2}_{h}u_{h}(t) \|^{2}\Big{)}+\frac{1}{2}\big{(}2\delta+4\delta^{2}+3\delta^{3}\big{)}\|u_{h} (t)\|^{2}\] \[\geq \frac{1}{2}{\cal E}^{(1)}_{h}(t).\]
For obtaining an upper bound, we note using (4.23), \(\delta(u^{\prime}_{h},u_{h})\leq(1/2)(\|u^{\prime}_{h}\|^{2}+\delta^{2}\|u_{h} \|^{2})\) and Poincare inequality (3.7)
\[{\cal E}^{(1)}_{\delta,h}(t) \leq \frac{1}{2}\Big{(}2\|u^{\prime}_{h}(t)\|^{2}+\|A^{1/2}_{h}u_{h}(t )\|^{2}+(\beta+\delta\alpha+\delta^{2})\|u_{h}(t)\|^{2}\Big{)} \tag{4.31}\] \[\leq \|u^{\prime}_{h}(t)\|^{2}+\frac{1}{2\lambda_{1}}(1+\beta+\delta \alpha+\delta^{2})\|A^{1/2}_{h}u_{h}(t)\|^{2}\Big{)}\] \[\leq \frac{1}{2\lambda_{1}}\big{(}2\lambda_{1}+(2\delta+7\delta^{2}+3 \delta^{3})\;{\cal E}^{(1)}_{h}(t).\]
With \(\frac{1}{2}C(\lambda_{1},\delta)=\frac{1}{2\lambda_{1}}\big{(}2\lambda_{1}+(2 \delta+7\delta^{2}+3\delta^{3})\big{)}=O(\delta^{3})\), we arrive from (4.30)-(4.31) at
\[\frac{1}{2}{\cal E}^{(1)}_{h}(t)\leq{\cal E}^{(1)}_{\delta,h}(t)\leq\frac{1}{ 2}\;C(\lambda_{1},\delta)\;{\cal E}^{(1)}_{h}(t). \tag{4.32}\]
On substitution in (4.29), we obtain
\[{\cal E}^{(1)}_{h}(t)\leq C(\lambda_{1},\delta)\;e^{-\delta t}\;{\cal E}^{(1)} _{h}(0). \tag{4.33}\]
Moreover, following the similar line of arguments, there holds for \(j\geq 1\)
\[{\cal E}^{(j)}_{h}(t)\leq C(\lambda_{1},\delta)\;e^{-\delta t}\;{\cal E}^{(j)} _{h}(0).\]
Further, a use of definition of \(A_{h}\) in (3.4) yields
\[{\cal E}^{(j)}_{A_{h}}(t)\leq C(\lambda_{1},\delta)\;e^{-\delta t}\;{\cal E}^{( j)}_{A_{h}}(0).\]
Following the argument that leads to (4.33) and also the error analysis in section 3, we easily derive optimal error estimates for \(\delta>0\) as
\[\|u(t)-u_{h}(t)\|+h\|\nabla(u(t)-u_{h}(t))\|\leq Ch^{2}\,\sqrt{t}\,e^{-\frac{\delta}{2}t},\]
and for \(d=2\)
\[\|u(t)-u_{h}(t)\|_{L^{\infty}}\leq Ch^{2}\left(\log\left(\frac{1}{h}\right) \right)te^{-\frac{\delta}{2}t}.\]
### On weakly damped beam equations
This subsection deals with beam equation with a weakly damping [13].
For a convex polygonal or polyhedral domain \(\Omega\) in \(\mathbb{R}^{d}\) with boundary \(\partial\Omega\) and fixed positive constant \(\alpha\), the problem is to find \(u(x,t)\) for \((x,t)\in\Omega\times(0,\infty)\) satisfying
\[u^{\prime\prime}+\alpha\,u^{\prime}+\Delta^{2}u=0,\;x\in\Omega,\;t>0, \tag{4.34}\]
with initial conditions
\[u(x,0)=u_{0}(x),\quad u_{t}(x,0)=u_{1}(x),\;x\in\Omega, \tag{4.35}\]
and either homogeneous clamped boundary conditions
\[u=\frac{\partial u}{\partial\nu}=0,\quad(x,t)\in\partial\Omega\times(0,\infty), \tag{4.36}\]
where \(\nu\) is the outward unit normal to the boundary \(\partial\Omega\), or hinged (simply supported) boundary conditions.
With \(A=\Delta^{2}\) and \(D(A)=H^{4}(\Omega)\cap H^{2}_{0}(\Omega)\), the results of the previous sections remain valid in the present case with appropriate changes. For the semidiscrete FEM, choose \(S^{0}_{h}\) to be a finite element subspace of \(H^{2}_{0}(\Omega)\) satisfying the following approximation property:
\[\inf_{\chi\in S^{0}_{h}}\sum_{j=0}^{2}h^{j}\|v-\chi\|_{H^{j}(\Omega)}\leq Ch^{ 3}|v|_{H^{3}(\Omega)}.\]
Then, the rest of the decay property holds similarly. Moreover, the following estimates are easy to prove
\[\|(u-u_{h})(t)\|_{j}=O\left(h^{3-j}e^{-\delta\,t}\right),\;j=1,2.\]
## 5 Numerical Experiments
In this section, we perform some numerical experiments and validate the theoretical results established in the previous sections. We shall carry out our numerical experiments for the completely discrete scheme.
### Completely Discrete Scheme
Let \(k>0\) be the time step and let \(t_{n}=nk,\;n\geq 0\). Set \(\varphi^{n}=\varphi(t_{n})\),
\[\bar{\partial}_{t}\varphi^{n}=\frac{\varphi^{n}-\varphi^{n-1}}{k},\quad\text{ and}\quad\partial_{t}\varphi^{n}=\frac{\varphi^{n+1}-\varphi^{n}}{k}\]
with \(\bar{\partial}^{0}_{t}\varphi^{n}=\varphi^{n}\).
We define
\[\bar{\partial}^{(j+1)}_{t}\varphi^{n}=\frac{1}{k}\left(\bar{\partial}^{j}_{t} \varphi^{n}-\bar{\partial}^{j}_{t}\varphi^{n-1}\right),\;\;j\geq 0.\]
Let \(\varphi^{n+\frac{1}{2}}=\frac{\varphi^{n+1}+\varphi^{n}}{2}\). We define
\[\delta_{t}\varphi^{n} = \frac{\varphi^{n+1}-\varphi^{n-1}}{2k}=\bar{\partial}_{t}\varphi^{n+\frac{1}{2}}=\frac{\varphi^{n+\frac{1}{2}}-\varphi^{n-\frac{1}{2}}}{k},\] \[\hat{\varphi}^{n} = \frac{1}{4}\left(\varphi^{n+1}+2\varphi^{n}+\varphi^{n-1}\right)=\frac{1}{2}\left(\varphi^{n+\frac{1}{2}}+\varphi^{n-\frac{1}{2}}\right),\] \[\partial_{t}\bar{\partial}_{t}\varphi^{n} = \frac{1}{k^{2}}\left(\varphi^{n+1}-2\varphi^{n}+\varphi^{n-1}\right)=\frac{1}{2k}\left(\varphi^{n+\frac{1}{2}}-\varphi^{n-\frac{1}{2}}\right)=\frac{1}{k}\left(\partial_{t}\varphi^{n}-\bar{\partial}_{t}\varphi^{n}\right).\]
The discrete time finite element approximations \(U^{n}\) of \(u(t_{n})\) is defined as solution of
\[(\partial_{t}\bar{\partial}_{t}U^{n},\chi)+\alpha(\delta_{t}U^{n},\chi)+(\nabla\hat{U}^{n},\nabla\chi)=0,\;\chi\in S^{0}_{h},\;n\geq 1 \tag{5.1}\]
with \(U^{0}=u_{0,h}\) and \(U^{1}=u_{1,h}\), where, \(u_{0,h},u_{1,h}\in S^{0}_{h}\) are appropriate approximations to be defined later.
We now define the discrete energy
\[\mathcal{E}^{n}=\frac{1}{2}\left(\|\partial_{t}U^{n}\|^{2}+\|\nabla U^{n+\frac{1 }{2}}\|^{2}\right),\;n\geq 0. \tag{5.2}\]
Setting \(\chi=\delta_{t}U^{n}\) in (5.1), we obtain
\[\left(\partial_{t}\bar{\partial}_{t}U^{n},\delta_{t}U^{n}\right)+\alpha\| \delta_{t}U^{n}\|^{2}+\left(\nabla\hat{U}^{n},\nabla\delta_{t}U^{n}\right)=0. \tag{5.3}\]
We note that
\[\left(\partial_{t}\bar{\partial}_{t}U^{n},\delta_{t}U^{n}\right)=\frac{1}{2k} \left(\partial_{t}U^{n}-\bar{\partial}_{t}U^{n},\partial_{t}U^{n}+\bar{ \partial}_{t}U^{n}\right)=\frac{1}{2k}\left(\|\partial_{t}U^{n}\|^{2}-\| \partial_{t}U^{n-1}\|^{2}\right), \tag{5.4}\]
and
\[\left(\nabla\hat{U}^{n},\nabla\delta_{t}U^{n}\right) = \frac{1}{2k}\left(\nabla\left(U^{n+\frac{1}{2}}+U^{n-\frac{1}{2} }\right),\nabla\left(U^{n+\frac{1}{2}}-U^{n-\frac{1}{2}}\right)\right), \tag{5.5}\] \[= \frac{1}{2k}\left(\|\nabla U^{n+\frac{1}{2}}\|^{2}-\|\nabla U^{n- \frac{1}{2}}\|^{2}\right).\]
Substituting (5.4)-(5.5) in (5.3), we obtain
\[\mathcal{E}^{n}-\mathcal{E}^{n-1}+\alpha\,k\|\delta_{t}U^{n}\|^{2}=0.\]
Taking summation for \(n=1\) to \(m\), we arrive at
\[\mathcal{E}^{m}+\alpha\,k\sum_{n=1}^{m}\|\delta_{t}U^{n}\|^{2}=\mathcal{E}^{0}. \tag{5.6}\]
Therefore, the discrete energy satisfies
\[\mathcal{E}^{n}\leq\mathcal{E}^{0}.\]
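To make this energy decay concrete, the following minimal sketch advances scheme (5.1) in matrix form, \(M\partial_{t}\bar{\partial}_{t}U^{n}+\alpha M\delta_{t}U^{n}+K\hat{U}^{n}=0\), and monitors the discrete energy (5.2). The matrices here are simple 1D finite-difference stand-ins for the assembled finite element mass and stiffness matrices (the actual experiments below use FreeFem++ with piecewise linear elements), so the snippet only illustrates the structure of the scheme and the monotone decay of \(\mathcal{E}^{n}\).

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

# Stand-ins for assembled FE matrices on (0, 1) with homogeneous Dirichlet data:
# K is the 1D finite-difference Laplacian, M a lumped mass matrix (approx. h*I).
N, alpha, k = 99, 1.0, 1.0e-3          # interior nodes, damping alpha, time step k
h = 1.0 / (N + 1)
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N)) / h**2
M = identity(N, format="csr") * h

x = np.linspace(h, 1.0 - h, N)
U_prev = np.sin(np.pi * x)                         # U^0 ~ u_0
U_curr = U_prev - k * np.pi * np.sin(np.pi * x)    # U^1 ~ u_0 + k*u_1

# Scheme (5.1) in matrix form:
# (M/k^2 + alpha*M/(2k) + K/4) U^{n+1}
#   = (2M/k^2 - K/2) U^n + (alpha*M/(2k) - M/k^2 - K/4) U^{n-1}
A = (M / k**2 + alpha * M / (2 * k) + K / 4).tocsc()
solve = splu(A).solve

def energy(U_new, U_old):
    """Discrete energy (5.2): E^n = 1/2 (||d_t U^n||^2 + ||grad U^{n+1/2}||^2)."""
    dU = (U_new - U_old) / k
    Um = 0.5 * (U_new + U_old)
    return 0.5 * (dU @ (M @ dU) + Um @ (K @ Um))

E = [energy(U_curr, U_prev)]
for n in range(1000):
    rhs = (2.0 / k**2) * (M @ U_curr) - 0.5 * (K @ U_curr) \
        + (alpha / (2.0 * k) - 1.0 / k**2) * (M @ U_prev) - 0.25 * (K @ U_prev)
    U_next = solve(rhs)
    E.append(energy(U_next, U_curr))
    U_prev, U_curr = U_curr, U_next

print("discrete energy nonincreasing:", all(b <= a + 1e-12 for a, b in zip(E, E[1:])))
```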
For numerical experiments, examples 1, 2, and 6 are related to the homogeneous weakly damped wave equation with various damping parameter values. Examples 3, 4, 5, and 7 are related to the weakly damped wave equation with a nonhomogeneous forcing function. In all the examples 1-7, the equations are solved up to the final time \(T=1.0\) and the value of time step \(k=h^{2}\). The numerical experiments are performed using FreeFem++ with piecewise linear elements [14].
In each case, the experimental convergence rate of the error is computed using
\[\text{Rate}=\frac{\log(E_{h_{i}})-\log(E_{h_{i+1}})}{\log(\frac{h_{i}}{h_{i+1} })},\]
where \(E_{h_{i}}\) denotes the norm of the error using \(h_{i}\) as the spatial discretization parameter at the \(i^{\text{th}}\) stage.
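For instance, this rate can be evaluated with a few lines of Python; the error values below are placeholders for illustration only (not the values reported in Tables 1-5), chosen to mimic an \(O(h^{2})\) error.

```python
import math

def convergence_rates(hs, errors):
    """Rate between consecutive refinements:
    (log E_{h_i} - log E_{h_{i+1}}) / (log h_i - log h_{i+1})."""
    return [
        (math.log(errors[i]) - math.log(errors[i + 1]))
        / (math.log(hs[i]) - math.log(hs[i + 1]))
        for i in range(len(hs) - 1)
    ]

hs = [1 / 8, 1 / 16, 1 / 32, 1 / 64]          # placeholder mesh sizes
errors = [3.1e-2, 7.9e-3, 2.0e-3, 5.0e-4]     # placeholder errors, roughly O(h^2)
print(convergence_rates(hs, errors))          # approximately [1.97, 1.98, 2.00]
```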
**Example 1**.: [22] For weakly damped wave equation:
\[u^{\prime\prime}+\alpha\,u^{\prime}-\Delta u=0,\;(x,y)\in\Omega=(0,1)\times(0,1 ),\;t>0\]
with initial conditions
\[u(x,y,0)=\sin(\pi x)\sin(\pi y),\quad u^{\prime}(x,y,0)=-\pi\sin(\pi x)\sin( \pi y),\;(x,y)\in\Omega\]
and the boundary condition
\[u=0,\;(x,y)\in\partial\Omega,\;t>0,\]
the exact solution is given by
\[u(x,y,t)=e^{-\pi t}\sin(\pi x)\sin(\pi y).\]
In Table 1, the errors and rates of convergence in the \(L^{2},\;L^{\infty}\) and \(H^{1}\)-norms are presented; the convergence rates agree with the theoretical results.
In Figure 1, the decay estimates of the errors in Example 1 are shown.
From Figure 1, we can see the exponential decay phenomenon in all three norms \(L^{\infty},\,L^{2}\) and \(H^{1}\).
**Example 2**.: [12] For the weakly damped wave equation:
\[u^{\prime\prime}+\alpha\,u^{\prime}-\Delta u=0,\;(x,y)\in\Omega=(0,1)\times(0,1 ),\;t>0\]
with initial conditions
\[u(x,y,0)=\sin(\pi x)\sin(\pi y),\quad u^{\prime}(x,y,0)=\left(-\frac{\alpha}{2} +\sqrt{\frac{\alpha^{2}}{4}-2\,\pi^{2}}\right)\sin(\pi x)\sin(\pi y),\;(x,y)\in\Omega\]
and the boundary condition
\[u=0,\;(x,y)\in\partial\Omega,\;t>0,\]
the exact solution is given by
\[u(x,y,t)=e^{\left(-\frac{\alpha}{2}+\sqrt{\frac{\alpha^{2}}{4}-2\,\pi^{2}} \right)t}\sin(\pi x)\sin(\pi y).\]
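The stated exact solution can be verified symbolically (a sanity check added here, not part of the original computation): substituting \(u=e^{\lambda t}\sin(\pi x)\sin(\pi y)\) with \(\lambda=-\alpha/2+\sqrt{\alpha^{2}/4-2\pi^{2}}\) into \(u^{\prime\prime}+\alpha u^{\prime}-\Delta u\) gives \((\lambda^{2}+\alpha\lambda+2\pi^{2})u=0\); note that the values \(\alpha=8.9,\,9.5,\,10\) used in Figure 2 all satisfy \(\alpha^{2}\geq 8\pi^{2}\), so \(\lambda\) is real.

```python
import sympy as sp

x, y, t = sp.symbols("x y t")
alpha = sp.symbols("alpha", positive=True)

# lambda is a root of lambda^2 + alpha*lambda + 2*pi^2 = 0
lam = -alpha / 2 + sp.sqrt(alpha**2 / 4 - 2 * sp.pi**2)
u = sp.exp(lam * t) * sp.sin(sp.pi * x) * sp.sin(sp.pi * y)

residual = sp.diff(u, t, 2) + alpha * sp.diff(u, t) - sp.diff(u, x, 2) - sp.diff(u, y, 2)
print(sp.simplify(residual))   # prints 0
```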
Table 2 shows the errors and rates of convergence in the \(L^{2},\;L^{\infty}\) and \(H^{1}\)-norms, confirming our theoretical findings.
From Figure 2, we observe that the errors for \(\alpha=8.9\) decay exponentially faster than those for \(\alpha=9.5\) and \(\alpha=10\). This confirms the exponential decay phenomenon in all three norms \(L^{\infty}\), \(L^{2}\) and \(H^{1}\).
**Example 3**.: Consider the following inhomogeneous weakly damped wave equation
\[u^{\prime\prime}+\alpha\,u^{\prime}-\Delta u=f(x,y,t),\;(x,y)\in\Omega=(0,1) \times(0,1),\;t>0\]
with initial conditions
\[u(x,y,0)=u_{0}(x,y),\quad u^{\prime}(x,y,0)=u_{1}(x,y),\;(x,y)\in\Omega\]
and the boundary condition
\[u=0,\;(x,y)\in\partial\Omega,\;t>0.\]
We compute the unknowns \(f,\;u_{0}\) and \(u_{1}\) with the help of the exact solution
\[u(x,y,t)=e^{-\pi t}\sin(\pi x)\sin(\pi y).\]
In Table 3, we show the numerical results for \(\alpha=0.1\): the errors and rates of convergence in the \(L^{2},\;L^{\infty}\) and \(H^{1}\)-norms.
We observe that errors decay exponentially in Figure 3.
**Example 4.**[20] Consider the following weakly damped wave equation with space dependent damping coefficient of the form
\[u^{\prime\prime}+\alpha(x)\,u^{\prime}-\Delta u=f(x,y,t),\;(x,y)\in\Omega=(1,2 )\times(1,2),\;t>0\]
with initial conditions
\[u(x,y,0)=u_{0}(x,y),\quad u^{\prime}(x,y,0)=u_{1}(x,y),\;(x,y)\in\Omega\]
and the boundary condition
\[u=0,\;(x,y)\in\partial\Omega,\;t>0,\]
where \(\alpha(x)=\alpha_{0}|x|^{-\gamma}\) with some \(\alpha_{0}>0\) and \(\gamma\in[0,1)\). We compute the unknowns \(f,\;u_{0}\) and \(u_{1}\) with the help of the exact solution
\[u(x,y,t)=e^{-\pi t}\sin(\pi x)\sin(\pi y).\]
Below, in Table 4, the errors and rates of convergence in the \(L^{2},\;L^{\infty}\) and \(H^{1}\)-norms are shown.
In Figure 4, we observe that errors decay exponentially.
**Example 5**.: Consider the following semilinear weakly damped wave equation, see [3] and [16]
\[u^{\prime\prime}+\alpha\,u^{\prime}-\Delta u+f(u)=g(x,y,t),\;(x,y)\in\Omega=(0,1 )\times(0,1),\;t>0\]
with initial conditions
\[u(x,y,0)=u_{0}(x,y),\quad u^{\prime}(x,y,0)=u_{1}(x,y),\;(x,y)\in\Omega\]
and the boundary condition
\[u=0,\;(x,y)\in\partial\Omega,\;t>0,\]
where \(f(u)=u^{3}-u\).
We compute the unknowns \(g,\;u_{0}\) and \(u_{1}\) with the help of the exact solution
\[u(x,y,t)=e^{-\pi t}\sin(\pi x)\sin(\pi y).\]
The errors and rates of convergence in the \(L^{2},\,L^{\infty}\) and \(H^{1}\)-norms are shown in Table 5.
In Figure 5, we observe that errors decay exponentially.
**Example 6**.: Consider the following semilinear weakly damped wave equation, see [15]
\[u^{\prime\prime}+\alpha\,u^{\prime}-\Delta u+f(u)=0,\;(x,y)\in\Omega=(0,1) \times(0,1),\;t>0\]
with initial conditions
\[u(x,y,0)=\sin(\pi x)\sin(\pi y),\quad u^{\prime}(x,y,0)=-\pi\sin(\pi x)\sin( \pi y),\;(x,y)\in\Omega\]
and the boundary condition
\[u=0,\;(x,y)\in\partial\Omega,\;t>0,\]
where \(f(u)=u^{3}-u\).
We observe that errors decay exponentially in Figure 6.
In Figure 7, we show the decay plots for different values of the damping coefficient \(\alpha\). We observe that the errors for \(\alpha=7\) decay exponentially faster than those for \(\alpha=3\) and \(\alpha=5\). This confirms the exponential decay phenomenon for the norms \(L^{2}\) and \(L^{\infty}\).
**Example 7**.: Consider the following wave equation with viscous damping and compensation
\[u^{\prime\prime}+\alpha\,u^{\prime}+\beta\,u-\Delta u=0,\;(x,y)\in\Omega=(0,1) \times(0,1),\;t>0\]
with initial conditions
\[u(x,y,0)=\sin(\pi x)\sin(\pi y),\quad u^{\prime}(x,y,0)=-\pi\sin(\pi x)\sin(\pi y ),\;(x,y)\in\Omega\]
and the boundary condition
\[u=0,\;(x,y)\in\partial\Omega,\;t>0.\]
We calculate the values of the damping coefficient \(\alpha\) and the compensation coefficient \(\beta\) from (4.23). If we choose \(\delta=2\) and \(\delta=5\), that is, decay rates \(1\) and \(5/2\), respectively, then we obtain \(\alpha=10,\;\beta=32\) and \(\alpha=40,\;\beta=335\), respectively. The decay plots for different values of the damping coefficient \(\alpha\) and the compensation coefficient \(\beta\) are shown in Figure 8.
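The explicit formula (4.23) for \(\alpha\) and \(\beta\) in terms of \(\delta\) is not repeated here, but the quoted pairs can at least be checked against the relation \(\beta+\delta\alpha=2\delta+6\delta^{2}+3\delta^{3}\), which is exactly the combination used in the lower bound (4.30); the small check below verifies this for \(\delta=2\) and \(\delta=5\).

```python
def compensation_relation(delta, alpha, beta):
    """Check beta + delta*alpha == 2*delta + 6*delta**2 + 3*delta**3,
    the combination appearing in the lower bound (4.30)."""
    return beta + delta * alpha == 2 * delta + 6 * delta**2 + 3 * delta**3

print(compensation_relation(2, 10, 32))    # True: 52 == 52
print(compensation_relation(5, 40, 335))   # True: 535 == 535
```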
From Figure 8(a),(c), we observe that when \(\alpha=40\) and \(\beta=335\), the errors decay exponentially faster than when \(\alpha=10\) and \(\beta=32\). This confirms that for any arbitrary \(\delta\), one may choose the damping coefficient \(\alpha\) and the compensation parameter \(\beta\) appropriately so that the errors in the \(L^{2},H^{1}\) and \(L^{\infty}\)-norms decay exponentially with decay rate \(\delta/2\). Next, we examine the decay estimates obtained by setting the compensation coefficient \(\beta=0\); the corresponding decay estimates are shown in Figure 8(b),(d). Comparing both sets of decay estimates, we see that the errors in Figure 8(a),(c) decay exponentially faster than the errors in Figure 8(b),(d). This confirms that the compensation term \(\beta\) helps the weakly damped equation achieve exponentially faster decay of the errors.
It is further observed through numerical experiments for the wave equation with different viscous damping coefficients and compensation coefficients in Figure 9 that, for large decay rates, one may choose the compensation term and the damping coefficient large as given in subsection 4.3, which is better than the decay rate predicted in Sections 2 and 3. For example, with decay rate 1, damping coefficient \(\alpha=10\) and compensation parameter 32, the decay rate predicted as in Sections 2 and 3 for Example 7 is less than or equal to 11/20, since \(\lambda_{1}=1\) and \(\frac{1}{3}\min(10,33/20)=11/20\); this confirms our results in subsection 4.3.
Figure 6: The decay estimate in \(L^{\infty}\), \(L^{2}\) and \(H^{1}\)-norms for \(\alpha=1\)
Figure 7: The decay estimate in \(L^{2}\) and \(L^{\infty}\)-norms
We now calculate the decay rate \(\delta/2\) numerically. When \(\alpha=10,\;\beta=32\), it is noticed from Figure 9 that \(\delta\) converges close to \(2\), that is, decay rate \(1\), which confirms the theoretical result in subsection 4.3. Further, with \(\alpha=40,\;\beta=335\), \(\delta\) converges close to \(6.6\), that is, the decay rate in this case is roughly \(3.2\), which seems to be better than the decay rate \(2.5\) predicted by Theorem 4.2. This suggests that the choice of \(\alpha\) and \(\beta\) in terms of \(\delta\) may not be conservative.
Figure 8: The decay estimate in \(L^{\infty}\), \(L^{2}\) and \(H^{1}\)-norms
Figure 9: The decay estimates for different damping coefficients and compensation coefficients
### Conclusions
In this article, uniform exponential decay estimates for the linear weakly damped wave equation are developed and analyzed for the continuous and semidiscrete problems. Semidiscrete approximations are obtained by applying the FEM to discretize in the space directions, keeping the time variable continuous. Compared to the existing literature, improved decay rates, with rates lying in a range, are derived. It is further observed that optimal error estimates, which depict the decay behaviour, are proved with minimal smoothness assumptions on the initial data. The present analysis is extended to problems with an inhomogeneous forcing function, a space dependent damping coefficient, and viscous damping with compensation. As a consequence of our abstract analysis, the proof technique is also generalized to a weakly damped beam equation. Several numerical experiments are performed to validate the theoretical results established in this article. The optimal rate of convergence is achieved in Tables 1-5 and the uniform exponential decay behaviour is observed in Figures 1-8. In Examples 5-6, it is shown numerically that the semidiscrete solution of the semilinear weakly damped equation decays exponentially; in the future, we shall develop results similar to the linear case. Moreover, our future investigations will include uniform exponential decay estimates for the completely discrete schemes.
|
2305.18842 | Generate then Select: Open-ended Visual Question Answering Guided by
World Knowledge | The open-ended Visual Question Answering (VQA) task requires AI models to
jointly reason over visual and natural language inputs using world knowledge.
Recently, pre-trained Language Models (PLM) such as GPT-3 have been applied to
the task and shown to be powerful world knowledge sources. However, these
methods suffer from low knowledge coverage caused by PLM bias -- the tendency
to generate certain tokens over other tokens regardless of prompt changes, and
high dependency on the PLM quality -- only models using GPT-3 can achieve the
best result.
To address the aforementioned challenges, we propose RASO: a new VQA pipeline
that deploys a generate-then-select strategy guided by world knowledge for the
first time. Rather than following the de facto standard to train a multi-modal
model that directly generates the VQA answer, RASO first adopts PLM to generate
all the possible answers, and then trains a lightweight answer selection model
for the correct answer. As proved in our analysis, RASO expands the knowledge
coverage from in-domain training data by a large margin. We provide extensive
experimentation and show the effectiveness of our pipeline by advancing the
state-of-the-art by 4.1% on OK-VQA, without additional computation cost. Code
and models are released at http://cogcomp.org/page/publication_view/1010 | Xingyu Fu, Sheng Zhang, Gukyeong Kwon, Pramuditha Perera, Henghui Zhu, Yuhao Zhang, Alexander Hanbo Li, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Dan Roth, Bing Xiang | 2023-05-30T08:34:13Z | http://arxiv.org/abs/2305.18842v1 | # Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge
###### Abstract
The open-ended Visual Question Answering (VQA) task requires AI models to jointly reason over visual and natural language inputs using world knowledge. Recently, pre-trained Language Models (PLM) such as GPT-3 have been applied to the task and shown to be powerful world knowledge sources. However, these methods suffer from low knowledge coverage caused by PLM bias - the tendency to generate certain tokens over other tokens regardless of prompt changes, and high dependency on the PLM quality - only models using GPT-3 can achieve the best result.
To address the aforementioned challenges, we propose RASO: a new VQA pipeline that deploys a generate-then-select strategy guided by world knowledge for the first time. Rather than following the de facto standard to train a multi-modal model that directly generates the VQA answer, RASO first adopts PLM to generate all the possible answers, and then trains a lightweight answer selection model for the correct answer. As proved in our analysis, RASO expands the knowledge coverage from in-domain training data by a large margin. We provide extensive experimentation and show the effectiveness of our pipeline by advancing the state-of-the-art by +4.1% on OK-VQA, without additional computation cost. Code and models are released at [http://cogcomp.org/page/publication_view/1010](http://cogcomp.org/page/publication_view/1010)
## 1 Introduction
Open-ended Visual Question Answering (VQA), which requires answering a question based on an image, has received much attention in machine learning research in the past decade (Antol et al., 2015; Goyal et al., 2017). Knowledge-based VQA (Marino et al., 2019; Schwenk et al., 2022) is a variant of VQA, where models have to use external knowledge that is not present in the image to generate the answer. It is a more challenging problem as it requires joint reasoning over visual and natural language inputs using world knowledge. For example, in Figure 1, the VQA model needs to conduct multiple levels of inference: to detect the objects in the image (e.g., laptops, whiteboard, etc.), to retrieve external world knowledge (e.g., a university is an institution and has lecture rooms, lecture rooms have laptops, stairs, and whiteboards, etc.), and combine the important visual parts with the retrieved knowledge to induce the final answer (e.g., university).
In this paper, we focus on improving the important step of external knowledge retrieval. A common procedure of previous VQA methods (Marino et al., 2021; Wu et al., 2022) is to retrieve with knowledge graphs from diverse knowledge bases (e.g., Wikipedia (Wikipedia contributors, 2004), ConceptNet (Liu and Singh, 2004), etc.), with the results being input to an answer generation model. However, the retrieved knowledge could be noisy, irrelevant, and redundant, and therefore lead to mismatches that limit the VQA performance. Motivated by the development of large-scale PLMs such as GPT-3 (Brown et al., 2020) that obtain state-of-the-art (SOTA) performance in most NLP tasks including text generation (Chowdhery et al., 2022), more recent approaches PiCA (Yang et al., 2022) and KAT (Gui et al., 2022) propose to re-
Figure 1: An example data from the OK-VQA dataset, which requires external knowledge not present in the image to answer the question.
trieve from GPT-3 and achieve better performance for their neat and high-quality knowledge. Specifically, PiCA directly treats GPT-3 output as the VQA answer, while KAT further uses GPT-3 outputs to train an answer generation model.
While achieving SOTA at the time, the two models suffer from the low knowledge coverage caused by PLM bias - the tendency to generate certain tokens over other tokens despite the prompt changes, and their performance are highly dependent on the PLM quality - only GPT-3 and Codex can achieve good results. As illustrated in Table 1, we report the knowledge coverage percentage of different PLMs on OK-VQA (Marino et al., 2019), a knowledge-based open VQA dataset. We use the accuracy of PiCA as a representation of knowledge coverage, and the first column indicates the PLM input prompts, where \(Prompt_{Q}\) is constructed by VQA question only, and \(Prompt_{QC}\) is constructed by image and question together. The top row lists five selected PLMs with parameter size varying from 6.7B to 175B: GPT J (Wang and Komatsuzaki, 2021), UL2 (Tay et al., 2022), OPT-175B (Zhang et al., 2022), GPT-3, and Codex (Chen et al., 2021). Table 1 proves that existing VQA approaches using PLMs can only cover less than half (37% - 53%) of the required external knowledge. Further, the
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & GPT-J & UL2 & GPT-3 & OPT & Codex \\ \hline \(Prompt_{Q}\) & 32.4 & 32.6 & - & 34.21 & 44.8 \\ \(Prompt_{QC}\) & 37.1 & 37.5 & 48.0 & 37.8 & 52.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Knowledge coverage (%) of different five PLMs, evaluated on OK-VQA. \(Prompt_{Q}\) means that the prompt to PLM is constructed by the VQA question only, and \(Prompt_{QC}\) means that the prompt is constructed by the VQA image and question together. Note that the GPT-3 score is taken from (Yang et al., 2022).
Figure 3: Our answer selection step. Before selecting the final answer, we first use the same PLM to generate a chain-of-thought rationale to guide the process. Then input being the image or its caption, the question, CoT rationale, and answer choices from Step 1, we train a model to output the correct answer. See Section 4.4 for details about the answer selection models we experiment with.
Figure 2: Our multiple choice generation step. Given an image, we use existing tools to get the caption and object tags. We then select most similar examples from the training data and construct the two prompts. We combine the PLM outputs and get the answer choice list. Note that the list is ranked by PLM probability from high to low. More details can be found in Section 3.1.
small difference (5% - 8%) between \(Prompt_{Q}\) and \(Prompt_{QC}\) coverage percentages show that PLM bias - the tendency to generate certain tokens over others given the same question - is not alleviated by prompt changes such as the inclusion of the image information or not.
To address these challenges, we propose RASO, a new VQA pipeline that expands world knowledge retrieval by requesting PLMs to generate multiple answer choices, followed by an answer selection model. As shown in Figure 2, we first propose a new prompting method to retrieve a long list of possible answers using in-context examples from in-domain training data. Note that for the example data in Figure 1, the PiCA end-task output would be "office" as in \(A_{QC}\) in Figure 2. With this prompting method, we expand the external knowledge coverage by more than +20% for each PLM, without additional training data. Then, as illustrated in Figure 3, we propose a chain-of-thought (CoT) (Wei et al., 2022) guided answer selection approach. By plugging in the previous SOTA method KAT (Gui et al., 2022) as the answer selector, we achieve the new SOTA performance 58.5% (+4.1%) on the OK-VQA dataset without additional computation effort.
Extensive experiments in Section 4 suggest that RASO provides a general way to increase the retrieved world knowledge coverage using PLMs, boosting end-task performance without additional computation cost. We believe our proposed pipeline motivates a new type of generate-then-select VQA method and facilitates future work.
Our main contributions are: (a) We provide a new prompting method using PLMs that extends the retrieved external knowledge coverage by 20% over previous approaches in VQA; (b) We are the first to propose a general generate-then-select VQA pipeline, different from the de facto tradition of direct generation approaches; (c) We achieve the new SOTA on the challenging OK-VQA benchmark.
## 2 Related Work
### VQA Methods
Visual question answering (VQA) has always been one of the most popular topics in the natural language and computer vision community over recent years. While the VQA task is free-form and open-ended as first proposed in (Antol et al., 2015), a large portion of previous methods (Shih et al., 2016; Anderson et al., 2018; Lu et al., 2019; Garderes et al., 2020) cast it as a classification problem. It's a common strategy for them to construct a target vocabulary from the dataset's training set by answer frequency, resulting in around two to four thousand candidates in the target vocabulary (Ben-Younes et al., 2017; Yu et al., 2019; Marino et al., 2021; Wu et al., 2022). These methods suffer from the limited answer vocabulary - if the gold answer is outside of the vocabulary, then there is no way for these models to have the correct answer.
Rather than closed-set classification, several recent methods focus on direct generating for the correct answer (Gui et al., 2022; Salaberria et al., 2023) using transformer-based models such as T5 (Raffel et al., 2020). Large-scale multi-modal models trained on multiple vision language tasks (Alayrac et al., 2022; Chen et al., 2022) have also become popular and achieved good performance on the OK-VQA dataset. However, these models are not publicly available and necessitate a vast quantity of data and computation resources.
Different from all the previous approaches that are either classification or direct generation, our proposed pipeline RASO is the first approach ever to follow a generate-then-select strategy, as far as this paper is written. We hope to benefit from less computation cost in the selection part compared to direct generation, while keeping the free-form open-ended answer vocabulary from the answer generation part.
### Knowledge-based VQA
While significant progress (Lu et al., 2016; Anderson et al., 2018; Lu et al., 2019; Jiang et al., 2020; Marino et al., 2021; Biten et al., 2022) has been made on the most famous VQA benchmarks (Antol et al., 2015; Goyal et al., 2017; Wang et al., 2017; Singh et al., 2019), researchers start to raise more challenging questions that require external knowledge not inside the image to answer (Marino et al., 2019; Zellers et al., 2019; Park et al., 2020; Schwenk et al., 2022; Fu et al., 2022).
Two-step approaches (Marino et al., 2021; Wu et al., 2022; Gui et al., 2022; Lin and Byrne, 2022; Gao et al., 2022; Hu et al., 2022; Lin et al., 2022) that explicitly retrieve world knowledge as input to the end-task model have received much attention. However, these methods could retrieve noisy and redundant information that limits the VQA performance, or have low knowledge coverage. In contrast, without retrieving documents, they
may suffer from PLM hallucinations. To address these problems, we treat LLM as a world knowledge source with wide coverage, and propose new prompt-engineering methods to retrieve succinct but higher-quality knowledge, represented as answer choices.
## 3 Method
Our method consists of two steps: answer choices generation and answer selection. The overview of the proposed model is shown in Figures 2 and 3.
**Problem Formulation** Given a training dataset \(D=\{(v_{i},q_{i},a_{i})\}_{i=1}^{N}\), where \(v_{i}\) denotes the i-th training image and \(N\) is the total number of the training images, \(q_{i}\) and \(a_{i}\) represent the i-th question and its corresponding answer, respectively. We deploy a generate-then-select strategy to first generate a set of answer choices using a frozen PLM \(g\), then trains a model \(p\) to select the correct answer from it. \(g\) takes \(v_{i}\) and \(q_{i}\) as inputs, and generates all the possible answers \(\hat{A}_{i}=\{\hat{a_{i0}},\hat{a_{i1}},\hat{a_{i2}},...\}\). Finally, \(p\) takes \(v_{i}\), \(q_{i}\), and \(\hat{A}_{i}\) as inputs and learns a set of parameters \(\theta\) to select from \(\hat{A}_{i}\) for the final answer.
### Answer Choices Generation
We design our generation process with inspirations from the previous work Yang et al. (2022); Gui et al. (2022). As demonstrated in Figures 2 and 4, we follow a similar strategy to use few-shot in-context learning and leverage a frozen PLM \(g\) to generate all the possible answer choices.
For each image-question pair, we first convert the image \(v_{i}\) into a textual context \(c_{i}\) following Yang et al. (2022), where \(c_{i}\) consists of a caption generated from an image captioning model Zhang et al. (2021) and a list of tags predicted by the public Microsoft Azure tagging API31. We then construct two carefully designed text prompts \(Prompt_{Q}\) and \(Prompt_{QC}\), where \(Q\) stands for question and \(QC\) stands for question and context. \(Prompt_{QC}\) consists of a general instruction sentence: "Please list all the possible answers to the question.", the textual context, the question, and few-shot in-context examples. The examples are context-question-answers triples taken from the training set that are most similar to the current image-question pair. Since we want to generate all the possible answers, we use all the gold answers and connect them with "or" in the few-shot examples. \(Prompt_{Q}\) has similar components: a slightly different instruction sentence, the question, and few-shot examples of question-answers pairs.
Footnote 1: Azure Tagging API:[https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91fe7278daf14a499f21b](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91fe7278daf14a499f21b)
Following Yang et al. (2022); Gui et al. (2022), we use 16-shot in-context examples and calculate the similarity scores using CLIP Radford et al. (2021) embedding of the images and the questions. We utilize the frozen PLM \(g\) to generate outputs for both \(Prompt_{Q}\) and \(Prompt_{QC}\) as demonstrated in Figure 4. The outputs are combined together to form the final answer choices \(\hat{A}_{i}=\{\hat{a_{i0}},\hat{a_{i1}},\hat{a_{i2}},...\}\) for the current image-question pair. Our goal is to have \(a_{i}\in\hat{A}_{i}\).
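A rough sketch of this choice-generation step is given below. The helper functions, the toy data, and the exact prompt wording are illustrative assumptions rather than the released implementation (linked in the abstract); the similarity scores are assumed to come from precomputed CLIP embeddings of the image and question.

```python
import numpy as np

def select_in_context_examples(query_emb, train_embs, k=16):
    """Pick the k training examples most similar to the test pair, using
    cosine similarity of (precomputed) CLIP embeddings of image + question."""
    sims = train_embs @ query_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    return np.argsort(-sims)[:k]

def build_prompt_qc(context, question, examples):
    """Prompt_QC: instruction, then in-context (context, question, answers)
    triples with the gold answers joined by 'or', then the test pair."""
    lines = ["Please list all the possible answers to the question.", ""]
    for ex in examples:
        lines += [f"Context: {ex['context']}",
                  f"Question: {ex['question']}",
                  f"Answers: {' or '.join(ex['answers'])}", ""]
    lines += [f"Context: {context}", f"Question: {question}", "Answers:"]
    return "\n".join(lines)

def merge_choices(output_qc, output_q):
    """Combine the PLM outputs of both prompts into one deduplicated,
    probability-ordered answer-choice list."""
    choices = []
    for output in (output_qc, output_q):
        for ans in output.split(" or "):
            ans = ans.strip().lower()
            if ans and ans not in choices:
                choices.append(ans)
    return choices

# Toy usage with made-up data:
example = {"context": "a classroom with desks. tags: desk, chair, blackboard",
           "question": "What is this room used for?",
           "answers": ["teaching", "lessons"]}
prompt = build_prompt_qc("a lecture room with laptops and a whiteboard. tags: laptop, whiteboard",
                         "What kind of institution is this?", [example])
print(merge_choices("office or university or school", "university or college"))
# ['office', 'university', 'school', 'college']
```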
### Answer Selection
Given \(v_{i}\), \(c_{i}\), \(q_{i}\), \(\hat{A}_{i}\), this step trains a model \(p\) that selects \(\hat{a}_{i}\) from \(\hat{A}_{i}\). Our goal is for \(p\) to output \(a_{i}\) when \(a_{i}\in\hat{A}_{i}\).
Before training \(p\), we first generate chain-of-thought (CoT) Wei et al. (2022) style rationales to help guide the selection process, with inspirations from Schwenk et al. (2022). Specifically, a fixed prompt is pre-designed to generate CoT rationales, with details in Figure 6 in Appendix A.
We then construct the input for the answer selection model. In this paper, we plug in existing text generation models as \(p\), and require them to output one choice with further fine-tuning on OK-VQA. For each image-question pair, we concatenate the question \(q_{i}\), the image - represented by either \(c_{i}\) or the image embedding using CLIP model Radford et al. (2021), the CoT rationale \(cot_{i}\), and the generated answers choices \(\hat{A}_{i}\). We also add sentinel tokens such that the input turns out to be in the following format: \(Context:c_{i}\), \(question:q_{i}\), \(rationale:cot_{i}\), \(choices:\hat{A}_{i}\), \(answers:\) with minor adaptions for each specific \(p\). Check Figure 5 for inference.
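Concretely, the selector input can be assembled as in the following sketch; the field wording mirrors the format just described, while the exact sentinel tokens per model \(p\) are an assumption of this illustration.

```python
def build_selector_input(context, question, rationale, choices):
    """Flatten image context, question, CoT rationale and generated choices
    into the single text sequence fed to the answer selection model p."""
    return (f"Context: {context}, question: {question}, "
            f"rationale: {rationale}, choices: {', '.join(choices)}, answer:")

print(build_selector_input(
    "a lecture room with laptops and a whiteboard",
    "What kind of institution is this?",
    "Rooms with whiteboards and many laptops are typically lecture rooms at universities.",
    ["office", "university", "school"]))
```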
## 4 Experiment
### Dataset
**OK-VQA**Marino et al. (2021) is a widely used VQA dataset that requires external world knowledge outside of the image to answer the question. The dataset contains 14,031 images from the COCO dataset Lin et al. (2014) and 14,055 crowdsourced questions covering a variety of knowledge categories, with 9,009 training data and 5,046 testing data. Each question has ten annotated answers
(possibly repeated), and we follow the standard evaluation metric recommended by the VQA challenge (Antol et al., 2015). The external knowledge required in OK-VQA is not provided and there is no designated external knowledge source (such as a knowledge base), leaving the benchmark more challenging.
### Publicly Available PLMs
We experiment with four different-sized PLMs that are publicly available as follows:
**Codex**(Chen et al., 2021) The Codex models are descendants of GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. We use the version \(code-davinci-002\) of Codex.
**OPT-175b**(Zhang et al., 2022) Open Pre-trained Transformers (OPT) is a suite of decoder-only pretrained transformers ranging from 125M to 175B parameters trained on publicly available datasets. We use the version 175 billion parameters of OPT.
**UL2**(Tay et al., 2022) Unified Language Learner (UL2) is a 20 billion parameter novel language pre-training paradigm that improves the performance
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline \hline PLM & Prompt Type & Top1 (\%) & Top3 (\%) & Top5 (\%) & All (\%) & Avg \(\#\) \\ \hline GPT J & \(Prompt_{Q}\) & 32.4 & 46.1 & 46.7 & 46.7 & 2.6 \\ & \(Prompt_{QC}\) & 37.1 & 49.5 & 50.7 & 50.7 & 3.0 \\ & both & 37.1 & 52.0 & 55.9 & 57.1 & 4.1 \\ \hline UL2 & \(Prompt_{Q}\) & 32.6 & 45.4 & 46.4 & 46.5 & 2.7 \\ & \(Prompt_{QC}\) & 37.5 & 51.3 & 52.8 & 52.9 & 3.0 \\ & both & 37.5 & 53.1 & 57.0 & 58.0 & 4.1 \\ \hline GPT-3 & \(Prompt_{QC}\) & 48.0 & - & - & - & - \\ \hline OPT & \(Prompt_{Q}\) & 34.21 & 48.45 & 49.7 & 49.8 & 3.0 \\ & \(Prompt_{QC}\) & 37.8 & 52.9 & 55.0 & 55.4 & 3.7 \\ & both & 37.8 & 55.6 & 61.0 & 63.4 & 5.2 \\ \hline
**Codex** & \(Prompt_{Q}\) & 44.8 & 58.8 & 59.8 & 59.8 & 3.1 \\ & \(Prompt_{QC}\) & 52.9 & 67.8 & 68.9 & 68.9 & 3.2 \\ & both & 52.9 & 68.6 & 72.6 & **73.5** & 4.5 \\ \hline
**ensembled** & both & 52.9 & 68.6 & 74.6 & **81.9** & 11.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Answer choices generation result on OK-VQA, representing the external knowledge coverage. Top 1, Top 3, Top 5, and All represent the highest accuracy that can be achieved using top 1, top 3, top 5, and all answer choices. All results are in accuracy scores evaluated following (Antol et al., 2015). “both” means that we combine the answer choices generated using both prompts. “ensembled” means that we combine the answer choices of all four PLMs. Note that the GPT-3 result is taken from (Yang et al., 2022).
Figure 4: An illustration of our proposed prompting method for choice generation enabling larger knowledge retrieval coverage, compared with standard prompting as in PiCA (Yang et al., 2022). Note that Model Input I and II corresponds to \(Prompt_{Q}\), \(Prompt_{QC}\) respectively, and correct answers are highlighted.
Figure 5: Example input for the answer selection model for the image in Figures 1 and 2.
of language models universally across datasets and setups released recently. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input.
**GPT-J**Wang and Komatsuzaki (2021) GPT-J is a 6 billion parameter, autoregressive text generation model trained following Wang (2021). The model consists of 28 layers with a model dimension of 4096, and a feed-forward dimension of 16384.
During prompting, we always set the temperature to 0.001 and max token to 15.
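For the open checkpoints, the generation call could look roughly like the following HuggingFace transformers sketch; the checkpoint name, the placeholder prompt, and the treatment of the near-zero temperature are assumptions of this sketch (UL2 would instead use a sequence-to-sequence model class).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"   # assumed hub name for the GPT-J checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Please list all the possible answers to the question.\n..."  # placeholder
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=15, do_sample=True, temperature=0.001)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```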
### Answer Choices Generation Results
The answer choice generation result is shown in Table 2. Top 1, Top 3,..., All represent the highest accuracy that can be achieved using top 1, top 3,..., and all answer choices, calculated following the standard VQA evaluation metric in Antol et al. (2015). Note that the GPT-3 score is taken from Yang et al. (2022). We do not experiment with GPT-3 in this paper due to the required cost. Avg # stands for the average number of answer choices.
While previous VQA methods also retrieve from PLMs, they have a similar result as if using \(Prompt_{QC}\) and Top1 choice. As discussed before, these generation results can represent the external knowledge coverage ratio. From the table, Codex covers the majority of the knowledge needed and has the highest score of 73.5%. Using our prompt-engineering method, the knowledge coverages of all PLMs increase by a large margin of at least 20% (which are the accuracy differences between Top1 choice by \(Prompt_{QC}\) and All choices by both prompts).
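The coverage numbers can be reproduced from the generated choice lists with a simplified form of the soft VQA accuracy (the official metric of Antol et al. (2015) averages over leave-one-out subsets of the ten annotations; the sketch below uses the common min(#matches/3, 1) approximation). Top-k coverage is then the best accuracy achievable among the first k choices.

```python
from collections import Counter

def soft_vqa_acc(answer, gold_answers):
    """Simplified soft accuracy: min(#annotators giving this answer / 3, 1)."""
    counts = Counter(a.strip().lower() for a in gold_answers)
    return min(counts[answer.strip().lower()] / 3.0, 1.0)

def top_k_coverage(choices, gold_answers, k):
    """Best soft accuracy achievable using only the top-k generated choices."""
    return max((soft_vqa_acc(c, gold_answers) for c in choices[:k]), default=0.0)

gold = ["university"] * 7 + ["college", "college", "school"]   # ten annotations
choices = ["office", "university", "school", "library"]        # ranked PLM choices
print(top_k_coverage(choices, gold, 1))   # 0.0, "office" was never annotated
print(top_k_coverage(choices, gold, 3))   # 1.0, "university" appears 7 times
```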
### Answer Selection Models
We plug in existing text-generation models as answer selectors and experiment on four methods:
**KAT**Gui et al. (2022) is a VQA method that uses a sequence-to-sequence model composed of an encoder and a decoder, similar to T5 Raffel et al. (2020). As far as this paper is written, KAT is known to be the SOTA method on OK-VQA benchmark.
**ClipCap**Mokady et al. (2021) uses the CLIP Radford et al. (2021) encoding as a prefix to generate textual captions by employing a simple mapping network over the raw encoding, and then fine-tunes a language model to generate a valid caption. The language model we use here is GPT-2. In this pa
\begin{table}
\begin{tabular}{c|l|c|c} \hline \hline
**Method** & External Knowledge Source & Answer Selector & Acc(\%) \\ \hline \multicolumn{2}{c|}{MUTAN+AN Ben-Younes et al. (2017)} & Wiki & - & 27.8 \\ ConceptBERT Gardres et al. (2020) & ConceptNet & - & 33.7 \\ KRISP Marino et al. (2021) & Wiki+ConceptNet & - & 38.9 \\ MAVEA Wu et al. (2022) & Wiki+ConceptNet+Google Images & - & 39.4 \\ PiCA Yang et al. (2022) & Frozen GPT-3 Wiki & - & 48.0 \\ KAT Gui et al. (2022) (ensemble) & Wiki+Frozen GPT-3 Wiki & - & 54.4 \\ \hline \hline \multicolumn{2}{c|}{ClipCap (Mokady et al., 2021).} & - & 22.8 \\ \hline \hline \multirow{5}{*}{RAS0} & Frozen GPT-J & & 29.5 \\ & Frozen UL2 & & 33.1 \\ & Frozen OPT & ClipCap & 31.3 \\ & Frozen Codex & & 35.3 \\ & All 4 Frozen PLMs & & 38.0 \\ \hline \multirow{5}{*}{RAS0} & Frozen GPT-J & & 29.6 \\ & Frozen UL2 & & 33.8 \\ & Frozen OPT & & 58.5 \\ & Frozen Codex & & 45.7 \\ \hline \multirow{5}{*}{RAS0} & Frozen GPT-J & & 47.2 \\ & Frozen UL2 & & 45.8 \\ & Frozen OPT & UnifiedQA & 47.8 \\ & Frozen Codex & (ensemble) & 51.2 \\ & All 4 Frozen PLMs & & 45.6 \\ \hline \multirow{5}{*}{RAS0} & Wiki+Frozen GPT-J & & 50.3 \\ & Wiki+Frozen UL2 & & 52.2 \\ \cline{1-1} & Wiki+Frozen OPT & KAT & 53.0 \\ \cline{1-1} & Wiki+Frozen Codex & (ensemble) & **58.5** \\ \cline{1-1} & Wiki+ All 4 Frozen PLMs & & 57.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: VQA results on the OK-VQA benchmark comparing to standard baselines. “Wiki” stands for “Wikipedia” and the “Wiki” resource in the last row’s block is brought by the answer selector KAT. “All 4 Frozen PLMs” means that we use all the answer choices generated by GPT-J, UL2, OPT, and Codex. When we have UnifiedQA or KAT as answer selector, we train with 3 random seeds and denote the results as \(ensemble\) following Gui et al. (2022).
per, we adapt this model by adding question tokens, CoT rationale tokens, and answer choices tokens to the prefix as input, with the target to generate answers instead of captions. We train the mapping network from scratch and also fine-tune GPT-2.
**IterPLM** Inspired by previous work Wang et al. (2022), we use iterative prompting with the same PLM in choice generation for correct answer selection. A snippet of an example prompt is shown in Figure 5. We use 8-shot in-domain examples with the temperature set to 0.001 and max token set to 5.
**UnifiedQA**Khashabi et al. (2022, 2020) is a multiple-choice question answering (QA) model that performs well across 20 QA datasets, using the T5ForConditionalGeneration model. We load UnifiedQA v2 Khashabi et al. (2022) checkpoint unifiedqa-v2-t5-large-1251000.
### End-task VQA Results
As illustrated in Table 3, we compare our proposed pipeline against several standard baseline approaches: MUTAN+AN Ben-Younes et al. (2017), ConceptBERT Garderes et al. (2020), KRISP Marino et al. (2021), MAVEx Wu et al. (2022), PiCA Yang et al. (2022), and KAT Gui et al. (2022), on the OK-VQA test set. RASO outperforms the previous SOTA by an absolute 4% margin, achieving the new SOTA.
Comparing different answer selectors, it is surprising that the two transformer-based text-only models: UnifiedQA and KAT significantly outperform the multi-modal ClipCap model by around 20% on average, even though their sizes (T5 large) are much smaller than that of GPT-2. We believe this phenomenon is because the Clip image embeddings trained using image captions do not have enough granularity to support reasoning over the image, question, and answer choices for answer selection, compared to T5 models. Besides, IterPLM has much worse scores than we imagined. While many papers Wang et al. (2022) show that iterative prompting should boost the performance, our experiments suggest that asking the PLMs to select between their own output at the highest confidence
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & Type & GPT-J & UL2 & OPT & Codex \\ \hline \multirow{3}{*}{ViT-L\_14} & DG & \multicolumn{4}{c}{23.5} & \multirow{3}{*}{} \\ & w/o cot & & 28.7 & 30.3 & 29.1 & 33.4 \\ & w/ cot & & 29.5 & 33.1 & 31.3 & **35.3** \\ \hline \multirow{3}{*}{RN50x64} & DG & \multicolumn{4}{c}{21.6} & \multirow{3}{*}{} \\ & w/o cot & & 29.3 & 30.3 & 28.6 & 34.5 \\ \cline{1-1} \cline{3-5} & w/ cot & & 29.6 & 32.6 & 31.4 & **36.4** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on how different inputs influence the answer selection result using ClipCapVQA (Mokady et al., 2021) on OK-VQA. The first column represents two CLIP checkpoints. “DG” represents direct generation without any answer choices.
\begin{table}
\begin{tabular}{l||c c} \hline \hline KAT & Top1 & All w/o cot & All w/ cot \\ \hline \hline GPT-J (single) & 45.9 & 47.8 & 49.6 \\ GPT-J (ensemble) & 46.6 & 48.4 & 50.3 \\ \hline UL2 (single) & 50.2 & 50.7 & 51.2 \\ UL2 (ensemble) & 51.1 & 51.5 & 52.2 \\ \hline OPT (single) & 51.7 & 52.3 & 52.5 \\ OPT (ensemble) & 52.1 & 52.9 & 53.0 \\ \hline Codex (single) & 56.2 & 57.1 & 57.5 \\ Codex (ensemble) & 57.1 & 58.1 & **58.5** \\ \hline All (single) & 56.4 & 56.9 & 57.0 \\ All (ensemble) & 57.0 & 57.6 & 57.9 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study investing how different inputs influence the answer selection results using KAT (top) and UnifiedQA (bottom) on OK-VQA in accuracy scores. “Top1” means using Top 1 answer choice,“All” in the first row means using all answer choices, to form the input respectively. “cot” means the CoT rationales. We train with 3 random seeds and denote the average scores as \(single\) and majority vote results as \(ensemble\).“All” in the leftmost column represent using combined answer choices from all four PLMs.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & GPT-J & UL2 & OPT & Codex \\ \hline w/o cot & 28.5 & 29.1 & 31.6 & **45.6** \\ w/ cot & 28.1 & 32.3 & 33.5 & 44.9 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study on how different inputs influence the answer selection result using IterPLM: iterative prompting using the same PLM, on OK-VQA. All results are in accuracy scores. Both setting use all the answer choices.
is indeed a very difficult problem for them.
In Table 3, we also compare single PLM answer choices with ensembled choices by all four PLMs, with the latter showing lower scores. We believe this is because the answer selectors we experiment on are not good enough, and thus increasing choice numbers turns out to hurt the performance.
### Implementation Details
In the answer choice generation step, we use 16-shot in-context examples on the test data. On the training data, because we have ten gold answers with repetitions, we use 4-shot in-context learning for faster generation. The temperature for PLM generation is set to be 0.001. The generation max token length is set to be 15. All experiments of selection models have been run in 8 NVIDIA V100 Tensor Core GPUs with 32 GiB of memory each, 96 custom Intel Xeon Scalable (Skylake) vCPUs, and 1.8 TB of local NVMe-based SSD storage. The running times for KAT, UnifiedQA and ClipCap are less than 4, 2 and 1 hours, respectively. OPT-175b model is locally set up in 32 NVIDIA V100 Tensor Core GPUs to make inferences. The learning rates for KAT, UnifiedQA and Clipcap are set as 3e-5, 5e-5 and 2e-5, respectively, for all experiments. Optimizer AdamW [11] is used for all selection models.
## 5 Ablation Studies
We perform qualitative and quantitative analysis on the answer selection results to better understand whether the expanded external knowledge coverage benefits the end-task VQA much. As illustrated in Tables 4 to 6, we investigate the impact of various inputs on the answer selection results, with answer choices representing the retrieved knowledge.
**CoT Rationale Impact** From the experimental results in Tables 4 to 6, where we compare the settings "w/ cot" and "w/o cot", input with CoT rationales consistently boosts the answer selection performance of KAT, UnifiedQA, and ClipCap. However, this conclusion fails for iterative prompting - adding CoT hurts the performance of IterPLM when we use GPT-J and Codex. We believe this can result from differences in CoT quality and in the pre-training methods and data.
**Choice Number Impact** As shown in Table 4, larger knowledge coverage, represented by using choices from all four PLMs versus a single PLM, can not consistently increase the performance of KAT or UnifiedQA. As we compare the results on Codex choices and that on all PLMs choices, more choices always lead to lower accuracy scores. This is somehow against our instinct, and we believe it is because our answer selectors are not good enough. Digging deeper into the problem, we further compare the difference between using Top1 choices and all choices in KAT as in the top table. Note that the Top1 results here are not the same as the Top1 accuracy in Table 2 because KAT uses Wikipedia knowledge by design so it further expands knowledge coverage. We can see that using all choices is consistently better than using Top 1 choice. However, the improvements are too small (0.4-1.9 %) considering that their knowledge coverages differ by at least 20% as in Table 1, suggesting that KAT, while being the best, is still not the ideal selection model, and motivating future research in this direction.
**Multi-modal Selector Impact** As demonstrated in Table 6, we experiment with two versions of the CLIP embedding, "ViT-L_14" and "RN50x64", and the difference between direct generation (DG) and answer selection is consistently large - providing answer choices definitely helps ClipCap to generate the correct answer.
**Ensemble Impact** Our answer choice generation step is indeed an ensemble over PLM results. Previous VQA methods that retrieve from PLMs also conduct ensembling but in a different way [23]. They usually request the same prompt (see example in Figure 4) multiple times and take the majority-voted answer. This process is called multi-query ensemble, and could boost the GPT-3 performance by about 5%. We argue that our proposed RASO prompting is superior to multi-query ensemble in that we allow more diversity in the output and provide VQA systems more explainability by separating the choice-generation and selection steps, without additional API request cost or longer inference time.
## 6 Conclusion
In this paper, we propose RASO: a new VQA pipeline following a generate-then-select strategy guided by world knowledge. RASO proposes a new prompting method that largely increases the external knowledge coverage by a margin of more than 20% compared to previous approaches on the OK-VQA benchmark. Our pipeline achieves the new SOTA of 58.5% on the end-task performance,
encouraging avenues for future studies.
## 7 Limitations
While the previous VQA methods that retrieve from PLMs all use GPT-3, we do not experiment with GPT-3 in this paper due to the additional cost. We only focus on applying text-generation models as answer selectors, while classification models could also potentially be good answer selectors. The multi-modal CLIP embedding has already been surpassed by several recent studies [1, 19, 20, 14] and therefore ClipCap cannot represent the performance of multi-modal answer selectors.
## 8 Ethical Considerations
The authors of this paper acknowledge the significance of responsible NLP in research and development. The objective of this research is to enhance the capabilities of visual question answering models, guided by human values-based world knowledge. We strive to ensure that the model is not only accurate and efficient, but also fair and unbiased. We recognize that the VQA technology may have a substantial impact on society and pledge to be transparent in sharing our findings and progress with relevant users and stakeholders.
## Acknowledgments
The authors would like to thank researchers at AWS AI Labs who commented on or otherwise supported throughout the course of this project, including Simeng Han, Donghan Yu, Sijia Wang, and Shuaichen Chang.
|
2306.01843 | Lifting Architectural Constraints of Injective Flows | Normalizing Flows explicitly maximize a full-dimensional likelihood on the
training data. However, real data is typically only supported on a
lower-dimensional manifold leading the model to expend significant compute on
modeling noise. Injective Flows fix this by jointly learning a manifold and the
distribution on it. So far, they have been limited by restrictive architectures
and/or high computational cost. We lift both constraints by a new efficient
estimator for the maximum likelihood loss, compatible with free-form bottleneck
architectures. We further show that naively learning both the data manifold and
the distribution on it can lead to divergent solutions, and use this insight to
motivate a stable maximum likelihood training objective. We perform extensive
experiments on toy, tabular and image data, demonstrating the competitive
performance of the resulting model. | Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Lea Zimmermann, Ullrich Köthe | 2023-06-02T18:03:03Z | http://arxiv.org/abs/2306.01843v5 | # Maximum Likelihood Training of Autoencoders
###### Abstract
Maximum likelihood training has favorable statistical properties and is popular for generative modeling, especially with normalizing flows. On the other hand, generative autoencoders promise to be more efficient than normalizing flows due to the manifold hypothesis. In this work, we introduce successful maximum likelihood training of unconstrained autoencoders for the first time, bringing the two paradigms together. To do so, we identify and overcome two challenges: Firstly, existing maximum likelihood estimators for free-form networks are unacceptably slow, relying on iteration schemes whose cost scales linearly with latent dimension. We introduce an improved estimator which eliminates iteration, resulting in constant cost (roughly double the runtime per batch of a vanilla autoencoder). Secondly, we demonstrate that naively applying maximum likelihood to autoencoders can lead to divergent solutions and use this insight to motivate a stable maximum likelihood training objective. We perform extensive experiments on toy, tabular and image data, demonstrating the competitive performance of the resulting model. We call our model the maximum likelihood autoencoder (MLAE).
## 1 Introduction
Generative modeling is one of the most important tasks in machine learning, having numerous applications across vision (Rombach et al., 2022), language modeling (Brown et al., 2020), scientific applications (Ardizzone et al., 2018; Radev et al., 2020) and beyond. One of the best-motivated approaches to generative modeling is maximum likelihood training, due to its favorable statistical properties (Hastie et al., 2009). In the continuous setting, exact maximum likelihood training is most commonly achieved by normalizing flows (Rezende and Mohamed, 2015; Dinh et al., 2014; Kobyzev et al., 2020) which parameterize an exactly invertible function with a tractable change of variables (log-determinant term). This generally introduces a trade-off between model expressivity and computational cost, where the cheapest networks to train and sample from, such as coupling block architectures, require very specifically constructed functions which may limit expressivity (Draxler et al., 2022). In addition, normalizing flows preserve the dimensionality of the inputs, requiring a latent space of the same dimension as the data space.
Due to the manifold hypothesis (Bengio et al., 2013), which suggests that realistic data lies on a low-dimensional manifold embedded into a high-dimensional data space, it is more efficient to only model distributions on a low-dimensional manifold and regard deviations from the manifold as uninformative noise. A natural tool for learning low-dimensional manifolds is the autoencoder and many attempts have been made to develop generative versions of this architecture, which transform the training data into a simple latent distribution, typically standard normal, via the encoder. Generating from the model is achieved by passing samples from the latent distribution through the decoder (see fig. 1).
Prior works such as Caterini et al. (2021); Brehmer and Cranmer (2020) have used some version of exact maximum likelihood training in combination with autoencoders, though invariably with autoencoder architectures adapted from the normalizing flow literature (known as "invertible autoencoders" (Teng and Choromanska, 2019) or "injective flows" (Kothari et al., 2021)) where encoder and decoder share parameters. The restrictive architectures used in such models were originally introduced for tractable change of variables calculation in normalizing flows, but such calculations are not possible in the presence of a bottleneck (Brehmer and Cranmer, 2020). As a result, we propose to drop the restrictive constructions (such as coupling blocks) and use an unconstrained encoder and decoder, as in a standard autoencoder. This simplifies the design of the model and makes it more expressive. To our knowledge, we are the first work to introduce exact maximum likelihood training with unconstrained autoencoders.
We build on the unbiased maximum likelihood estimator used in rectangular flows (Caterini et al., 2021). We simplify it considerably by replacing iterative conjugate gradient, which can require iteration of up to the number of latent dimensions to converge, with a single step estimator. This allows us to train our autoencoder models almost as fast as a vanilla autoencoder. Specifically, our loss function and its gradient can be computed in about twice the time (or less) as the reconstruction loss and its gradient. In addition, we make a novel observation about generative autoencoders: naively training with maximum likelihood is ill-defined due to the possibility of diverging curvature in the decoding function. To fix this problem, we propose a modification to our maximum likelihood estimator which counteracts the possibility of diverging curvature. We call our model the _maximum likelihood autoencoder_ (MLAE).
To summarize, we make the following contributions:
* We introduce an efficient exact maximum likelihood estimator for unconstrained autoencoders, making them a practical generative model (section 4.1).
* We identify pathological behavior in the naive application of maximum likelihood training in the presence of a bottleneck, and offer a solution to avoid this behavior while maintaining computational efficiency (section 4.2).
* We demonstrate competitive performance with a series of experiments on toy, tabular and image data (section 5). We provide code to implement our model and reproduce our results at [https://github.com/vislearn/MLAE](https://github.com/vislearn/MLAE).
Figure 1: **Maximum likelihood autoencoders (MLAE) training and inference.**_(Left)_ We combine a reconstruction loss \(\mathcal{L}_{\text{recon.}}\) with a novel maximum likelihood loss \(\tilde{\mathcal{L}}_{\text{NLL}}\) to obtain an autoencoder with a Gaussian latent space. _(Right)_ We generate novel samples by decoding standard normal latent samples with our best-performing models on CelebA and MNIST. The reconstructions shown are on CelebA validation data, the samples are uncurated samples from our models.
## 2 Related work
At the heart of maximum likelihood training lies the need to estimate the Jacobian determinant of the transformation to calculate the change of variables. Efficient computation of this determinant traditionally imposed two major restrictions on normalizing flow architectures: Firstly, the latent space has to match in dimension with the data space. Secondly, normalizing flows are restricted to certain functional forms, such as coupling and autoregressive blocks. Below we outline the existing approaches to overcome these problems and how our solution compares.
**Lower-dimensional latent spaces** One set of methods attempts to use full-dimensional normalizing flows, with some additional regularization or architectural constraints such that a subspace of the latent space corresponds to the manifold. One strategy adds noise to the data to make it a full-dimensional distribution then denoises to the manifold (Horvat and Pfister, 2021; Loaiza-Ganem et al., 2023). Another restricts the non-manifold latent dimensions to have small variance (Beitler et al., 2021).
Other methods sidestep the problem by making training into a two-step procedure. First, an autoencoder is trained on reconstruction loss, then a normalizing flow is trained to learn the resulting latent distribution. Brehmer and Cranmer (2020) and Kothari et al. (2021) use an injective flow, while Bohm and Seljak (2020) use unconstrained networks as autoencoder. Ghosh et al. (2019) additionally regularize the decoder.
Conformal embedding flows (Ross and Cresswell, 2021) ensure the decomposition of the determinant into the contribution from each block by exclusively using conformal transformations. However, the resulting transformations are quite restrictive.
The most similar work to ours is the rectangular flow (Caterini et al., 2021) which estimates the gradient of the log-determinant via an iterative, unbiased estimator. The resulting method is quite slow to train, and uses injective flows, which are restrictive.
Unconstrained normalizing flow architecturesSeveral works attempt to reduce the constraints imposed by typical normalizing flow architectures, allowing the use of free-form networks. FFJORD (Grathwohl et al., 2018) is a type of continuous normalizing flow (Chen et al., 2018) which estimates the change of variables stochastically. Residual flows (Behrmann et al., 2019; Chen et al., 2019) make residual networks invertible, but require expensive iterative estimators to train via maximum likelihood. Self-normalizing flows (Keller et al., 2021) and relative gradient optimization (Gresele et al., 2020) estimate maximum likelihood gradients for the matrices used in neural networks, but restrict the architecture to use exclusively square weight matrices without skip connections.
Approximating maximum likelihoodMany methods optimize some bound on the maximum likelihood, notably the variational autoencoder (Kingma and Welling, 2013) and its variants. Cunningham et al. (2020) also optimizes a variational lower bound to the likelihood. Kumar et al. (2020), Zhang et al. (2020) approximate the log-determinant of the Jacobian by its Frobenius norm. The entropic AE (Ghose et al., 2020) maximizes the entropy of the latent distribution by a nearest-neighbor estimator while constraining its variance, resulting in a Gaussian latent space. In addition, there are other ways to regularize the latent space which are not based on maximum likelihood, e.g. adversarial methods (Makhzani et al., 2015).
In contrast to the above, our approach uses exact maximum likelihood with an unconstrained architecture, which easily accommodates a lower-dimensional latent space.
## 3 Background
AutoencodersAutoencoders are defined by a pair of models, an encoder \(f:\mathbb{R}^{D}\rightarrow\mathbb{R}^{d}\) which compresses data to a lower-dimensional latent space and a decoder \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{D}\) which decompresses the latent representation. The goal is to learn models such that a reconstructed instance \(\hat{x}=g(f(x))\) is close to its original value despite being compressed through a lower-dimensional bottleneck. The closeness is most often measured by the squared Euclidean distance which gives rise to the reconstruction loss:
\[\mathcal{L}_{\text{recon.}}=E_{p_{\text{data}}(x)}\left[\|\hat{x}-x\|^{2}\right]. \tag{1}\]
The image of \(g\), which we will also call the _decoder manifold_, is defined as the set of points \(\{g(z)\mid z\in\mathbb{R}^{d}\}\), which is a \(d\)-dimensional manifold embedded in \(\mathbb{R}^{D}\). The optimal encoder will be such that \(\hat{x}\) is an orthogonal projection onto the decoder manifold and the optimal decoder \(g(z)\) will output the mean value of all \(x\) such that \(f(x)=z\). The optimal encoder and decoder form a pseudoinverse pair, defined below.
Pseudoinverse pairWe call \(f\) and \(g\) a _pseudoinverse pair_ if \(f\circ g\) is the identity. Equivalently, \(g\circ f\) is a _projection_: a reconstructed sample \(\hat{x}=g(f(x))\) does not change on further application of \(g\circ f\), i.e. \((g\circ f)\circ(g\circ f)=(g\circ f)\). We call \(f\) a pseudoinverse of \(g\) and \(g\) a pseudoinverse of \(f\) if \(f\) and \(g\) form a pseudoinverse pair. Note that if \(d<D\) a pseudoinverse pair is not unique for a given \(f\) or \(g\): the only requirement on \(f\) is that it inverts \(g\) on its image; its behavior elsewhere is unconstrained. Equally, the only requirement on \(g\) is that it maps to some point in its preimage, so \(g(z)=x\) for some \(x\) such that \(f(x)=z\), but which point is not specified. We can remove these degrees of freedom by requiring that \(f\) and \(g\) minimize the reconstruction loss, having the properties given in the previous paragraph.
Change of variables across dimensionsNormalizing flow models, trained by maximum likelihood via the change of variables theorem, are only defined when mapping between spaces of equal dimension. A result from differential geometry (Krantz and Parks, 2008) allows us to generalize the change of variables theorem to non-equal-dimension transformations through the formula:
\[p_{X}(x)=p_{Z}(f(x))\left(\det\left[g^{\prime}(f(x))^{\top}g^{\prime}(f(x)) \right]\right)^{-\frac{1}{2}}, \tag{2}\]
where \(f\) and \(g\) are a pseudoinverse pair and primes denote derivatives: \(g^{\prime}(f(x))\) is the Jacobian matrix of \(g\) evaluated at \(f(x)\). Note that, since \(p_{X}\) is derived as the pushforward of the latent distribution \(p_{Z}\) by \(g\), the formula is valid only for \(x\) which lie on the decoder manifold (see appendix A for more details).
Injective flowsInjective flows, also called invertible autoencoders (Teng and Choromanska, 2019; Brehmer and Cranmer, 2020), parameterize \(f\) and \(g\) as the composition of two invertible functions, \(w\) defined in \(\mathbb{R}^{D}\) and \(h\) defined in \(\mathbb{R}^{d}\), with a slicing/padding operation in between:
\[f=h^{-1}\circ\texttt{slice}\circ w^{-1}\quad\text{and}\quad g=w\circ\texttt{ pad}\circ h, \tag{3}\]
where \(\texttt{slice}(x)\) selects the first \(d\) elements of \(x\) and \(\texttt{pad}(z)\) concatenates \(D-d\) zeros to the end of \(z\). Since \(\texttt{slice}\) and \(\texttt{pad}\) are a pseudoinverse pair, so too are \(f\) and \(g\).
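This construction is easy to state concretely. The following minimal sketch (plain PyTorch, with \(w\) and \(h\) taken as identity maps purely for brevity, an assumption made only for this illustration) shows that slice and pad compose to the identity on the latent space, so \(f\circ g=\mathrm{id}\):

```python
import torch

D, d = 5, 2
pad = lambda z: torch.cat([z, z.new_zeros(D - d)])   # append D - d zeros
slice_ = lambda x: x[:d]                              # keep the first d entries

z = torch.randn(d)
print(torch.allclose(slice_(pad(z)), z))              # True: slice o pad = identity
```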
Rectangular flowsMinimizing the negative logarithm of eq. (2) and adding a Lagrange multiplier to restrict the distance of data points from the decoder manifold results in the following per-sample loss term:
\[\mathcal{L}_{\text{RF}}(x)=-\log p_{Z}(z)+\frac{1}{2}\log\det\left[g^{\prime}( z)^{\top}g^{\prime}(z)\right]+\beta\|\hat{x}-x\|^{2}, \tag{4}\]
where \(z=f(x)\) and \(\beta\) is a hyperparameter.
The log-determinant term is the difficult one to optimize. Fortunately, its gradient with respect to the parameters \(\theta\) of the decoder can be estimated tractably (Caterini et al., 2021). Note that \(g=g_{\theta}\) but the \(\theta\) subscript is dropped to avoid clutter. The relevant quantity is (with \(J=g^{\prime}(z)\)):
\[\frac{\partial}{\partial\theta_{j}}\frac{1}{2}\log\det(J^{\top}J)=\frac{1}{2} \operatorname{tr}\left((J^{\top}J)^{-1}\frac{\partial}{\partial\theta_{j}}(J ^{\top}J)\right). \tag{5}\]
The trace can be estimated with a trace estimator (Hutchinson, 1989; Girard, 1989):
\[\frac{\partial}{\partial\theta_{j}}\frac{1}{2}\log\det(J^{\top}J)\approx \frac{1}{2K}\sum_{k=1}^{K}\epsilon_{k}^{\top}(J^{\top}J)^{-1}\frac{\partial}{ \partial\theta_{j}}(J^{\top}J)\epsilon_{k}, \tag{6}\]
where the \(\epsilon_{k}\) are sampled from a distribution where \(E[\epsilon\epsilon^{\top}]=\mathbb{I}\), typically either Rademacher or standard normal. Matrix multiplication can be avoided by using vector-matrix and matrix-vector product subroutines. These are readily obtained by automatic differentiation in the case of vector products with \(J^{\top}J\) and by autodiff combined with the conjugate gradient iterative method for \((J^{\top}J)^{-1}\) products. Write \(\epsilon_{k}^{\top}(J^{\top}J)^{-1}=\texttt{CG}(J^{\top}J;\epsilon_{k})^{\top}\) where \(\texttt{CG}(A;b)\) denotes the solution to
\(Ax=b\) via conjugate gradient. The parameter derivative can be made to act only on the rightmost Jacobian terms by applying the stop-gradient operation to the output of the conjugate gradient method. The final surrogate loss function for the log-determinant term is therefore:
\[\frac{1}{2K}\sum_{k=1}^{K}\texttt{stop\_gradient}\left(\texttt{CG}\left(J^{ \top}J;\epsilon_{k}\right)^{\top}\right)J^{\top}J\epsilon_{k}, \tag{7}\]
which replaces the log-determinant term in the loss. The function stop_gradient returns its input, but makes it constant with respect to parameters. Note that the resulting loss does not have the same value as, but shares the gradient with the original loss in eq. (4).
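As a quick sanity check of the core identity behind eq. (6), the following numpy snippet (an illustrative sketch; the matrix and probe count are arbitrary choices, not taken from the paper) recovers the trace of a matrix from random probes with \(E[\epsilon\epsilon^{\top}]=\mathbb{I}\):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 50
A = rng.normal(size=(D, D))
M = A.T @ A                                  # any square matrix; PSD chosen for convenience

K = 10_000                                   # number of random probes
eps = rng.choice([-1.0, 1.0], size=(K, D))   # Rademacher noise with E[eps eps^T] = I
estimate = np.einsum("ki,ij,kj->k", eps, M, eps).mean()

print(np.trace(M), estimate)                 # agree up to Monte-Carlo error
```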
## 4 Maximum likelihood autoencoder (MLAE)
Our modification to rectangular flows is threefold: first, we use an unconstrained autoencoder architecture (no restrictively parameterized invertible functions); second, we introduce a more computationally efficient surrogate estimator; third, we modify the surrogate to avoid pathological behavior related to manifolds with high curvature. Our per-sample loss function is:
\[\mathcal{L}(x)=-\log p_{Z}(z)-\frac{1}{K}\sum_{k=1}^{K}\epsilon_{k}^{\top}f^{ \prime}(x)\,\texttt{stop\_gradient}\left(g^{\prime}(z)\epsilon_{k}\right)+ \beta\|\hat{x}-x\|^{2}, \tag{8}\]
with \(z=f(x)\). Note the negative sign before the surrogate term, which comes from sending the log-determinant gradient to the encoder rather than the decoder. We will derive and motivate this formulation of the loss in sections 4.1 to 4.2.
### Simplifying the surrogate
We considerably simplify the optimization of rectangular flows by a new surrogate for the log-determinant term, which uses the Jacobian of the encoder as an approximation for the inverse Jacobian of the decoder. This allows the surrogate to be computed in a single pass, avoiding costly conjugate gradient iterations.
We do this by expanding the derivative in eq. (5):
\[\frac{1}{2}\operatorname{tr}\left((J^{\top}J)^{-1}\frac{\partial}{\partial \theta_{j}}(J^{\top}J)\right)=\operatorname{tr}\left(J^{\dagger}\frac{ \partial}{\partial\theta_{j}}J\right), \tag{9}\]
where \(J^{\dagger}=(J^{\top}J)^{-1}J^{\top}\) is the pseudoinverse of \(J\). The full derivation is in appendix B. To see the advantage of this formulation consider that for a pseudoinverse pair \(f\) and \(g\):
\[\frac{\partial}{\partial z}f(g(z))=f^{\prime}(g(z))g^{\prime}(z)=\mathbb{I}, \tag{10}\]
since \(f\circ g\) is the identity function. By substituting \(f(x)\) for \(z\), we see that the pseudoinverse of the Jacobian of \(g\) evaluated at \(f(x)\) is just the Jacobian of \(f\) evaluated at \(\hat{x}\). Using the stop_gradient operation, this leads to the following surrogate loss term:
\[\frac{1}{K}\sum_{k=1}^{K}\texttt{stop\_gradient}\left(\epsilon_{k}^{\top}f^ {\prime}(\hat{x})\right)g^{\prime}(z)\epsilon_{k}, \tag{11}\]
or equivalently, using the encoder Jacobian in place of the decoder Jacobian (see appendix B):
\[-\frac{1}{K}\sum_{k=1}^{K}\epsilon_{k}^{\top}f^{\prime}(\hat{x})\texttt{ stop\_gradient}\left(g^{\prime}(z)\epsilon_{k}\right). \tag{12}\]
Each term of the sum can be computed from just two vector-Jacobian products obtained from backward-mode automatic differentiation. This is a significant improvement on the iterative conjugate gradient method needed in the original formulation of rectangular flows which requires up to \(2(d+1)\) vector-Jacobian or Jacobian-vector products to ensure convergence (Caterini et al., 2021). We measure \(\sim 1.5\times\) to \(2\times\) the wall clock time of our loss compared to reconstruction loss only, independent of the latent dimension. Note that the surrogate is only accurate if \(f\) is (at least approximately) a pseudoinverse for \(g\). We assume that this is sufficiently fulfilled in our model due to the presence of the reconstruction loss and we observe stable training in practice, thus validating the assumption.
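The identities in eqs. (9) and (10) can be verified numerically in the linear case. The sketch below (numpy; the shapes are arbitrary illustrative choices) compares a finite-difference derivative of \(\frac{1}{2}\log\det(J^{\top}J)\) with \(\operatorname{tr}(J^{\dagger}\,\partial_{\theta}J)\), and checks the pseudoinverse property \(f^{\prime}(g(z))g^{\prime}(z)=\mathbb{I}\) for a linear pair:

```python
import numpy as np

rng = np.random.default_rng(1)
D, d = 8, 3
J = rng.normal(size=(D, d))      # decoder Jacobian g'(z)
E = rng.normal(size=(D, d))      # a perturbation direction dJ/dtheta

half_logdet = lambda J: 0.5 * np.linalg.slogdet(J.T @ J)[1]

h = 1e-6
finite_diff = (half_logdet(J + h * E) - half_logdet(J - h * E)) / (2 * h)
analytic = np.trace(np.linalg.pinv(J) @ E)          # tr(J^+ dJ/dtheta), eq. (9)
print(finite_diff, analytic)                        # the two numbers agree closely

# eq. (10) for the linear pair g(z) = J z, f(x) = J^+ x: f'(g(z)) g'(z) = I
print(np.allclose(np.linalg.pinv(J) @ J, np.eye(d)))
```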
### Problems with maximum likelihood in the presence of a bottleneck
Rectangular flows are trained with a combination of a reconstruction and likelihood term. We might ask what happens if we only train with the likelihood term, making an analogy to normalizing flows. In this case our loss would be:
\[\mathcal{L}_{\text{NLL}}(x)=-\log p_{Z}(z)+\frac{1}{2}\log\det\left[g^{\prime}(z )^{\top}g^{\prime}(z)\right]. \tag{13}\]
Unfortunately, optimizing this loss can lead us to learn a degenerate decoder manifold, an issue raised in Brehmer and Cranmer (2020). Here we expand on their argument to show why the model will tend to learn a manifold which aligns with the lowest entropy directions of the data. We show how to fix this problem in the linear case with the addition of a reconstruction term but show that adding a reconstruction term in the nonlinear case is not sufficient to avoid pathological solutions.
First consider that the per-sample loss is invariant to projections: \(\mathcal{L}_{\text{NLL}}(\hat{x})=\mathcal{L}_{\text{NLL}}(x)\), since \(\mathcal{L}_{\text{NLL}}(x)\) is a function only of \(f(x)\) and \(f(\hat{x})=f(x)\). This means that we can write our loss as:
\[\mathcal{L}_{\text{NLL}}=E_{p_{\text{data}}(x)}[\mathcal{L}_{\text{NLL}}(x)]= E_{\hat{p}_{\text{data}}(x)}[\mathcal{L}_{\text{NLL}}(\hat{x})], \tag{14}\]
where \(\hat{p}_{\text{data}}(\hat{x})\) is the probability density of the projection of the training data onto the decoder manifold. Now consider that the negative log-likelihood loss is one part of a KL divergence, and KL divergences are always non-negative:
\[KL(\hat{p}_{\text{data}}(\hat{x})||p_{\theta}(\hat{x}))=-H(\hat{p}_{\text{ data}}(\hat{x}))-E_{p_{\text{data}}(\hat{x})}[\log p_{\theta}(\hat{x})]\geq 0. \tag{15}\]
As a result, the loss is lower bounded by the entropy of the data projected onto the manifold:
\[\mathcal{L}_{\text{NLL}}=-E_{\hat{p}_{\text{data}}(\hat{x})}[\log p_{\theta}( \hat{x})]\geq H(\hat{p}_{\text{data}}(\hat{x})). \tag{16}\]
Unlike in standard normalizing flow optimization, where the right hand side would be fixed, giving the loss a well-defined lower bound, in this case the entropy on the right depends on the learned model, and the loss can continue to decrease without bound by reducing the entropy of the projected data. See fig. 2 (_left_) for an illustration of a pathological case. This is a toy model with a 1-dimensional latent space where the decoder maps to a semicircle with learnable radius which is offset horizontally. The encoder projects orthogonally onto the manifold. Naively optimizing the likelihood leads to increasing curvature since this decreases the entropy of the projected data indefinitely.
Since it is clear that optimization of a likelihood loss without reconstruction will fail, we ask what happens as we reintroduce a reconstruction loss at various strengths. Suppose the loss is of the form:
\[\mathcal{L}=\mathcal{L}_{\text{NLL}}+\beta E_{p_{\text{data}}(x)}\left[\|\hat {x}-x\|^{2}\right]. \tag{17}\]
Figure 2: Naive training of autoencoders with negative log-likelihood (NLL, see section 4.2) leads to pathological solutions _(left)_. Starting with the initialization (\(t=0\), black), gradient steps increase the curvature of the learnt manifold (\(t=1,2\), orange). This reduces NLL because the entropy of the projected data is reduced, by moving the points closer to one another. This effect is stronger than the reconstruction loss. We fix this problem by evaluating the volume change off-manifold _(right)_. This moves the manifold closer to the data and reduces the curvature (\(t=1,2\), green), until it eventually centers the manifold on the data with zero curvature (\(t=\infty\), green). Light lines show the set of points which map to the same latent point. Data is projected onto the \(t=2\) manifold.
In appendix C the closed-form solution to this model is worked out in detail when \(f\) and \(g\) are linear functions. The model learns low-entropy directions of the data when \(\beta\) is too low but transitions to the PCA solution for large enough \(\beta\). This transition occurs at \(\beta=1/2\sigma^{2}\) where \(\sigma\) is the smallest eigenvalue of the data covariance matrix.
Unfortunately, when \(f\) and \(g\) are non-linear, the addition of a reconstruction term is not enough to fix the pathological behavior of the likelihood loss defined on the manifold. Consider again fig. 2, where the left-hand figure is optimized with both likelihood and reconstruction loss. Without additional constraints on the decoding function, the curvature can increase without bound, leading to a loss tending towards negative infinity.
Towards a well-behaved lossThe term which leads to pathological behavior in the likelihood loss is the log-determinant. In the original formulation, this expression is evaluated after projecting \(x\) to the manifold. We make the fairly simple modification of evaluating \(f^{\prime}\) at \(x\) rather than \(\hat{x}\). Namely, we modify eq. (12) to estimate the gradient of the log-determinant term by:
\[-\frac{1}{K}\sum_{k=1}^{K}\epsilon_{k}^{\top}f^{\prime}(x)\texttt{stop\_gradient} \left(g^{\prime}(z)\epsilon_{k}\right). \tag{18}\]
Using the change of variables with \(f^{\prime}\) evaluated at \(\hat{x}\), on the manifold, all that matters is the change of volume from the projected data to the latent space, so we can decrease \(-\log\det(f^{\prime}(\hat{x})f^{\prime}(\hat{x})^{\top})\) (and hence the loss) by choosing a manifold which concentrates the projected data more tightly. Concentration effects due to curvature happen when data is on the concave side of the manifold (as in fig. 2). In this setting, a perturbation to \(x\) will not change the resulting \(f(x)\) very much, an effect that becomes stronger with more curvature. As a result, \(f^{\prime}(x)\) becomes smaller as curvature increases. This increases the loss and thus counteracts the undesired concentration effect. In this way, we discourage pathological solutions involving high curvature. In fig. 2 (_right_) we can see the effect of the modified estimator: the manifold now moves towards the data since the optimization is not dominated by diverging curvature.
Along with the results of section 4.1, this leads to the following loss (same as eq. (8)):
\[\mathcal{L}(x) =\tilde{\mathcal{L}}_{\text{NLL}}(x)+\beta\mathcal{L}_{\text{ recon.}}(x) \tag{19}\] \[=-\log p_{Z}(z)-\frac{1}{K}\sum_{k=1}^{K}\epsilon_{k}^{\top}f^{ \prime}(x)\texttt{stop\_gradient}\left(g^{\prime}(z)\epsilon_{k}\right)+ \beta\|\hat{x}-x\|^{2}. \tag{20}\]
Note that we could have modified eq. (11) (replacing \(\hat{x}\) with \(x\)) instead of modifying eq. (12), but we found that giving gradient from the log-determinant term to the encoder led to more stable training.
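For concreteness, the following PyTorch sketch assembles the per-sample loss of eq. (8) from two Jacobian-vector products per Hutchinson sample. It is a minimal illustration rather than the released implementation (see the repository linked above); the network sizes and hyperparameters are placeholder choices, and `torch.autograd.functional` is used for the products.

```python
import torch
import torch.nn as nn
from torch.autograd.functional import jvp, vjp

# Placeholder encoder/decoder and hyperparameters -- illustrative values only.
D, d, beta, K = 784, 16, 10.0, 1
f_enc = nn.Sequential(nn.Linear(D, 256), nn.SiLU(), nn.Linear(256, d))   # encoder f
g_dec = nn.Sequential(nn.Linear(d, 256), nn.SiLU(), nn.Linear(256, D))   # decoder g

def mlae_loss(x):
    z = f_enc(x)
    x_hat = g_dec(z)
    nll = 0.5 * (z ** 2).sum(dim=-1)        # -log p_Z(z) for a standard normal (up to a constant)
    surrogate = 0.0
    for _ in range(K):
        eps = torch.randn_like(z)
        # v = g'(z) eps: decoder Jacobian-vector product, held constant (stop-gradient)
        _, v = jvp(g_dec, (z.detach(),), (eps,))
        # w = eps^T f'(x): encoder vector-Jacobian product, differentiable w.r.t. encoder params
        _, w = vjp(f_enc, x, eps, create_graph=True)
        surrogate = surrogate + (w * v.detach()).sum(dim=-1)
    recon = ((x_hat - x) ** 2).sum(dim=-1)
    # eq. (8): NLL - surrogate/K + beta * reconstruction, averaged over the batch
    return (nll - surrogate / K + beta * recon).mean()

x = torch.randn(32, D)                      # a dummy batch standing in for data
loss = mlae_loss(x)
loss.backward()                             # gradients reach encoder and decoder parameters
```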
## 5 Experiments
In this section, we demonstrate the empirical success of the proposed model. First, we analyze the influence of the reconstruction weight on model behavior. Second, we compare our model to rectangular flows on tabular data. Finally, we show competitive performance on the Pythae image generation benchmark (Chadebec et al., 2022), achieving the best FID score in some categories.
Code to implement the model and to reproduce our results is provided at [https://github.com/vislearn/MLAE](https://github.com/vislearn/MLAE).
### Characterization of model behavior
Implementation detailsIn implementing the trace estimator, we have to make a number of choices. Briefly, i) we chose to formulate the log-determinant gradient in terms of the encoder rather than decoder as it was more stable in practice, ii) we performed traces in the order \(f^{\prime}(x)g^{\prime}(z)\) as this reduces variance, iii) we used a mixture of forward- and backward-mode autograd as this was compatible with our estimator, and iv) we used orthogonalized Gaussian noise in the trace estimator. Full justification for these choices is given in appendix D.
Toy dataTo further illustrate the problems of maximum likelihood training in the presence of a bottleneck, we consider a 2-D sinusoid subjected to Gaussian noise (fig. 3). The autoencoder should converge to a manifold that spans the sinusoid. However, this only happens when the reconstruction weight \(\beta\) in the loss eq. (8) is sufficiently high (fig. 3 right). Otherwise, our analysis in section 4.2 predicts that the loss is minimized by a projection with minimal entropy. In other words, the latent codes for small \(\beta\) represent the direction _orthogonal_ to the manifold, and the autoencoder learns the noise instead of the sinusoid (fig. 3 left). We demonstrate in appendix E.1 that the point at which \(\beta\) becomes sufficiently large varies between datasets.
Conditional MNISTWe train models on MNIST, conditioned on the digit labels, across different reconstruction weights and analyze how well each model reconstructs the data manifold. We measure this by the entropy of the learned distribution and the visual diversity of the generated samples, which both increase with reconstruction weight, as expected. We find that the convergence towards a Gaussian latent distribution slows down for very large reconstruction weights, as the gradient is then dominated by the reconstruction loss, see appendix E.2.
### Model benchmarking
Tabular DataWe evaluate our method on four of the tabular datasets used by Papamakarios et al. (2017), using the same data splits, and make a comparison to the published rectangular flow results (Caterini et al., 2021), see table 1. We adopt the "FID-like metric" from that work, which computes the Wasserstein-2 distance between the closest Gaussian distributions to the test data and to the data generated by the model. This is a measure of the difference of the means and covariance matrices of the generated and test datasets. We note that the results are comparable, with our method underperforming on GAS but achieving a better result on MINIBOONE. However we emphasize the greatly reduced runtime of our method: while the rectangular flow experiments were trained for around 1-2 hours per run, our runs each took under 10 minutes. Full experimental and runtime details are in appendix E.3.
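For reference, the FID-like metric reduces to the Wasserstein-2 distance between two Gaussian fits. A minimal numpy/scipy sketch is given below; conventions such as reporting the distance versus its square may differ from the original evaluation code.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_like(x_gen, x_test):
    # Wasserstein-2 distance between the Gaussians fit to generated and test samples.
    mu1, mu2 = x_gen.mean(axis=0), x_test.mean(axis=0)
    s1 = np.cov(x_gen, rowvar=False)
    s2 = np.cov(x_test, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):            # strip tiny imaginary parts from numerical noise
        covmean = covmean.real
    w2_sq = np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean)
    return float(np.sqrt(max(w2_sq, 0.0)))

# Example with two synthetic samples standing in for generated and test data
rng = np.random.default_rng(0)
a = rng.normal(size=(5000, 8))
b = rng.normal(loc=0.1, size=(5000, 8))
print(fid_like(a, b))                       # small but nonzero distance
```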
Image generation on Pythae benchmarkWe test our approach in the Pythae benchmark for generating images with autoencoders (Chadebec et al., 2022). We train the architectures they propose using our loss from section 4.2. For selecting hyperparameters, we follow the instructions from the benchmark by selecting the best model by validation FID from 10 different configurations:
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Method & POWER & GAS & HEPMASS & MINIBOONE \\ \hline Rectangular Flows (\(K=1\)) & **0.083 \(\pm\) 0.015** & **0.110 \(\pm\) 0.021** & **0.779 \(\pm\) 0.191** & 1.001 \(\pm\) 0.051 \\ MLAE (_ours_) (\(K=1\)) & **0.109 \(\pm\) 0.052** & 0.377 \(\pm\) 0.038 & **0.710 \(\pm\) 0.020** & **0.772 \(\pm\) 0.034** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of performance on FID-like (lower is better) metric between rectangular flows and our method on tabular datasets (Papamakarios et al., 2017).
Figure 3: Learning a noisy 2-D sinusoid with a 1-D latent space for different reconstruction weights \(\beta\). Color codes denote the value of the latent variable at each location. When the reconstruction term has low weight (_left_), the autoencoder learns to throw away information about the position along the sinusoid and only retains the orthogonal noise. Only sufficiently high weights (_right_) result in the desired solution, where the decoder spans the sinusoid manifold. The middle plot shows the tradeoff between reconstruction error and NLL as we transition between these regimes (box plots indicate variability across runs).
We test different reconstruction weights \(\beta=5,10,15,20,25\), and numbers of Hutchinson samples \(K=1,2\). We give details on the training procedure in appendix E.4.
The Pythae benchmark trains on images from MNIST (LeCun et al., 2010) (data \(D=784\), latent \(d=16\)), CIFAR10 (Krizhevsky, 2009) (\(D=3072,d=256\)), and CelebA (Liu et al., 2015) (\(D=12288,d=64\)). To get latent samples, Pythae proposes to sample from a standard normal, and a GMM with 10 components fit to the training latent codes.
We report Inception Score (IS) (Salimans et al., 2016) and Frechet Inception Distance (FID) (Heusel et al., 2017) on CelebA in table 2. We achieve SOTA on the ResNet architecture, and perform competitively on the ConvNet architecture. We point to appendix E.4 for detailed results on CelebA, MNIST and CIFAR10.
We notice that we can further improve performance via a small but important change to the architecture: The architectures in Pythae consist of several convolutional layers, followed by a single fully-connected layer which projects to the latent dimension. Adding more fully-connected layers, we can further improve our FID score to \(47.6\) with the normal sampler, and \(39.7\) with the GMM sampler (see table 2 and fig. 1 for samples). This only adds \(2\%\) additional parameters compared to the Pythae ConvNet. On MNIST, we find that a network with only \(1/5\)th of the number of parameters of the corresponding Pythae model reduces the FID by a factor of \(1/2\) for the normal sampler. We argue that the inductive bias from a mainly convolutional network prevents having latent dimensions that are independent from one another as their receptive field is limited.
Due to the high computational cost of running the 570 new benchmark models that would result from adding a new architecture to Pythae (requiring several hundreds to thousand GPU hours), we leave a full re-run of this optimization for future work.
Based on the impact of minor changes to the model architecture, we believe that the full potential of our model stands to be realized.
## 6 Conclusions
This paper offers a computationally efficient solution to maximum likelihood training on manifolds, which we call the maximum-likelihood autoencoder (MLAE). We significantly improve an existing estimator, note that it can be applied to unconstrained autoencoders, analyze problems with naive maximum-likelihood training and offer a solution, and implement and test our model on toy, tabular and image datasets. We find that the model is practical and scalable, showing similar or better performance to other autoencoder generative models on a benchmark.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{ConvNet + \(\mathcal{N}\)} & \multicolumn{2}{c|}{ResNet + \(\mathcal{N}\)} & \multicolumn{2}{c}{ConvNet + GMM} & \multicolumn{2}{c}{ResNet + GMM} \\ & FID \(\downarrow\) & IS \(\uparrow\) & FID & IS & FID & IS & FID & IS \\ \hline VAE (Kingma and Welling, 2013) & 54.8 & 1.9 & 66.6 & 1.6 & 52.4 & 1.9 & 63.0 & 1.7 \\ IWAE (Burda et al., 2015) & 55.7 & 1.9 & 67.6 & 1.6 & 52.7 & 1.9 & 64.1 & 1.7 \\ VAE-lin NF (Rezende and Mohamed, 2015) & 56.5 & 1.9 & 67.1 & 1.6 & 53.3 & 1.9 & 62.8 & 1.7 \\ VAE-IAF (Kingma et al., 2016) & 55.4 & 1.9 & 66.2 & 1.6 & 53.6 & 1.9 & 62.7 & 1.7 \\
\(\beta\)-(TC)VAE (Higgins et al., 2017; Chen et al., 2018) & 55.7 & 1.8 & 65.9 & 1.6 & 51.7 & 1.9 & 59.3 & 1.7 \\ FactorVAE (Kim and Mnih, 2018) & 53.8 & 1.9 & 66.4 & 1.7 & 52.4 & 2.0 & 63.3 & 1.7 \\ InfoVAE (-RBF/IMQ) (Zhao et al., 2017) & 55.5 & 1.9 & 66.4 & 1.6 & 52.7 & 1.9 & 62.3 & 1.7 \\ AAE (Makhzani et al., 2015) & 59.9 & 1.8 & 64.8 & 1.7 & 53.9 & 2.0 & 58.7 & 1.8 \\ MSSSIM-VAE (Snell et al., 2017) & 124.3 & 1.3 & 119.0 & 1.3 & 124.3 & 1.3 & 119.2 & 1.3 \\ \hline VAEGAN (Larsen et al., 2016) & **39.7** & 1.9 & 122.8 & 2.0 & **35.6** & 1.8 & 84.3 & 1.7 \\ \hline WAE (-RBF/IMQ) (Tolstikhin et al., 2017) & 64.6 & 1.7 & 67.1 & 1.6 & 51.7 & 2.0 & 57.7 & 1.8 \\ \hline VQVAE (Van Den Oord et al., 2017) & 306.9 & 1.0 & 1403.2 & 2.2 & 51.6 & 2.0 & 57.9 & 1.8 \\ RAE (-L2/GP) (Ghosh et al., 2019) & 86.1 & 2.8 & 168.7 & 3.1 & 52.5 & 1.9 & 58.3 & 1.8 \\ \hline
**MLAE (_ours_)** & 56.9 & 2.1 & **62.3** & 1.7 & 47.3 & 1.9 & **55.0** & 1.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Benchmark results on CelebA**, following Chadebec et al. (2022). We train their architectures (ConvNet and ResNet) with our new training objective, achieving SOTA FID on ResNet. We draw latent samples from standard normal “A/” or a GMM fit using training data “GMM”. Models with multiple variants (indicated in brackets) have been merged to indicate only the best result across variants. We mark the best FID in each column in bold and underline the second best. As the benchmark chooses the best of 10 models to report, no standard deviations are provided.
As our work is concerned with improving generative models, we do not foresee any direct negative societal consequences resulting from our work, though note that it is possible to misuse generative models.
Several theoretical and practical questions remain for future work. We motivate our improved loss (section 4.2) heuristically, but are unsure what objective the gradient corresponds to. Fitting a GMM to the latent space after training improves performance, suggesting that our latent distributions are not perfectly Gaussian. We speculate that this could be due to the encoder and decoder not being a true pseudoinverse pair, or is a consequence of manifold overfitting (Loaiza-Ganem et al., 2022), or could be due to the inductive bias of the benchmark architecture and leave potential theoretical or practical improvements to future work.
## 7 Acknowledgements
This work is supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Cluster of Excellence). AR acknowledges funding from the Carl-Zeiss-Stiftung. LZ acknowledges support by the German Federal Ministry of Education and Research (BMBF) (project EMUNE/03110293A). We thank the Center for Information Services and High Performance Computing (ZIH) at TU Dresden for its facilities for high throughput calculations. The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant INST 35/1597-1 FUGG.
|
2305.09797 | Analytical study of particle geodesics around a scale-dependent de
Sitter black hole | We give a fully analytical description of radial and angular geodesics for
massive particles that travel in the spacetime provided by a (3+1)-dimensional
scale-dependent black hole in the cosmological background, for which, the
quantum corrections are assumed to be small. We show that the equations of
motion for radial orbits can be solved by means of Lauricella hypergeometric
functions with different numbers of variables. We then classify the angular
geodesics and argue that no planetary bound orbits are available. We calculate
the epicyclic frequencies of circular orbits at the potential's maximum and the
deflection angle of scattered particles is also calculated. Finally, we resolve
the raised Jacobi inversion problem for the angular motion by means of a
genus-2 Riemannian theta function, and the possible orbits are derived and
discussed. | Mohsen Fathi | 2023-05-16T20:40:04Z | http://arxiv.org/abs/2305.09797v3 | # Analytical study of particle geodesics around a scale-dependent de Sitter black hole
###### Abstract
We give a fully analytical description of radial and angular geodesics for massive particles that travel in the spacetime provided by a \((3+1)\)-dimensional scale-dependent black hole in the cosmological background, for which, the quantum corrections are assumed to be small. We show that the equations of motion for radial orbits can be solved by means of Lauricella hypergeometric functions with different numbers of variables. We then classify the angular geodesics and argue that no planetary bound orbits are available. We calculate the epicyclic frequencies of circular orbits at the potential's maximum and the deflection angle of scattered particles is also calculated. Finally, we resolve the raised Jacobi inversion problem for the angular motion by means of a genus-2 Riemannian theta function, and the possible orbits are derived and discussed.
_keywords_: Black holes, time-like geodesics, scale-dependent gravity, cosmological constant
pacs: 04.20.Fy, 04.20.Jb, 04.25.-g
## I Introduction
The reconciliation of geometry and gravity by the general theory of relativity is shown by the investigation of free-falling objects in the gravitational fields, where the curvature of spacetime plays the main role. In fact, the argument that planets and light do travel on geodesics was the main reason that general relativity could receive some popularity soon after its birth. In this regard, and after the proposition of the Schwarzschild solution [1], the prediction and measurement of light deflection around the Sun, proven during the 1919 solar eclipse expedition [2], and the accurate evaluation of the anomalous precession in the perihelion of Mercury [3], can be named as the first two primary tests of general relativity. In fact, according to the non-linear nature of the partial differential equations that appear in the dynamics of moving particles in curved spacetimes, the above observational tests and the similar ones which are still in progress, have been based on the simplified results obtained from the approximate or numerical manipulations of the geodesic equations. On the other hand, it is of significant advantage to have in hand the analytical expressions. First, because they may serve as the touchstone for the numerical methods and approximations, and second, they can be used to make a complete systematic study of the parameter space, and hence, to make further predictions of the astrophysical observables. Accordingly, and since Hagihara's 1931 studies on the geodesics of particles in Schwarzschild spacetime [4], which was then followed by Darwin, Mielnik, and Plebanski [5; 6; 7], efforts to find exact analytical solutions for the geodesic equations of massive and massless particles have been on the rise. In particular, the application of modular forms in solving the arising (hyper-)elliptic integrals in the study of geodesics has received considerable attention in the last two decades. These methods which are based on the theories of elliptic functions and modular forms, were studied by nominated nineteenth-century mathematicians such as Jacobi [8], Abel [9], Riemann [10; 11], and Weierstrass [12] (see also Ref. [13] for a complete textbook review on these discoveries). Accordingly, numerous investigations have been devoted to the analysis of the time-like and null geodesics in static and stationary black hole spacetimes inferred from general relativity and its extensions, in which, the raised (hyper-)elliptic integrals are treated by means of hypergeometric, elliptic, and the Riemannian theta functions of the different genus (see for example Refs. [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]).
It is, however, important to mention that although general relativity has appeared successful in the course of the aforementioned astrophysical tests, this theory has not yet answered the long-lasting questions concerning the quantum nature of gravity. Hence, it is indispensable to search for a consistent theory of quantum gravity, which is one of the famous quests in modern theoretical physics. In fact, in most cases, scientists try to take into account the scale-dependence of the gravitational action's couplings, once the quantum effects appear [43; 44; 45; 46; 47; 48; 49; 50; 51]. In this sense, the theories of gravity become scale-dependent (SD) at the quantum level. The SD theories of gravity have received notable attention during the last years, and in particular, their relevance to black hole spacetimes has been studied widely [52; 53; 54; 55; 56; 57; 58; 59; 60].
Also in this work, we consider a special \((3+1)\)-dimensional (4D) static spherically symmetric SD spacetime for black holes in the cosmological background given in Ref. [59]. In the same interest as at the beginning of this section, we study and derive the exact analytical solutions to the equations of motion for massive particles moving in the exterior geometry of this black hole. The analysis requires a precise treatment of hyper-elliptic integrals with special properties, and we provide several methods for
deriving the solutions. In particular, we exploit the Lauricella hypergeometric functions of different numbers of variables. Note that, the definite Lauricella functions have been used to calculate the period of planar and non-planar bound orbits in black hole spacetimes. In this study, for the first time, we present the indefinite Lauricella functions as the solutions to hyper-elliptic integrals that appear in the calculation of the radial geodesics and use them to simulate the possible orbits. The paper is organized as follows: In Sect. II we provide a brief introduction to the SD theory and introduce the black hole solution. This is followed by the derivation of the horizons and the causal structure of the spacetime. In Sect. III, we construct the Lagrangian dynamics which is used to study the geodesics. In Sect. IV we begin our discussion, starting from the radial geodesics. The relevant effective potential and the corresponding types of orbits are derived and discussed in detail. In Sect. V, we switch to the angular geodesics, which includes the analysis of the effective potential and the possible orbits. In this section, the scattering angle of deflected particles and the stability of circular orbits are also discussed. Within the paper, all kinds of orbits are plotted appropriately to demonstrate their properties. We conclude in Sect. VI. Throughout this work, we apply a geometrized unit system, in which \(G=c=1\). Also wherever appears, prime denotes differentiation with respect to the \(r\)-coordinate.
## II The SD black hole solution
In the SD theories of gravity, the classical general relativistic solutions are extended by means of some SD coupling parameters that compensate for the quantum corrections. For the particular case which is of interest in this study, there are two coupling parameters that contribute to the construction of the theory; the _running_ cosmological constant \(\Lambda_{k}\) and the running Newton's gravitational constant \(G_{k}\), where \(k(r)\) plays the role of an arbitrary re-normalization scale. This way, and by including the metric tensor \(g_{\mu\nu}\) as the main ingredient, the Einstein field equations take the SD form [59]
\[G_{\mu\nu}+\Lambda_{k}g_{\mu\nu}=\kappa_{k}T^{\rm eff}_{\mu\nu}, \tag{1}\]
in which \(\kappa_{k}=8\pi G_{k}\), and the effective energy-momentum tensor is given by
\[\kappa_{k}T^{\rm eff}_{\mu\nu}=\kappa_{k}T^{\rm M}_{\mu\nu}-\Delta t_{\mu\nu}, \tag{2}\]
in terms of the matter \(T^{\rm M}_{\mu\nu}\), and the \(G\)-varying
\[\Delta t_{\mu\nu}=G(r)\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)G^{- 1}(r), \tag{3}\]
parts of the energy-momentum tensor, where \(\Box\equiv\nabla^{\lambda}\nabla_{\lambda}\). Now the null energy condition implies that \(\mathcal{O}\left(k(r)\right)\rightarrow\mathcal{O}(r)\), and hence, only the radial variations are considered. This way, the static spacetime of the SD de Sitter (SDdS) black hole is found as
\[{\rm d}s^{2}=-f(r){\rm d}t^{2}+f^{-1}(r){\rm d}r^{2}+r^{2}{\rm d}\theta^{2}+r^{ 2}\sin^{2}\theta{\rm d}\phi^{2}, \tag{4}\]
in the usual Schwarzschild coordinates \(x^{\mu}=(t,r,\theta,\phi)\), where the lapse function is given by [59]
\[f(r)=1-\frac{M\left[-2+3r\epsilon(1-2r\epsilon)\right]}{r}+\frac{r}{6}\left(- 6\epsilon+9\epsilon r^{2}-2\Lambda r\right)\,+r^{2}\epsilon^{2}\left(1+6M \epsilon\right)\ln\left(1+\frac{1}{r\epsilon}\right), \tag{5}\]
describing the exterior geometry of an object of mass \(M\). Here, \(\Lambda>0\) is the classical cosmological constant, and \(\epsilon>0\) is the SD running parameter. Recently, this solution has been analyzed in Ref. [61] regarding the shadow and the deflection angle of light rays. Note that \([\Lambda]={\rm m}^{-2}\) and \([\epsilon]={\rm m}^{-1}\). In this study, we consider small effects from the quantum corrections, in the sense that only up to the first order of the term \(\epsilon r\) is taken into account. This way, we recast the lapse function (5) as
\[f(r)=1-\frac{2M}{r}+\left(3M-r\right)\epsilon-\frac{1}{3}\Lambda r^{2}. \tag{6}\]
In Ref. [62], this particular form has been used to study the analytical solutions for propagating null geodesics. As expected, for \(\epsilon\to 0\) the classical Schwarzschild-de Sitter spacetime is recovered. To facilitate the calculations, we do the transformation \(r\to Mr\), which is equivalent to letting \(M=1\). We also let \(\Lambda/3\rightarrow\tilde{\Lambda}\). The causal structure of this spacetime is determined by means of the solutions to the equation \(f(r)=0\), which results in the three values
\[r_{1} =-\frac{4}{\tilde{\Lambda}}\sqrt{\frac{g_{2}}{3}}\cos\left(\frac{1}{3}\arccos\left(\frac{3g_{3}}{g_{2}}\sqrt{\frac{3}{g_{2}}}\right)-\frac{4\pi}{3}\right)-\frac{\epsilon}{3\tilde{\Lambda}}, \tag{7}\] \[r_{2} =-\frac{4}{\tilde{\Lambda}}\sqrt{\frac{g_{2}}{3}}\cos\left(\frac{1}{3}\arccos\left(\frac{3g_{3}}{g_{2}}\sqrt{\frac{3}{g_{2}}}\right)-\frac{2\pi}{3}\right)-\frac{\epsilon}{3\tilde{\Lambda}}, \tag{8}\] \[r_{3} =-\frac{4}{\tilde{\Lambda}}\sqrt{\frac{g_{2}}{3}}\cos\left(\frac{1}{3}\arccos\left(\frac{3g_{3}}{g_{2}}\sqrt{\frac{3}{g_{2}}}\right)\right)-\frac{\epsilon}{3\tilde{\Lambda}}, \tag{9}\]
in which
\[g_{2} =\frac{1}{12}\left(3\tilde{\Lambda}+\epsilon^{2}+9\tilde{\Lambda} \epsilon\right), \tag{10a}\] \[g_{3} =\frac{1}{432}\left(54\tilde{\Lambda}^{2}+2\epsilon^{3}+27\tilde{ \Lambda}\epsilon^{2}+9\tilde{\Lambda}\epsilon\right). \tag{10b}\]
The discriminant of the equation \(f(r)=0\) is of the form \(\Delta=g_{2}^{3}-27g_{3}^{2}\approx\frac{\tilde{\Lambda}^{3}}{64}(1-27\tilde{ \Lambda})+\mathcal{O}(\epsilon^{2})\), which is always positive for \(\tilde{\Lambda}\ll 1\). Hence, the radii in Eqs. (7)-(9) are real-valued and it is straightforward to check that \(r_{1}>r_{2}>0\) and \(r_{3}<0\). Accordingly, the black hole has a cosmological horizon at \(r_{++}=r_{1}\) and an event horizon at \(r_{+}=r_{2}\). This way, the lapse function can be recast as
\[f(r)=\frac{\tilde{\Lambda}}{r}\left(r_{++}-r\right)\left(r-r_{+}\right)\left( r-r_{3}\right). \tag{11}\]
In Fig. 1, the radial profile of the lapse function \(f(r)\) has been plotted for some small values of the \(\epsilon\)-parameter.
Figure 1: The radial profile of \(f(r)\), plotted for different values of \(\epsilon\) and \(\tilde{\Lambda}=3\times 10^{-4}\). The diagrams correspond to the profiles of (a) the original lapse function (5), and (b) the first order estimation in Eq. (6).
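As a quick numerical illustration (a sketch assuming numpy; not part of the analytical treatment above), the horizon radii of the first-order lapse function (6) follow from the cubic \(rf(r)=0\), and the factored form (11) can be checked directly:

```python
import numpy as np

eps, lam = 0.02, 3e-4                     # running parameter and rescaled cosmological constant

# f(r) = 0 is equivalent to the cubic  lam r^3 + eps r^2 - (1 + 3 eps) r + 2 = 0
roots = np.roots([lam, eps, -(1.0 + 3.0 * eps), 2.0]).real
r_cosmo, r_event = sorted(r for r in roots if r > 0)[::-1]   # r_++ > r_+ > 0
r_neg = roots[roots < 0][0]                                   # the negative root r_3

f = lambda r: 1 - 2 / r + (3 - r) * eps - lam * r ** 2
r_test = 10.0                                                 # arbitrary radius between the horizons
factored = lam / r_test * (r_cosmo - r_test) * (r_test - r_event) * (r_test - r_neg)
print(r_cosmo, r_event, r_neg, np.isclose(f(r_test), factored))   # factored form (11) reproduces f
```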
## III Lagrangian dynamics for motion of massive particles
The motion of massive particles in the spacetime provided by the line element (4), can be described by the Lagrangian
\[2\mathcal{L} = g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu} \tag{12}\] \[= -f(r)\dot{t}^{2}+\frac{\dot{r}^{2}}{f(r)}+r^{2}\left(\dot{\theta} ^{2}+\sin^{2}\theta\dot{\phi}^{2}\right),\]
in which \(\dot{x}^{\mu}\equiv\mathrm{d}x^{\mu}/\mathrm{d}\tau\), where \(\tau\) is the affine parameter of the geodesic curves. One can consider the conjugate momenta
\[\Pi_{\mu}=\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}, \tag{13}\]
which based on the Killing symmetries of the spacetime, introduces the two constants of motion
\[\Pi_{t}\equiv-f(r)\dot{t}\equiv-E, \tag{14}\] \[\Pi_{\phi}=r^{2}\dot{\phi}\equiv L, \tag{15}\]
with \(E\) and \(L\), termed respectively, as the energy and the angular momentum of the test particles1. The time-like trajectories are distinguished by letting \(2\mathcal{L}=-1\). This way, and by confining ourselves to the equatorial plane (i.e. \(\theta=\pi/2\)), the equations of
motion are obtained as
\[\dot{r}^{2}=E^{2}-V(r), \tag{16}\] \[\left(\frac{\mathrm{d}r}{\mathrm{d}t}\right)^{2}=\frac{f(r)^{2}}{E^ {2}}\left[E^{2}-V(r)\right],\] (17) \[\left(\frac{\mathrm{d}r}{\mathrm{d}\phi}\right)^{2}=\frac{r^{4}}{ L^{2}}\left[E^{2}-V(r)\right], \tag{18}\]
in which
\[V(r)=f(r)\left(1+\frac{L^{2}}{r^{2}}\right), \tag{19}\]
is the effective gravitational potential felt by the approaching particles. We begin our investigation by studying the radial trajectories.
## IV Radial motion
The study of infalling particles with zero angular momentum can have numerous advantages regarding the standard general relativistic tests. For example, the theoretical foundations of the so-called gravitational clock effect for falling observers in the gravitational fields are based on radial orbits, which is also related to the gravitational redshift-blueshift of light rays passing a black hole. Another example is the well-known _frozen_ infalling objects when they are observed by distant observers as they approach the black hole's event horizon. This is related to the difference between the perception of comoving and distant observers, as they observe infalling objects onto the black hole [63; 64]. In this case, the effective potential takes the form \(V_{r}(r)=f(r)\), whose radial profile has been shown in Fig. 2. The effective potential exhibits a maximum, so the motion becomes unstable where \(V_{r}^{\prime}(r)=0\), which results in the radial position
\[d_{u}=\frac{2}{\tilde{\Lambda}}\sqrt{\frac{\tilde{g}_{2}}{3}}\cos\left(\frac{ 1}{3}\arccos\left(\frac{3\tilde{g}_{3}}{\tilde{g}_{2}}\sqrt{\frac{3}{\tilde{g} _{2}}}\right)\right)-\frac{\epsilon}{6\tilde{\Lambda}}, \tag{20}\]
where
\[\tilde{g}_{2} =\frac{\epsilon^{2}}{12}, \tag{21a}\] \[\tilde{g}_{3} =\frac{\tilde{\Lambda}^{2}}{4}\left(2-\frac{\epsilon^{3}}{54\tilde{\Lambda}^{2}}\right). \tag{21b}\]
Figure 2: The radial effective potential, together with examples of the turning points, plotted for \(\epsilon=0.02\) and \(\tilde{\Lambda}=3\times 10^{-4}\). In this case, the maximum radial distance for unstable orbits is \(d_{u}=8.86\) for which \(E^{2}=E_{u}^{2}=0.63\). The two turning points \(d_{s}=21.44\) and \(d_{f}=3.40\) have been also shown, whose corresponding energy value is \(E^{2}=0.4\). The turning point \(d_{s}\) indicates the distance, at which \(0<E^{2}<E_{u}^{2}\), and the frontal scattering of particles occurs.
The value in Eq. (20), is the maximum distance for unstable orbits, which corresponds to the energy \(E_{u}\equiv V_{r}(d_{u})\). This way, one can categorize the radial orbits as follows:
* _Frontal scattering of the first and second kinds (FSFK and FSSK)_: For \(0<E^{2}<E_{u}^{2}\), the orbits correspond to the FSFK when they encounter the turning point \(d_{s}\) (for which \(d_{u}<d_{s}<r_{++}\)), or to the FSSK when they start from the turning point \(d_{f}\) (for which \(r_{+}<d_{f}<d_{u}\)). In the case of the FSFK, the particles recede from the black hole after scattering, while for the FSSK, the particles fall inexorably onto the event horizon.
* _Critical radial orbits_: In the case of \(E^{2}=E_{u}^{2}\), depending on the initial distance of approach, the test particles encounter different fates. In this sense, when the particles approach from \(d_{i}\) (for which \(d_{u}<d_{i}<r_{++}\)), they fall on the radius \(d_{u}\), whereas when they come from \(d_{0}\) (for which \(r_{+}<d_{0}<d_{u}\)), they are captured by the black hole. These two categories constitute the critical radial orbit of the first and second kinds (CROFK and CROSK).
* _Radial capture_: For \(E^{2}>E_{u}^{2}\), the particles coming from a finite distance \(d_{j}\) (for which \(r_{+}<d_{j}<r_{++}\)), will fall onto the event horizon.
In fact, by using the expression in Eq. (11), the equations of motion for radial orbits can be rewritten as
\[\dot{r}^{2}=\frac{\mathfrak{p}_{3}(r)}{r}, \tag{22}\] \[\left(\frac{\mathrm{d}r}{\mathrm{d}t}\right)^{2}=\frac{\tilde{ \Lambda}^{2}(r_{++}-r)^{2}(r-r_{+})^{2}(r-r_{3})^{2}\mathfrak{p}_{3}(r)}{E^{2 }r^{3}}, \tag{23}\]
in which
\[\mathfrak{p}_{3}(r)=\tilde{\Lambda}r^{3}-(r_{3}+r_{+}+r_{++})\,\tilde{\Lambda} r^{2}+\left[E^{2}+r_{3}(r_{+}+r_{++})\tilde{\Lambda}+r_{+}r_{++}\tilde{ \Lambda}\right]r-r_{3}r_{+}r_{++}\tilde{\Lambda}. \tag{24}\]
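Before classifying the solutions further, it is straightforward to check the quantities entering the above classification numerically. The sketch below (numpy; parameter values follow Fig. 2) obtains \(d_{u}\) and \(E_{u}^{2}\) from \(V_{r}^{\prime}(r)=0\) and the turning points from the zeros of \(\mathfrak{p}_{3}(r)\):

```python
import numpy as np

eps, lam, E2 = 0.02, 3e-4, 0.4
f = lambda r: 1 - 2 / r + (3 - r) * eps - lam * r ** 2

# d_u solves V_r'(r) = f'(r) = 0, i.e.  2 lam r^3 + eps r^2 - 2 = 0
d_u = max(np.roots([2 * lam, eps, 0.0, -2.0]).real)
E2_u = f(d_u)                                    # critical energy of the unstable radial orbit

# zeros of p_3(r) = r (E^2 - f(r)) = lam r^3 + eps r^2 + (E^2 - 1 - 3 eps) r + 2
d1, d_f, d_s = sorted(np.roots([lam, eps, E2 - 1.0 - 3.0 * eps, 2.0]).real)

print(d_u, E2_u)     # ~8.9 and ~0.63, consistent with the values quoted in Fig. 2
print(d_s, d_f, d1)  # outer/inner turning points ~21.4 and ~3.4, plus the negative root d_1
```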
### FSFK and FSSK
The characteristic polynomial (24) vanishes at the radial distances
\[d_{1} =\frac{4}{\tilde{\Lambda}}\sqrt{\frac{\bar{g}_{2}}{3}}\cos\left( \frac{1}{3}\arccos\left(\frac{3\bar{g}_{3}}{\bar{g}_{2}}\sqrt{\frac{3}{\bar{ g}_{2}}}\right)-\frac{4\pi}{3}\right)+\frac{r_{++}+r_{+}+r_{3}}{3}, \tag{25}\] \[d_{2} =\frac{4}{\tilde{\Lambda}}\sqrt{\frac{\bar{g}_{2}}{3}}\cos\left( \frac{1}{3}\arccos\left(\frac{3\bar{g}_{3}}{\bar{g}_{2}}\sqrt{\frac{3}{\bar{ g}_{2}}}\right)-\frac{2\pi}{3}\right)+\frac{r_{++}+r_{+}+r_{3}}{3},\] (26) \[d_{3} =\frac{4}{\tilde{\Lambda}}\sqrt{\frac{\bar{g}_{2}}{3}}\cos\left( \frac{1}{3}\arccos\left(\frac{3\bar{g}_{3}}{\bar{g}_{2}}\sqrt{\frac{3}{\bar{ g}_{2}}}\right)\right)+\frac{r_{++}+r_{+}+r_{3}}{3}, \tag{27}\]
where
\[\bar{g}_{2} =\frac{\tilde{\Lambda}}{12}\left[\left(r_{++}+r_{+}+r_{3}\right) ^{2}\tilde{\Lambda}-3\left(E^{2}+r_{+}r_{++}\tilde{\Lambda}+r_{3}\left(r_{+}+ r_{++}\right)\tilde{\Lambda}\right)\right], \tag{28a}\] \[\bar{g}_{3} =-\frac{\tilde{\Lambda}^{2}}{16}\left[\frac{1}{3}\left(r_{++}+r_ {+}+r_{3}\right)\left(E^{2}+r_{+}r_{++}\tilde{\Lambda}+r_{3}\left(r_{+}+r_{++} \right)\tilde{\Lambda}\right)-r_{3}r_{+}r_{++}\tilde{\Lambda}-\frac{2}{27} \left(r_{++}+r_{+}+r_{3}\right)^{3}\tilde{\Lambda}\right]. \tag{28b}\]
One can verify that \(0<d_{2}<d_{3}\) and \(d_{1}<0\). Hence, we can assign \(d_{f}\equiv d_{2}\) and \(d_{s}\equiv d_{3}\) at which, the frontal scatterings occur. This way, the characteristic polynomial can be recast as
\[\mathfrak{p}_{3}(r)=\tilde{\Lambda}\left(r-d_{s}\right)\left(r-d_{f}\right) \left(r-d_{1}\right). \tag{29}\]
By taking advantage of this simple form in the case of the FSFK at \(r=d_{s}\), the equation of motion (22) leads to a degenerate hyper-elliptic integral which yields the solution (see appendix A)
\[\tau(r)=\frac{2d_{s}}{\sqrt{\tilde{\Lambda}\ell^{2}}}\sqrt{1-\frac{d_{s}}{r}}F_{D }^{(3)}\left(\frac{1}{2},b_{1},b_{2},1;\frac{3}{2};c_{1},c_{2},1-\frac{d_{s}}{r }\right), \tag{30}\]
where \(\ell^{2}=(d_{s}-d_{f})(d_{s}-d_{1})\), and \(F_{D}^{(3)}\) is the incomplete 2-variable Lauricella hypergeometric function, which here can be given in terms of the one-dimensional Euler-type integral [65; 66]
\[\int_{0}^{1-d_{s}/r}z^{-\frac{1}{2}}(1-z)^{-1}\prod_{i=1}^{2}(1-c_{i}z)^{-b_{i}}\,\mathrm{d}z=2\sqrt{1-\frac{d_{s}}{r}}\,F_{D}^{(3)}\left(\frac{1}{2},b_{1},b_{2},1;\frac{3}{2};c_{1},c_{2},1-\frac{d_{s}}{r}\right), \tag{31}\]
with \(b_{1}=b_{2}=1/2\), and
\[c_{1} =-\frac{d_{f}}{d_{s}-d_{f}}, \tag{32a}\] \[c_{2} =-\frac{d_{1}}{d_{s}-d_{1}}. \tag{32b}\]
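The representation (30)-(32) can be cross-checked numerically: the proper time obtained by direct quadrature of eq. (22) must coincide with the Euler-type integral (31) that defines the incomplete Lauricella function. A minimal scipy sketch is given below (the substitutions used to regularize the integrable endpoint singularities are for numerical convenience only):

```python
import numpy as np
from scipy.integrate import quad

eps, lam, E2 = 0.02, 3e-4, 0.4
d1, d_f, d_s = sorted(np.roots([lam, eps, E2 - 1.0 - 3.0 * eps, 2.0]).real)
ell2 = (d_s - d_f) * (d_s - d1)
c1, c2 = -d_f / (d_s - d_f), -d1 / (d_s - d1)

def tau_radial(r):
    # direct quadrature of eq. (22), with the substitution r' = d_s + u^2
    integrand = lambda u: 2.0 * np.sqrt((d_s + u ** 2) /
        (lam * (u ** 2 + d_s - d_f) * (u ** 2 + d_s - d1)))
    return quad(integrand, 0.0, np.sqrt(r - d_s))[0]

def tau_euler(r):
    # Euler-type integral (31) underlying F_D^(3), with the substitution z = s^2
    integrand = lambda s: 2.0 / ((1.0 - s ** 2) *
        np.sqrt((1.0 - c1 * s ** 2) * (1.0 - c2 * s ** 2)))
    return d_s / np.sqrt(lam * ell2) * quad(integrand, 0.0, np.sqrt(1.0 - d_s / r))[0]

r = 25.0                                   # any radius with d_s < r < r_++
print(tau_radial(r), tau_euler(r))         # the two evaluations coincide
```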
The solution given in Eq. (30) relates to the perception of comoving observers with the radial geodesics in the course of the FSFK. Note that, although expressing the solution in the form (30) is brief and aesthetically pleasant, nevertheless, the equation of motion (22) can still be solved in terms of ordinary elliptic integrals and Jacobi elliptic functions (see appendix B). To the distant observers, the radial evolution of the coordinate time is obtained by solving the degenerate hyper-elliptic integral resulting from the equation of motion (23), which yields (see Eq. (101))
\[t(r)=-\frac{E}{2d_{s}^{2}\bar{\ell}^{3}\sqrt{\tilde{\Lambda}^{3}\bar{\ell}^{2} }}\left(1-\frac{d_{s}}{r}\right)^{2}F_{D}^{(6)}\left(2,1,1,1,\frac{1}{2}, \frac{1}{2},\frac{1}{2};3;\bar{c}_{1},\bar{c}_{2},\bar{c}_{3},\bar{c}_{4},\bar {c}_{5},1-\frac{d_{s}}{r}\right), \tag{33}\]
in which \(\bar{\ell}^{3}=-d_{s}^{-3}(r_{++}-d_{s})(d_{s}-r_{+})(d_{s}-r_{3}),\bar{\ell} ^{2}=d_{s}^{-2}(d_{s}-d_{f})(d_{s}-d_{1})\), and
\[\bar{c}_{1} =-\frac{r_{++}}{d_{s}-r_{++}}, \tag{34a}\] \[\bar{c}_{2} =-\frac{r_{+}}{d_{s}-r_{+}},\] (34b) \[\bar{c}_{3} =-\frac{r_{3}}{d_{s}-r_{3}},\] (34c) \[\bar{c}_{4} =-\frac{d_{f}}{d_{s}-d_{f}},\] (34d) \[\bar{c}_{5} =-\frac{d_{1}}{d_{s}-d_{1}}. \tag{34e}\]
Similar to the previous case, the equation of motion can be solved in terms of elliptic integrals, as explained in detail in appendix B. In Fig. 3, the radial profiles of the time parameters have been plotted for the FSFK and FSSK, based on the solutions in Eqs. (30) and (33). As can be seen, the \(\tau\)-profile crosses the horizons in each case, while this never happens for the \(t\)-profile, which instead shows an asymptotic behavior at the horizons. This highlights the fact that, to distant observers, it takes an infinite time for infalling particles to pass the horizons.
### CROFK and CROSK
In this case, the characteristic polynomial in Eq. (24) can be recast as
\[\mathfrak{p}_{3}(r)=\tilde{\Lambda}\left(r-d_{u}\right)^{2}\left(r-d_{1} \right), \tag{35}\]
given \(d_{u}\) in Eq. (20). We can, hence, divide the space into the two regions (I) and (II) which distinguish the fates that occur to the test particles approaching the critical radius \(d_{u}\) from either \(d_{i}\) or \(d_{0}\). Respectively, they correspond to the CROFK and CROSK.
Now solving the radial equation of motion (22) for the proper time, these two regions are distinguished by the solutions
\[\tau_{\rm I}(r) = \pm\frac{1}{\sqrt{\tilde{\Lambda}}}\left[\tau_{A}(r,d_{u})-\tau_{A }(d_{i},d_{u})\right], \tag{36}\] \[\tau_{\rm II}(r) = \mp\frac{1}{\sqrt{\tilde{\Lambda}}}\left[\tau_{A}(r,d_{u})-\tau_{ A}(d_{0},d_{u})\right], \tag{37}\]
in which
\[\tau_{A}(r,d_{u})=2\,{\rm arctanh}\left(\sqrt{\frac{r}{r-d_{1}}}\,\right)-2 \sqrt{\frac{d_{u}}{d_{u}-d_{1}}}\,{\rm arctanh}\left(\frac{r-d_{u}+\sqrt{r(r- d_{1})}}{\sqrt{d_{u}(d_{u}-d_{1})}}\right). \tag{38}\]
This is while for the distant observers, the equation of motion (23) provides the solutions
\[t_{\rm I}(r) = \pm\frac{E_{u}}{\sqrt{\tilde{\Lambda}^{3}}}\sum_{n=1}^{4}\varpi_ {n}\left[t_{n}(r)-t_{n}(d_{i})\right], \tag{39}\] \[t_{\rm II}(r) = \mp\frac{E_{u}}{\sqrt{\tilde{\Lambda}^{3}}}\sum_{n=1}^{4}\varpi_ {n}\left[t_{n}(r)-t_{n}(d_{0})\right], \tag{40}\]
for the aforementioned regions, where
\[t_{1}(r) ={\rm arctanh}\left(\sqrt{\frac{r_{++}-d_{1}}{r_{++}}}\sqrt{ \frac{r}{r-d_{1}}}\right), \tag{41a}\] \[t_{2}(r) ={\rm arctanh}\left(\sqrt{\frac{r_{+}-d_{1}}{r_{+}}}\sqrt{\frac {r}{r-d_{1}}}\right),\] (41b) \[t_{3}(r) ={\rm arctanh}\left(\sqrt{\frac{d_{u}-d_{1}}{d_{u}}}\sqrt{\frac {r}{r-d_{1}}}\right),\] (41c) \[t_{4}(r) ={\rm arctanh}\left(\sqrt{\frac{r_{3}-d_{1}}{r_{3}}}\sqrt{\frac {r}{r-d_{1}}}\right), \tag{41d}\]
and
\[\varpi_{1} =\frac{2\sqrt{r_{++}^{3}}}{(r_{++}-r_{3})(r_{++}-r_{+})(r_{++}-d_{u})\sqrt{r_{++}-d_{1}}}, \tag{42a}\] \[\varpi_{2} =\frac{2\sqrt{r_{+}^{3}}}{(r_{++}-r_{+})(r_{+}-r_{3})(d_{u}-r_{+})\sqrt{r_{+}-d_{1}}}, \tag{42b}\] \[\varpi_{3} =\frac{2\sqrt{d_{u}^{3}}}{(r_{++}-d_{u})(d_{u}-r_{+})(d_{u}-r_{3})\sqrt{d_{u}-d_{1}}}, \tag{42c}\] \[\varpi_{4} =-\frac{2\sqrt{r_{3}^{3}}}{(r_{++}-r_{3})(r_{+}-r_{3})(d_{u}-r_{3})\sqrt{d_{1}-r_{3}}}. \tag{42d}\]
Figure 3: The plots of (a) the FSFK, and (b) the FSSK, plotted for \(E^{2}=0.4\), \(\epsilon=0.02\) and \(\tilde{\Lambda}=3\times 10^{-4}\). The corresponding initial points are \(d_{s}\) and \(d_{f}\), as indicated in Fig. 2. The thin curves show the radial profile of \(\tau(r)\), whereas the thick ones correspond to that for \(t(r)\).
The radial profiles of the time coordinates have been plotted in Fig. 4, in the contexts of the CROFK and CROSK, within the discussed regions and based on the initial points of approach.
In this section, we studied the motion of particles with zero initial angular momentum. We classified the orbits and obtained the fully analytical solutions to the equations of motion. On the other hand, the more general types of orbits occur when the particles approach the black hole with non-zero initial angular momentum. Hence, in the next section, we proceed with our discussion by studying angular geodesics.
Figure 4: The radial profiles of the temporal coordinates for the CROFK and CROSK, plotted for \(\epsilon=0.02\) and \(\tilde{\Lambda}=3\times 10^{-4}\), which according to Fig. 2 provides \(d_{u}=8.86\) and \(E_{u}^{2}=0.63\). These orbits can be distinguished in the regions (I) and (II), for which, the initial points of approach are \(d_{i}=15\) and \(d_{0}=5\). The thin and thick curves correspond, respectively, to the behaviors of \(\tau(r)\) and \(t(r)\).
## V Angular motion
Test particles that approach the black hole with non-zero initial angular momentum (i.e. \(L\neq 0\)) travel on angular geodesics, which exhibit a greater diversity and are of particular importance. In this section, we perform an analytical study of the different types of angular motion around the SDdS black hole, by solving the equation of motion (18). These orbits are classified by means of the effective potential (19), whose radial profile has been plotted in Fig. 5 for different values of the test particle's angular momentum. Each of the profiles possesses a maximum point, at which the orbits may become unstable. By raising the initial angular momentum, the height of this maximum increases, and the profile becomes steeper beyond this point. As indicated in the right panel of Fig. 5, the orbits may encounter different turning points \(r_{t}\), depending on their initial energy values, that satisfy \(E^{2}=V(r_{t})\). According to Fig. 5(b), circular orbits happen at the maximum, where \(r_{t}=r_{U}\), and orbits of the first and second kinds (OFK and OSK) occur, respectively, at \(r_{t}=r_{S}\) and \(r_{t}=r_{F}\), for which \(E^{2}<E_{U}^{2}\). Once \(E^{2}>E_{U}^{2}\), the trajectories are captured by the black hole. Note that, since the effective potential does not have any minima, the SDdS black hole is not capable of forming an accretion disk, which requires the availability of innermost stable circular orbits (ISCO). However, the spirally infalling particles can be detected by means of their direct emission before being devoured into the event horizon.
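The quantities quoted for Fig. 5(b) can be reproduced numerically from the effective potential (19) alone; the following sketch (numpy/scipy, with bracketing intervals chosen by inspection of the potential) locates the maximum and the two turning points:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

eps, lam, L, E2 = 0.02, 3e-4, 20.0, 10.0
f = lambda r: 1 - 2 / r + (3 - r) * eps - lam * r ** 2
V = lambda r: f(r) * (1 + L ** 2 / r ** 2)          # effective potential, eq. (19)

res = minimize_scalar(lambda r: -V(r), bounds=(2.0, 10.0), method="bounded")
r_U, E2_U = res.x, V(res.x)                          # ~2.93 and ~15.05 (cf. Fig. 5(b))

r_F = brentq(lambda r: V(r) - E2, 2.0, r_U)          # inner turning point, ~2.25
r_S = brentq(lambda r: V(r) - E2, r_U, 10.0)         # outer turning point, ~4.77
print(r_U, E2_U, r_F, r_S)
```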
### Circular orbits
The circular orbit occurs when the effective potential reaches its maximum, at which \(V^{\prime}(r)=0\). This equation together with the condition \(V(r)=E_{U}^{2}\) (or \(\dot{r}=0\) in Eq. (16)), yields
\[E_{U}(r)^{2} =\frac{2}{r}\frac{\left[\epsilon r^{2}-(3\epsilon+1)r+\tilde{\Lambda}r^{3}+2\right]^{2}}{2(3\epsilon+1)r-\epsilon r^{2}-6}, \tag{43}\] \[L_{U}(r) =\sqrt{\frac{r^{2}\left(2\tilde{\Lambda}r^{3}+\epsilon r^{2}-2\right)}{\epsilon r^{2}-6\epsilon r-2r+6}}. \tag{44}\]
In Fig. 6, the radial profiles of the above quantities are shown for the specific case of the effective potential in Fig. 5(b). As expected, when approaching the critical radius \(r_{U}\), the profiles increase sharply until they reach the values \(E_{U}^{2}=V(r_{U})\) and \(L_{U}=L\) (i.e. the critical energy and the initial angular momentum). In contrast, when receding from \(r_{U}\), the energy decreases and approaches its value at the vicinity of the cosmological horizon, whereas the angular momentum falls rapidly and vanishes at the radial distance of unstable radial orbits, \(d_{u}\). Furthermore, to show the dependence of the above profiles on variations in the running parameter \(\epsilon\), in Fig. 7 we present three-dimensional plots of \(E_{U}^{2}(r)\) and \(L_{U}(r)\) for the same range of \(\epsilon\) as in Fig. 1.
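As a quick numerical illustration (our own helper script, not part of the original text), Eqs. (43) and (44) can be evaluated directly; the parameter values \(\epsilon=0.02\) and \(\tilde{\Lambda}=3\times 10^{-4}\) follow Fig. 5(b), while the function names are ours.

```python
import numpy as np

eps, Lam = 0.02, 3e-4    # running parameter and dimensionless cosmological constant of Fig. 5(b)

def E_U2(r):
    """Squared energy of circular orbits, Eq. (43)."""
    num = (eps * r**2 - (3.0 * eps + 1.0) * r + Lam * r**3 + 2.0) ** 2
    den = 2.0 * (3.0 * eps + 1.0) * r - eps * r**2 - 6.0
    return (2.0 / r) * num / den

def L_U(r):
    """Angular momentum of circular orbits, Eq. (44)."""
    num = r**2 * (2.0 * Lam * r**3 + eps * r**2 - 2.0)
    den = eps * r**2 - 6.0 * eps * r - 2.0 * r + 6.0
    return np.sqrt(num / den)

r_U = 2.93               # circular-orbit radius quoted for Fig. 5(b)
print(f"E_U^2(r_U) = {E_U2(r_U):.2f},  L_U(r_U) = {L_U(r_U):.2f}")
```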
#### iv.1.1 Stability of the orbits
Let us rewrite the equation of motion (18) as
\[\left(\frac{\mathrm{d}r}{\mathrm{d}\phi}\right)^{2}=\frac{\mathcal{P}_{6}(r)}{ L^{2}}, \tag{45}\]
in which, the characteristic polynomial is given as
\[\mathcal{P}_{6}(r)=\tilde{\Lambda}r^{6}+\epsilon r^{5}+\left(L^{2}\tilde{\Lambda}+E^{2}-3\epsilon-1\right)r^{4}+\left(2+\epsilon L^{2}\right)r^{3}-(1+3\epsilon)L^{2}r^{2}+2L^{2}r. \tag{46}\]
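A short numerical sketch (our own code, using the parameter values of Fig. 5(b) with \(E^{2}=10\)) shows how the turning points and the sign of \(\mathcal{P}_{6}^{\prime\prime}\) discussed below can be extracted from Eq. (46):

```python
import numpy as np

eps, Lam, L, E2 = 0.02, 3e-4, 20.0, 10.0     # parameters of Fig. 5(b) with E^2 = 10

# Coefficients of P6(r), Eq. (46), ordered from r^6 down to the constant term
P6 = np.poly1d([Lam, eps, L**2 * Lam + E2 - 3.0 * eps - 1.0,
                2.0 + eps * L**2, -(1.0 + 3.0 * eps) * L**2, 2.0 * L**2, 0.0])

turning_points = sorted(z.real for z in P6.roots if abs(z.imag) < 1e-8 and z.real > 0)
print("positive real roots of P6:", [round(z, 2) for z in turning_points])

r_U = 2.93                                   # circular-orbit radius quoted in the caption of Fig. 5(b)
print("P6''(r_U) =", P6.deriv(2)(r_U))       # its sign enters the stability criterion discussed below
```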
Figure 5: The radial profiles of the angular effective potential, plotted for \(\epsilon=0.02\) and \(\tilde{\Lambda}=3\times 10^{-4}\), and for (a) four different initial angular momenta, and (b) the particular case of \(L=20\). Accordingly, the critical radius is \(r_{U}=2.93\), corresponding to the potential’s extremum \(V(r_{U})=15.05=E_{U}^{2}\), and the two turning points \(r_{S}=4.77\) and \(r_{F}=2.25\) are encountered for \(E^{2}=10\).
This way, the turning points of the angular motion are determined by means of the equation \(\mathcal{P}_{6}(r)=0\). Accordingly, the orbits can become circular at a turning point, where \(\mathcal{P}_{6}(r)=\mathcal{P}_{6}^{\prime}(r)=0\), and they are marginally stable at that point once the extra condition \(\mathcal{P}_{6}^{\prime\prime}(r)=0\) is satisfied. Hence, the circular orbits are stable (unstable) when \(\mathcal{P}_{6}^{\prime\prime}(r)>0\) (\(\mathcal{P}_{6}^{\prime\prime}(r)<0\)). In Fig. 8, the behavior of the characteristic polynomial and its derivatives is plotted in the vicinity of the radius of circular orbits \(r_{U}\), for the specific case of Fig. 5(b). As can be discerned from the diagram, in the vicinity of the radius of circular orbits we have \(\mathcal{P}_{6}^{\prime\prime}(r)>0\), which indicates that the circular orbits at this radius possess a certain degree of stability. This stability stems from the finite curve width at the tip of the effective potential, which is non-zero for all values of the test particle's initial angular momentum. However, to find the extent to which the circular orbits are stable, one needs to calculate the sensitivity of the circular orbits to perturbations along the radial axis. This way, a limit can be identified beyond which the circular orbits become unstable. Such a limit can be obtained in terms of the _epicyclic frequency_ \(\Omega_{r}\), which is the frequency of oscillations of circularly orbiting particles along the radial direction [67] (see also the review in Ref. [68]). In the case of the SDdS black hole, this frequency can be expressed as [69]
\[\Omega_{r}^{2}(r)=-\frac{1}{2g_{rr}}V^{\prime\prime}(r), \tag{47}\]
Figure 6: The radial behavior of \(E_{U}^{2}\) and \(L_{U}\), plotted for \(\epsilon=0.02\) and \(\tilde{\Lambda}=3\times 10^{-4}\). By approaching \(r_{U}\), the profiles increase to \(E_{U}^{2}=V(r_{U})=15.05\) and \(L_{U}=20\), which are the essential values of energy and angular momentum that construct the effective potential and its maximum as given in Fig. 5(b). Moving away from \(r_{U}\), the energy profile falls intensely and then continues to decrease smoothly after passing its value \(E_{u}^{2}=0.63\) at the radius of critical radial orbits \(d_{u}=8.86\), where the angular momentum vanishes.
which by means of Eqs. (44) and (19), yields
\[\Omega_{r}^{2}(r)=\frac{8r\left\{-3\tilde{\Lambda}^{2}\left[r(2r-15)+ 30\right]r^{5}+2\tilde{\Lambda}\left[r\left(r^{2}-9r+27\right)-18\right]r^{2}+( r-6)^{2}\right\}-288}{(6-2r)^{3}\,r^{2}\left(\tilde{\Lambda}r^{3}-r+2\right)}\\ +\frac{\epsilon}{2(r-3)^{4}r\left(\tilde{\Lambda}r^{3}-r+2 \right)^{2}}\left[10\tilde{\Lambda}^{3}r^{12}-120\tilde{\Lambda}^{3}r^{11}+513 \tilde{\Lambda}^{3}r^{10}-16\tilde{\Lambda}^{2}r^{10}-810\tilde{\Lambda}^{3}r^ {9}+234\tilde{\Lambda}^{2}r^{9}\right.\\ \left.-1305\tilde{\Lambda}^{2}r^{8}-2\tilde{\Lambda}r^{8}+3294 \tilde{\Lambda}^{2}r^{7}+20\tilde{\Lambda}r^{7}-2916\tilde{\Lambda}^{2}r^{6}-5 7\tilde{\Lambda}r^{6}-54\tilde{\Lambda}r^{5}\right.\\ \left.+2r^{5}+216\tilde{\Lambda}r^{4}-31r^{4}+218r^{3}-708r^{2}+ 1080r-648\right]+\mathcal{O}\left(\epsilon^{2}\right). \tag{48}\]
The radial behavior of the epicyclic frequency for the circular orbits at the radius \(r_{U}\) of the effective potential in Fig. 5(b) is demonstrated in Fig. 9. As can be observed from the figure, the frequency falls rapidly from its high values at \(r_{U}\) and tends to zero as we recede from it. As long as \(\Omega_{r}^{2}\neq 0\), we can expect stability of circular orbits in the vicinity of \(r_{U}\). In this sense, the stability domain for this particular case is within \(r_{U}\leq r<3\). Beyond this region, the particles do not travel on stable orbits and escape from the black hole (see the forthcoming sections).
### OFK and the scattering zone
Once the test particles are subjected to the condition \(E^{2}<E_{U}^{2}\), they encounter the two turning points \(r_{S}\) and \(r_{F}\), while approaching the black hole (see Fig. 5b). In this sense, the characteristic polynomial (46) can be recast as
\[\mathcal{P}_{6}(r)=\tilde{\Lambda}r\left(r-r_{S}\right)\left(r-r_{F}\right) \left(r-r_{4}\right)\left(r-r_{5}\right)\left(r-r_{5}^{*}\right), \tag{49}\]
in which \(r_{4}<0\) and \(r_{5}\in\mathbb{C}\). The test particles approaching from \(r_{S}\), experience a hyperbolic motion and then escape from the black hole. This scattering phenomenon occurs in the context of the OFK. However, to obtain the explicit solution to the angular equation of motion (45), we encounter the inversion of the hyper-elliptic integral
\[\phi-\phi_{0}=L\int_{r}^{r_{S}}\frac{\mathrm{d}r}{\sqrt{\mathcal{P}_{6}(r)}}, \tag{50}\]
with \(\phi_{0}\) being the initial azimuth angle, which confronts us with a special case of the Jacobi inversion problem. However, before proceeding with the calculation of the inversion, let us provide an analytical expression for \(\phi(r)\) by direct integration of Eq. (50). This solution is of importance when the deflection angle of scattered particles is of interest. Now, considering the expression (49) and using the method given in appendix A, we obtain the analytical solution
\[\phi(r)=\phi_{0}-\frac{Lr_{S}}{2\sqrt{\tilde{\Lambda}\mathscr{F}^{4}}}\left( 1-\frac{r_{S}}{r}\right)^{2}F_{D}^{(5)}\left(2,\frac{1}{2},\frac{1}{2},\frac{ 1}{2},\frac{1}{2},\frac{1}{2};3;\delta_{1},\delta_{2},\delta_{3},\delta_{4},1- \frac{r_{S}}{r}\right), \tag{51}\]
where \(\mathscr{F}^{4}=r_{S}r_{F}^{-2}r_{4}^{-1}|r_{5}|^{2}(r_{S}-r_{F})(r_{F}-r_{4}) (r_{S}-r_{5})(r_{S}-r_{5}^{*})\), and
\[\delta_{1} =\frac{r_{S}}{r_{S}-r_{F}}, \tag{52a}\] \[\delta_{2} =\frac{r_{S}}{r_{S}-r_{4}},\] (52b) \[\delta_{3} =\frac{r_{S}}{r_{S}-r_{5}}=\delta_{4}^{*}. \tag{52c}\]
Note that, since the integral equation (50) is generically hyper-elliptic, it cannot be solved explicitly in terms of common elliptic integrals. Nevertheless, under some circumstances, it could be reduced to a degenerate hyper-elliptic integral, and be solved in the same way as for the equation of motion (22) (see appendix B).
#### v.2.1 The scattering angle
After reaching the point of closest approach \(r_{S}\), particles on the OFK are scattered by the black hole. As in the case of the deflection of light in black hole spacetimes, for an observer located at the radial distance \(r_{\mathrm{O}}\), massive particles are scattered through the angle \(\Theta=2\phi_{\mathrm{O}}-\pi\) [70], in which \(\phi_{\mathrm{O}}=\phi(r_{\mathrm{O}})\) is obtained by means of Eq. (51). In Fig. 10, we show the change of the scattering angle \(\Theta\) with respect to variations in the energy in the domain \(0<E^{2}<E_{U}^{2}\). In order to generate this plot, a set of \((E_{S}^{2},r_{S})\) pairs was generated in the context of the effective potential in Fig. 5(b), and then, by exploiting Eq. (51), the values of \(\phi_{\mathrm{O}}\) and their corresponding scattering angles were calculated.
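The same curve can be cross-checked without the Lauricella function by integrating Eq. (50) numerically. The sketch below is our own verification code (the observer radius \(r_{\mathrm{O}}=28\) follows the caption of Fig. 10; the chosen energies are illustrative):

```python
import numpy as np
from scipy.integrate import quad

eps, Lam, L, r_obs = 0.02, 3e-4, 20.0, 28.0

def P6(r, E2):
    """Characteristic polynomial of Eq. (46)."""
    return (Lam * r**6 + eps * r**5 + (L**2 * Lam + E2 - 3 * eps - 1) * r**4
            + (2 + eps * L**2) * r**3 - (1 + 3 * eps) * L**2 * r**2 + 2 * L**2 * r)

def scattering_angle(E2):
    # point of closest approach r_S = largest positive root of P6 below the observer
    roots = np.roots([Lam, eps, L**2 * Lam + E2 - 3 * eps - 1,
                      2 + eps * L**2, -(1 + 3 * eps) * L**2, 2 * L**2, 0.0])
    r_S = max(z.real for z in roots if abs(z.imag) < 1e-8 and 0 < z.real < r_obs)
    # substitute r = r_S + t^2 so the 1/sqrt turning-point singularity of Eq. (50) disappears
    integrand = lambda t: 2.0 * L * t / np.sqrt(P6(r_S + t * t, E2))
    phi_O, _ = quad(integrand, 1e-9, np.sqrt(r_obs - r_S))
    return 2.0 * phi_O - np.pi

for E2 in (1.0, 5.0, 10.0, 14.7):
    print(f"E^2 = {E2:5.2f}  ->  Theta = {scattering_angle(E2):7.3f} rad")
```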
### Analytical solutions for the orbits
To present a full study of the possible orbits of particles around the SDdS black hole, here we proceed with constructing a set of exact analytical solutions to the angular equation of motion, which is capable of describing all kinds of orbits in the spacetime geometry. Applying the change of variable \(r=1/u\) to Eq. (45), we get
\[\phi-\phi_{0}=\int_{u_{0}}^{u}\frac{u\,\mathrm{d}u}{\sqrt{\mathcal{P}_{5}(u)}}, \tag{53}\]
in which \(u_{0}\) corresponds to an initial point of approach located at \((r_{0},\phi_{0})\), and
\[\mathcal{P}_{5}(u)=\tilde{\Lambda}\mathscr{L}+\epsilon\mathscr{L}u+\left[\tilde{\Lambda}+\left(E^{2}-3\epsilon-1\right)\mathscr{L}\right]u^{2}+\left(2\mathscr{L}+\epsilon\right)u^{3}-\left(1+3\epsilon\right)u^{4}+2u^{5}, \tag{54}\]
where we have defined \(\mathscr{L}=1/L^{2}\). The Eq. (53) includes a hyper-elliptic integral, for the inverse of which one must use abelian modular functions of genus two. A rigorous method of dealing with such problems was introduced by Riemann to study the singularities of algebraic curves on a homology surface [10]. He also introduced the concept of Riemannian theta functions [11], which have been used to solve the raised Jacobi inversion problems. Such functions have also proved very useful in mathematics and theoretical physics. The usefulness of modular forms in general relativity and the applications of genus-2 Riemannian theta functions to the hyper-elliptic integrals arising from the geodesic equations in cosmological-constant-induced spacetimes, were first studied in Refs. [14; 15; 16; 17], and then in Refs. [20; 21; 22; 23], where the Jacobi inversion problem is approached by means of the Riemann surfaces of genus two and higher (see also Ref. [71]). In fact, the square root of the integrand of Eq. (53) has two branches and hence, it is not well defined on the complex plane. Furthermore, we must bear in mind that the inverse solution \(u(\phi)\) should not depend on the path of integration [21]. In this sense, if
\[\omega=\oint_{\gamma}\frac{u\,\mathrm{d}u}{\sqrt{\mathcal{P}_{5}(u)}}, \tag{55}\]
is valid for the integration path \(\gamma\), we then expect that
\[\phi-\phi_{0}-\omega=\int_{u_{0}}^{u}\frac{u\,\mathrm{d}u}{\sqrt{\mathcal{P}_ {5}(u)}}, \tag{56}\]
to be valid as well. Accordingly, the solution must respect the condition \(u(\phi)=u(\phi-\omega)\) for all \(\omega\neq 0\). Now defining the algebraic curve \(y^{2}=\mathcal{P}_{5}(u)\), homologous to a genus-2 Riemann surface, one can then introduce the holomorphic
\[\mathrm{d}\zeta_{1}=\frac{\mathrm{d}x}{\sqrt{\mathcal{P}_{5}(x)}},\qquad \mathrm{d}\zeta_{2}=\frac{x\,\mathrm{d}x}{\sqrt{\mathcal{P}_{5}(x)}}, \tag{57}\]
and meromorphic
\[\mathrm{d}\rho_{1}=\frac{\left(2\mathscr{L}-\epsilon\right)x-2\left(1+3 \epsilon\right)x^{2}+6x^{3}}{4\sqrt{\mathcal{P}_{5}(x)}}\mathrm{d}x,\qquad \mathrm{d}\rho_{2}=\frac{4x^{2}}{4\sqrt{\mathcal{P}_{5}(x)}}\mathrm{d}x, \tag{58}\]
differentials in accordance with the expression in Eq. (54), and based on the definitions given in Ref. [72]. We also introduce the real
\[2\omega_{ij}=\oint_{a_{j}}\mathrm{d}\zeta_{i},\qquad 2\eta_{ij}=\oint_{a_{j}} \mathrm{d}\rho_{i}, \tag{59}\]
and imaginary
\[2\bar{\omega}_{ij}=\oint_{b_{j}}\mathrm{d}\zeta_{i},\qquad 2\bar{\eta}_{ij}= \oint_{b_{j}}\mathrm{d}\rho_{i}, \tag{60}\]
Figure 10: The change of \(\Theta\) versus the variations in \(E^{2}\), plotted for \(L=20\), \(\epsilon=0.02\) and \(\tilde{\Lambda}=3\times 10^{-4}\), in the domain \(0\leq E^{2}\leq E_{U}^{2}=15.05\) and for \(r_{\mathrm{O}}=28\). As expected, the scattering angle has a fixed value at the vicinity of the cosmological horizon, where the energy vanishes, and then diverges by approaching the energy of circular orbits.
half-period matrices on the homology basis \((a_{i},b_{i})\) of the Riemann surface. Together, the above quantities generate the symmetric period matrices of the first and second kinds, given respectively as \((2\mathbf{\omega},2\bar{\mathbf{\omega}})\) and \((2\mathbf{\eta},2\bar{\mathbf{\eta}})\). Having this information in hand, the analytical solution to the inversion of the integral equation (56) is obtained as [71, 21]
\[u(\phi)=-\frac{\sigma_{1}}{\sigma_{2}}(\phi_{\sigma}), \tag{61}\]
in which \(\sigma_{i}\) represents the \(i\)th derivative of the 2-variable Kleinian sigma function
\[\sigma(\mathbf{z})=\mathcal{C}e^{\mathbf{z}^{\dagger}\cdot\mathbf{\mathcal{K}}\cdot\mathbf{z}} \,\vartheta\begin{bmatrix}\mathbf{q}\\ \mathbf{q}^{\prime}\end{bmatrix}\left(2\mathbf{\omega}^{-1}\cdot\mathbf{z};\mathbf{T}\right), \tag{62}\]
which is expressed in terms of the genus-2 Riemannian theta function
\[\vartheta\begin{bmatrix}\mathbf{q}\\ \mathbf{q}^{\prime}\end{bmatrix}\left(\mathbf{z};\mathbf{T}\right)=\sum_{\mathbf{m}\in\mathbb{ Z}^{2}}e^{\mathrm{i}\pi\left(\mathbf{m}+\mathbf{q}\right)^{\dagger}\cdot\left[\mathbf{T} \cdot\left(\mathbf{m}+\mathbf{q}\right)+2\mathbf{z}+2\mathbf{q}^{\prime}\right]}, \tag{63}\]
with characteristics \(\mathbf{q}=(0,1/2)^{t}\) and \(\mathbf{q}^{\prime}=(1/2,1/2)^{t}\), where \(\mathbf{T}=\mathbf{\omega}^{-1}\cdot\bar{\mathbf{\omega}}\) is the symmetric Riemann matrix, \(\mathbf{\mathcal{K}}=\mathbf{\eta}\cdot(2\mathbf{\omega})^{-1}\), and the vector of Riemann constants is given as \(\mathbf{K}=\mathbf{q}+\mathbf{q}^{\prime}\cdot\mathbf{T}\). In the above relations, the sign \(\cdot\) indicates the common matrix product. Also, the constant \(\mathcal{C}\) has certain properties and can be obtained explicitly [72]. Moreover, \(\phi_{\sigma}=\left(\mathscr{F}(\phi-\phi_{\mathrm{in}}),\phi-\phi_{\mathrm{in}}\right)^{t}\) with \(\phi_{\mathrm{in}}=\int_{u_{0}}^{\infty}\frac{u\,\mathrm{d}u}{\sqrt{\mathcal{P}_{5}(u)}}\), is a one-dimensional divisor. This sigma divisor can be obtained by means of the extra condition \(\sigma(\phi_{\sigma})=0\), which identifies the function \(\mathscr{F}\). Finally, the angular profile of the radial coordinate is obtained as
\[r(\phi)=-\frac{\sigma_{2}}{\sigma_{1}}(\phi_{\sigma}). \tag{64}\]
In the above solution, the functions \(\sigma_{i}\) depend on the parameters \(\mathbf{\omega}\), \(\mathbf{\eta}\), \(\mathbf{T}\) and \(\phi_{\sigma}\), as well as the characteristic polynomial \(\mathcal{P}_{5}(u)\). Since this solution is valid in all regions of the spacetime, it can be applied to simulate every kind of particle orbit allowed by the effective potential. In Fig. 11, this solution has been used in the domain \(E^{2}<E_{U}^{2}\) to simulate the OFK for the scattered particles. As can be inferred from the diagram, the trajectories are of hyperbolic form in the equatorial plane, and the farther the turning points recede from the radius of circular orbits \(r_{U}\), the more the particles tend to travel on repulsive geodesics. On the other hand, the trajectories become attractive when the turning point approaches \(r_{U}\). For the particular case of \(E_{S_{7}}^{2}\) in the figure, as expected, the particles approach the circular orbits, but they still escape the black hole. This behavior is closely related to that of the critical orbits. As discussed earlier, the same energy levels produce another turning point \(r_{F}\) on the effective potential, from which the test particles can only travel on the OSK and be captured by the black hole. In Fig. 12, the energy level choices of Fig. 11 have been adopted to simulate the OSK on the SDdS black hole. These two kinds of orbits, in fact, confine the orbits that occur in the vicinity of the potential's extremum, which are termed the critical orbits. If particles with \(E^{2}=E_{U}^{2}\) approach this extremum from radial distances \(r_{i}\gtrsim r_{U}\), they finally escape the black hole after performing circular orbits at the radius \(r_{U}\). Such particles travel on the critical orbit of the first kind (COFK). On the other hand, particles of the same energy will travel on the critical orbit of the second kind (COSK) when they approach the extremum from distances \(r_{i}\lesssim r_{U}\), and they finally fall onto the event horizon. In Fig. 13, these two orbits are shown together to compare their behavior. As expected, in both cases there is a certain extension of stability for the circular orbits around \(r_{U}\), as discussed in subsection V.1.1. Finally, when \(E^{2}>E_{U}^{2}\), the test particles that approach from a radial distance \(r_{i}\) do not encounter any turning points and hence have no other choice but to fall onto the event horizon. This way, the capture zone of the black hole is identified. In Fig. 14, some examples of trajectories captured by the SDdS black hole are demonstrated. As can be observed, the closer the energy of the particles is to \(E_{U}\), the more they tend to follow spiral orbits before being captured by the black hole. In this sense, at energy values close to \(E_{U}\), the particles form an unstable circular orbit around \(r_{U}\) and then fall inexorably onto the event horizon.
In this section, we presented a full study of the motion of particles with non-zero initial angular momentum and their possible types of orbits. Accordingly, the particles are either scattered away from or captured by the black hole, or they perform circular orbits with limited stability. Since all kinds of possible particle orbits have now been studied, we close our discussion at this point and summarize our results in the next section.
## VI Summary and conclusions
In this work, we have focused on the exact analytic solutions of the equations of motion for massive particles moving in the exterior geometry of a four-dimensional SD black hole associated with a positive cosmological constant. In particular, we studied the exact analytic solutions of the equations of motion that arise for the radial and angular motion around an SDdS black hole
Figure 11: The OFK plotted for \(\epsilon=0.02\), \(\tilde{\Lambda}=3\times 10^{-4}\), \(\mathscr{L}=0.0025\), \(\phi_{0}=0\), and different energy levels corresponding to particle scattering, in accordance with the effective potential in Fig. 5(**b**). These chosen values are \(E_{S_{1}}^{2}=0.02,E_{S_{2}}^{2}=0.6,E_{S_{3}}^{2}=1,E_{S_{4}}^{2}=5,E_{S_{5}}^ {2}=10,E_{S_{6}}^{2}=13\) and \(E_{S_{7}}^{2}=14.7\), for which the scattering happens at the radii \(r_{S_{1}}=32.95,r_{S_{2}}=23.30,r_{S_{3}}=18.85,r_{S_{4}}=7.57,r_{S_{5}}=4.77, r_{S_{6}}=3.80\) and \(r_{S_{7}}=3.22\) (the dashed circles with the same colors as the trajectory curves).
with small quantum corrections. We first explored the causal structure of the spacetime and identified its event and cosmological horizons. We then pursued a canonical Lagrangian dynamics method to obtain the first-order differential equations of motion. For the case of radial motion, we classified the types of possible orbits in the context of the radial effective potential, and calculated the exact solutions, separately, for each of the cases. We showed that for the frontal radial scatterings, the equations of motion for proper and coordinate time result in degenerate hyper-elliptic integrals, to which, we gave exact analytical solutions in terms of 2-variable and 5-variable indefinite Lauricella hypergeometric functions. We then applied these solutions to simulate the radial orbits for the FSFK and FSSK. We showed that despite the fact that the comoving observers experience crossing
the cosmological and event horizons within a finite amount of time, to the distant observers it takes infinite time for the test particles to pass the horizons. For the case of critical radial orbits, we presented the analytical solutions in terms of hyperbolic functions and showed that the test particles experience two distinct fates, and starting from a critical radius, they either escape to the cosmological horizon or are captured by the black hole. The same scenario holds for the comoving and distant observers. Switching to the study of angular orbits, we argued that the effective potential could offer only certain types of orbits for particles with non-zero initial angular momentum. Since the potential has no minimum, no planetary bound orbits are offered by the black hole. It is, however, important to note that, for the vanishing running parameter, which corresponds to the Schwarzschild-de Sitter black hole, the effective potential acquires a minimum, so that the planetary bound orbits are also possible. Such cases have been studied extensively, for example in Refs. [18; 20; 21; 24]. Furthermore, in Ref. [39], the motion of particles in the exterior geometry of a black hole with a linear quintessential term and cloud of strings has been investigated, where the spacetime metric can mimic the line element (6) with a vanishing cosmological constant. In particular, the term \((3M-r)\epsilon\) acts similar to the combination of cloud of strings and linear quintessence, because of which, the black hole can offer planetary bound orbits. However, as we demonstrated in the previous sections, such a possibility is eluded from the SDdS black hole, for which both the linear and quadratic terms are available in the spacetime metric. On the other hand, the potential's extremum defines a radius, at which, the particles can be on circular orbits with some extent of stability. We calculated the energy and angular momentum of the test particles on such orbits and demonstrated their radial profiles. We also inferred that the circular orbits at the vicinity of the potential's maximum can be stable since the second derivative of the characteristic polynomial is positive in a certain domain. We identified this domain by calculating the epicyclic frequencies of particles on circular orbits, around the potential's maximum. We also paid attention to the scattered trajectories which correspond to particles moving on the OFK. Such orbits occur when the initial energy of the particles is less than that at the potential's maximum, and hence, they can escape from the black hole. We showed that in general, the equation of motion for angular trajectories leads to a hyper-elliptic integral. First, we solved this equation to obtain the radial profile of the azimuth angle. The solution was given in terms of a 4-parameter Lauricella hypergeometric function and was then exploited to calculate the deflection angle of scattered particles. We plotted the changes of this angle in terms of the variations in the test particles' energy and showed that, as expected, it diverges at the vicinity of the energy of circular orbits. To study the behavior of particles on angular geodesics, we then performed an analytical treatment of the equation of motion, which involves the inversion of the included hyper-elliptic integral. This was a particular case of the Jacobi inversion problem, and hence, the process of obtaining the solution involved the abelian modular functions of genus two. 
We calculated the holomorphic and meromorphic differentials which are indispensable in the identification of the period matrices associated with the algebraic curve on the homologous Riemann surface. Accordingly, the general solution for the angular motion was expressed by the Kleinian sigma functions, which are given in terms of the Riemannian theta function of genus two with two-dimensional vectorial characteristics. Based on this solution, the orbits were discussed and simulated in accordance with the classifications offered by the effective potential. We plotted several cases of the OFK for different turning points and, as expected, the orbits shift from being repulsive to being attractive as the turning point approaches the potential's extremum. We also plotted several cases of the OSK. This was followed by demonstrating the critical orbits, which consist of unstable circular orbits that either escape from the black hole or fall onto the event horizon; hence, these two orbits are the upper limits of the OFK and OSK. We finally paid attention to the capture zone, in which incident particles with higher energies fall inexorably onto the black hole. In this sense, the critical orbits form the lower boundary of the capture zone. Note that, since the SDdS black hole is incapable of forming an accretion disk, it cannot be regarded as a realistic astrophysical black hole. However, studies like the one performed in this paper may equip scientists with advanced mathematical tools that pave the way for a rigorous scrutiny of other SD alternatives to general relativistic spacetimes with more similarity to real astrophysical black hole geometries, and for putting them to observational assessment.
## Acknowledgements
The author acknowledges Universidad de Santiago de Chile for financial support through the Proyecto POSTDOC-DICYT, Codigo 042331CM_Postdoc. I would like to thank Angel Rincon for introducing Ref. [59] and the SDdS solution.
## Appendix A Derivation of the radial solution of the FSFK
Applying the change of variable \(r\to x/d_{s}\) to the equation of motion (22), results in the equation
\[\tau(x)=-\frac{d_{s}}{\sqrt{\tilde{\Lambda}}}\int_{1}^{x}\frac{\mathrm{d}x}{\sqrt{x^{2}(1-x)(d_{s}-d_{f}x)(d_{s}-d_{1}x)}}, \tag{10}\]
which contains a degenerate hyper-elliptic integral. A second change of variable \(x\to 1-z\), yields
\[\tau(z)=\frac{d_{s}}{\sqrt{\tilde{\Lambda}}}\int_{0}^{z}\frac{\mathrm{d}z}{ \sqrt{p_{5}(z)}}, \tag{10}\]
in which
\[p_{5}(z) = (1-z)^{2}z(d_{s}-d_{f}[1-z])(d_{s}-d_{1}[1-z]) \tag{11}\] \[= z(1-z)^{2}\left[1+\frac{d_{f}z}{d_{s}-d_{f}}\right](d_{s}-d_{f} )\left[1+\frac{d_{1}z}{d_{s}-d_{1}}\right](d_{s}-d_{1})\] \[= \ell^{2}z(1-z)^{2}(1-c_{1}z)(1-c_{2}z).\]
This helps us recasting Eq. (10) as
\[\tau(z)=\frac{d_{s}}{\sqrt{\tilde{\Lambda}\ell^{2}}}\int_{0}^{z}z^{-\frac{1}{2 }}(1-z)^{-1}(1-c_{1}z)^{-\frac{1}{2}}(1-c_{2}z)^{-\frac{1}{2}}\,\mathrm{d}z. \tag{12}\]
Comparing the above relation to the one-dimensional integral form [66]
\[\int_{0}^{z}z^{a-1}(1-z)^{c-a-1}\prod_{i=1}^{n}(1-\xi_{i}z)^{-b_{i}}\,\mathrm{ d}z=\frac{z^{a}}{a}F_{D}^{(n+1)}\left(a,b_{1},\ldots,b_{n},1+a-c;a+1;\xi_{1}, \ldots,\xi_{n},z\right) \tag{13}\]
of the incomplete \(n\)-variable Lauricella hypergeometric function, provides \(n=2\), \(a=c=b_{1}=b_{2}=1/2\), \(\xi_{1}=c_{1}\) and \(\xi_{2}=c_{2}\).
## Appendix B Expressing some of the solutions in terms of elliptic integrals
Applying the change of variable \(r\to 1/u\), and after some manipulations, the differential equation (22) takes the form
\[\tau(u)=\frac{1}{\sqrt{\psi_{0}\tilde{\Lambda}}}\int_{u}^{u_{s}}\frac{\mathrm{ d}u}{u\sqrt{\left(u_{f}-u\right)\left(u_{s}-u\right)\left(u-u_{1}\right)}}, \tag{14}\]
where \(\psi_{0}=-d_{s}d_{f}d_{1}\), \(u_{s}=1/d_{s}\), \(u_{f}=1/d_{f}\), and \(u_{1}=1/d_{1}\), which respect the hierarchy \(u_{1}<u<u_{s}<u_{f}\). Based on this condition, this integral can be re-expressed as [73]
\[\tau(u)=\frac{1}{\sqrt{\psi_{0}\tilde{\Lambda}}}\frac{\varrho}{u_{s}}\int_{0}^{y_{1}}\frac{\mathrm{dn}^{2}y}{(1-\tilde{\alpha}^{2}\,\mathrm{sn}^{2}y)}\,\mathrm{d}y \tag{15}\]
in which \(\mathrm{sn}\,y\equiv\mathrm{sn}(y,\mathfrak{k})\) and \(\mathrm{dn}\,y\equiv\mathrm{dn}(y,\mathfrak{k})\) are, respectively, the Jacobi elliptic sine function and the Jacobi delta amplitude with the modulus
\[\mathfrak{k}^{2}=\frac{u_{s}-u_{1}}{u_{f}-u_{1}}, \tag{16}\]
and the variable \(y\) is defined in terms of the relation
\[\mathrm{sn}^{2}y=\frac{\left(u_{f}-u_{1}\right)\left(u_{s}-u\right)}{\left(u_ {s}-u_{1}\right)\left(u_{f}-u\right)}. \tag{17}\]
This way, the upper limit of the integral (15) is given by \(\mathrm{sn}\,y_{1}=\sin\varphi\), where
\[\varphi=\mathrm{am}\,y_{1}=\arcsin\left(\mathrm{sn}\,y\right), \tag{18}\]
is the Jacobi amplitude of the functions. Furthermore, we have notated
\[\varrho=\frac{2}{\sqrt{u_{f}-u_{1}}}, \tag{19a}\] \[\tilde{\alpha}^{2}=\frac{\mathfrak{k}^{2}u_{f}}{u_{s}}. \tag{19b}\]
This way, the solution to the integral (24) can be expressed as [73]
\[\tau(u)=\frac{\varrho}{u_{s}\sqrt{\psi_{0}\tilde{\Lambda}}}\frac{\mathfrak{k}^{2}}{ \tilde{\alpha}^{2}}\sum_{j=0}^{1}\frac{\left(\tilde{\alpha}^{2}-\mathfrak{k}^{2 }\right)^{j}}{\mathfrak{k}^{2j}j!\left(1-j\right)!}\mathcal{V}_{j}, \tag{25}\]
in which
\[\mathcal{V}_{0} =F\left(\varphi,\mathfrak{k}\right), \tag{26a}\] \[\mathcal{V}_{1} =\Pi\left(\varphi,\tilde{\alpha}^{2},\mathfrak{k}\right), \tag{26b}\]
are, respectively, the incomplete elliptic integrals of the first and third kind. The same procedure can be pursued for the differential equation (23), which by means of the change of variable \(r\to 1/u\) and partial fraction decomposition, can be recast as
\[t(u)=\frac{E}{\sqrt{\psi_{0}\tilde{\Lambda}^{3}}}\Bigg{[}-\frac{ r_{++}}{\psi_{1}}\int_{u}^{u_{s}}\frac{\mathrm{d}u}{\left(u_{++}-u\right) \sqrt{\left(u_{f}-u\right)\left(u_{s}-u\right)\left(u-u_{1}\right)}}\\ +\frac{r_{+}}{\psi_{2}}\int_{u}^{u_{s}}\frac{\mathrm{d}u}{\left( u_{+}-u\right)\sqrt{\left(u_{f}-u\right)\left(u_{s}-u\right)\left(u-u_{1}\right)}}\\ +\frac{r_{3}}{\psi_{3}}\int_{u}^{u_{s}}\frac{\mathrm{d}u}{\left( u_{3}-u\right)\sqrt{\left(u_{f}-u\right)\left(u_{s}-u\right)\left(u-u_{1}\right)}} \Bigg{]}, \tag{27}\]
with \(u_{++}=1/r_{++}\), \(u_{+}=1/r_{+}\) and \(u_{3}=1/r_{3}\), and by defining \(\psi_{1}=r_{++}(r_{++}-r_{+})(r_{++}-r_{3})\), \(\psi_{2}=r_{+}(r_{++}-r_{+})(r_{+}-r_{3})\) and \(\psi_{3}=-r_{3}(r_{++}-r_{3})(r_{+}-r_{3})\). The solution to this equation is given by [73]
\[t(u)=\frac{E}{\sqrt{\psi_{0}\tilde{\Lambda}^{3}}}\Bigg{[}\frac{r_{++}\varrho}{\psi_{1}\left(u_{s}-u_{++}\right)}\frac{\mathfrak{k}^{2}}{\tilde{\alpha}_{++}^{2}}\sum_{j=0}^{1}\frac{\left(\tilde{\alpha}_{++}^{2}-\mathfrak{k}^{2}\right)^{j}}{\mathfrak{k}^{2j}j!\left(1-j\right)!}\mathcal{V}_{++j}\\ +\frac{r_{+}\varrho}{\psi_{2}\left(u_{+}-u_{s}\right)}\frac{\mathfrak{k}^{2}}{\tilde{\alpha}_{+}^{2}}\sum_{j=0}^{1}\frac{\left(\tilde{\alpha}_{+}^{2}-\mathfrak{k}^{2}\right)^{j}}{\mathfrak{k}^{2j}j!\left(1-j\right)!}\mathcal{V}_{+j}\\ -\frac{r_{3}\varrho}{\psi_{3}\left(u_{s}-u_{3}\right)}\frac{\mathfrak{k}^{2}}{\tilde{\alpha}_{3}^{2}}\sum_{j=0}^{1}\frac{\left(\tilde{\alpha}_{3}^{2}-\mathfrak{k}^{2}\right)^{j}}{\mathfrak{k}^{2j}j!\left(1-j\right)!}\mathcal{V}_{3j}\Bigg{]}, \tag{28}\]
in which \(\mathcal{V}_{++j}\), \(\mathcal{V}_{+j}\) and \(\mathcal{V}_{3j}\) have the same expressions as in Eqs. (26), considering the respected exchanges \(\tilde{\alpha}\rightarrow\tilde{\alpha}_{++},\tilde{\alpha}_{+},\tilde{\alpha }_{3}\), where
\[\tilde{\alpha}_{++}^{2}=\frac{\mathfrak{k}^{2}\left(u_{f}-u_{++} \right)}{\left(u_{s}-u_{++}\right)}, \tag{29a}\] \[\tilde{\alpha}_{+}^{2}=\frac{\mathfrak{k}^{2}\left(u_{+}-u_{f} \right)}{\left(u_{+}-u_{s}\right)},\] (29b) \[\tilde{\alpha}_{3}^{2}=\frac{\mathfrak{k}^{2}\left(u_{f}-u_{3} \right)}{\left(u_{s}-u_{3}\right)}. \tag{29c}\]
The integral in Eq. (50) is genuinely hyper-elliptic, and we note that the only way to express the solutions in terms of ordinary elliptic integrals is in one of the limits \(r\gg r_{F}\), \(|r_{4}|\ll 1\), or \(r\gg|r_{5}|\). Under these conditions, the solution of the integral (50) can be obtained in a similar way as in Eq. (25).
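As a numerical sanity check of the above reduction (our own sketch; the radii \(d_{s}\), \(d_{f}\), \(d_{1}\) below are purely illustrative values respecting the stated hierarchy and are not taken from the paper), one can evaluate the proper-time integral (14) by direct quadrature and assemble the modulus and amplitude of Eqs. (16)-(18), from which the elliptic-integral form (25) is built:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipkinc   # incomplete elliptic integral of the first kind F(phi, m=k^2)

d_s, d_f, d_1 = 5.0, 2.0, -10.0                      # illustrative radii with d_1 < 0 < d_f < d_s
u_s, u_f, u_1 = 1.0 / d_s, 1.0 / d_f, 1.0 / d_1      # so that u_1 < u < u_s < u_f
psi0, Lam = -d_s * d_f * d_1, 3e-4

def tau_direct(u):
    """Direct quadrature of Eq. (14), usable as a reference for the closed form (25)."""
    f = lambda v: 1.0 / (v * np.sqrt((u_f - v) * (u_s - v) * (v - u_1)))
    val, _ = quad(f, u, u_s, limit=200)
    return val / np.sqrt(psi0 * Lam)

u = 0.15                                                     # a sample point between the turning points
k2 = (u_s - u_1) / (u_f - u_1)                               # modulus, Eq. (16)
sn2 = (u_f - u_1) * (u_s - u) / ((u_s - u_1) * (u_f - u))    # Eq. (17)
phi = np.arcsin(np.sqrt(sn2))                                # Jacobi amplitude, Eq. (18)
print("tau(u) by quadrature:", tau_direct(u))
print("k^2, phi, F(phi, k) :", k2, phi, ellipkinc(phi, k2))
```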
|
2307.08596 | Omnipotent Adversarial Training in the Wild | Adversarial training is an important topic in robust deep learning, but the
community lacks attention to its practical usage. In this paper, we aim to
resolve a real-world challenge, i.e., training a model on an imbalanced and
noisy dataset to achieve high clean accuracy and adversarial robustness, with
our proposed Omnipotent Adversarial Training (OAT) strategy. OAT consists of
two innovative methodologies to address the imperfection in the training set.
We first introduce an oracle into the adversarial training process to help the
model learn a correct data-label conditional distribution. This
carefully-designed oracle can provide correct label annotations for adversarial
training. We further propose logits adjustment adversarial training to overcome
the data imbalance issue, which can help the model learn a Bayes-optimal
distribution. Our comprehensive evaluation results show that OAT outperforms
other baselines by more than 20% clean accuracy improvement and 10% robust
accuracy improvement under complex combinations of data imbalance and label
noise scenarios. The code can be found in https://github.com/GuanlinLee/OAT. | Guanlin Li, Kangjie Chen, Yuan Xu, Han Qiu, Tianwei Zhang | 2023-07-14T07:09:57Z | http://arxiv.org/abs/2307.08596v2 | # Omnipotent Adversarial Training
###### Abstract
Adversarial training is an important topic in robust deep learning, but the community lacks attention to its practical usage. In this paper, we aim to resolve a real-world application challenge, i.e., training a model on an imbalanced and noisy dataset to achieve high clean accuracy and robustness, with our proposed Omnipotent Adversarial Training (OAT). Our strategy consists of two innovative methodologies to address the label noise and data imbalance in the training set. We first introduce an oracle into the adversarial training process to help the model learn a correct data-label conditional distribution. This carefully-designed oracle can provide correct label annotations for adversarial training. We further propose logits adjustment adversarial training to overcome the data imbalance challenge, which can help the model learn a Bayes-optimal distribution. Our comprehensive evaluation results show that OAT outperforms other baselines by more than 20% clean accuracy improvement and 10% robust accuracy improvement under the complex combinations of data imbalance and label noise scenarios. The code can be found in [https://github.com/GuanlinLee/OAT](https://github.com/GuanlinLee/OAT).
## 1 Introduction
Exploring how to enhance the adversarial robustness of deep learning models has constantly attracted attention from both industry and academia. Adversarial robustness refers to the ability of a deep learning model to resist against adversarial attacks. Madry et al. [32] proposed adversarial training (AT), a popular strategy to improve the model's robustness. Due to its high computational cost, numerous works further proposed computation-friendly AT methods [39, 53] to be applicable to large-scale datasets. Although significant efforts have been devoted to making AT more efficient and practical, there still exists a gap to address the real-world applications. The main obstacle is that these works idealize the dataset as being completely clean and uniformly distributed. However, in real-world scenarios, annotations are often noisy [48, 50] and datasets tend to be long-tailed [30, 46], making these methods less effective.
Label noise is a common occurrence in datasets due to variations in the experience and expertise of data annotators. As not all annotators are experts, error labels are present in many real-world datasets. For example, as reported in [42], the Clothing1M dataset [50] contains about 38.5% noise, and the WebVision dataset [29] was found to have around 20.0% noise. Although some crowdsourcing platforms, like Amazon Mechanical Turk [2], can provide some mechanisms like voting to reduce the ratio of noisy labels in the datasets, it remains challenging to guarantee completely clean label mapping. Consequently, label noise is still an open problem in deep learning model training processes.
On the other hand, data imbalance occurs when it is difficult to collect sufficient samples for several specific classes, leaving these classes under-represented in the dataset [46]. Typically, we call a dataset long-tailed if most of the data belong to several classes, called head classes, and fewer data belong to the other classes, known as tail classes [46]. Given that this is a natural property of the data distribution, it is challenging to create a perfectly balanced dataset in practice. Additionally,
label noise can exacerbate data imbalance by introducing additional noise to the tail classes. Thus, it is important to consider both label noise and data imbalance together when developing a robust deep learning model.
Most existing solutions focus on robust training over clean and balanced datasets. To the best of our knowledge, only two works have examined label noise in the context of adversarial training [14, 23]. However, both of them aim at addressing overfitting issues rather than training models to achieve high robustness on datasets with label noise. Meanwhile, only one published work studies AT on long-tailed datasets [49]. No attention has been given to the joint effects of label noise and data imbalance on model robustness. Actually, label noise and data imbalance influence the training process from two different aspects, i.e., incorrect label mapping and overfitting head classes, respectively. _Existing approaches for either label noise or data imbalance are insufficient to address their joint effects. A combination of [14, 23] and [49] cannot achieve promising results either._ The reason is the poor label refurbishment effectiveness of [14, 23] under massive label noise, which makes the models fail to converge during AT (proved in our experiments in Section 5). In AT, it is more challenging to separate the data with correct and wrong labels and then correct wrong labels based on the model's predictions [27, 40], because the high value of the robust loss [51] and low confidence scores on the training data [47] are consistent across all data and are unrelated to the correctness of labels. On the contrary, in normal training, the model will give higher loss values and lower confidence scores on data with wrong labels. So, simply combining previous methods cannot essentially address the problems, and it is necessary to design a solution dedicated to AT on imbalanced and noisily labeled datasets.
Challenges arise when we train a robust model on a noisy and imbalanced dataset. First, in AT, generating adversarial examples (AEs) relies on the gradients, which are calculated with the label and the model's prediction, to update the perturbation for the target model. With noisy labels, the generated AEs become less reliable, reducing the effectiveness of AT. Additionally, incorrect annotations prevent the model from learning the correct mapping between data and labels, which harms the clean accuracy of the robust model. Second, an imbalanced dataset decreases the model's generalizability and biases the model toward classifying samples into head classes [30]. This can result in poor performance on tail classes and lower overall robustness of the model. Unfortunately, without correct labels, prior solutions for data imbalance cannot work properly, because the label distribution can be misleading.
Therefore, if we can extract data with wrong annotations in the training set and provide correct labels to them with high probability, we will have the opportunity to mitigate the adverse effects of training models under noisy labels. Furthermore, if we can correct the wrong labels, we will recover a correct label distribution, which is helpful to address the overfitting problem caused by data imbalance.
Based on the above insights, we propose a novel training strategy, named **O**mnipotent **A**dversarial **T**raining (OAT), which aims to obtain a robust model trained on a noisy and imbalanced dataset. The proposed OAT is a two-step training scheme, i.e., the oracle training process and the robust model training process. Specifically, in the first step, we introduce an oracle to provide correct annotations for a noisy dataset. Unlike existing label correction methods that rely solely on model predictions [3, 40], we adopt a novel method to predict labels using high-dimensional feature embeddings and a \(k\)-nearest neighbors algorithm. To overcome the data imbalance challenge in the oracle training process, we propose a dataset re-sampling method. Moreover, to further improve the label correction process, we adopt the self-supervised contrastive learning method to train the oracle.
In the second step, to address the data imbalance problem, we introduce the logits adjustment adversarial training, which can help the model learn a Bayes-optimal distribution. By obtaining correct labels from the oracle, we can approximate the true label distribution, which is adopted to adjust the model's predictions, allowing the model to achieve comparable robustness to previous AT methods [49]. Furthermore, we introduce interactions between the oracle and the model to make the model obtain high clean accuracy and robustness even on an imbalanced dataset with massive label noise. Extensive experimental results show that OAT achieves higher clean accuracy and robustness on the noisy and imbalanced training dataset. Overall, our contributions can be summarized as follows.
* We propose the first AT strategy, OAT, aiming to solve a real-world problem, i.e., adversarial training on a noisy and imbalanced dataset.
* OAT outperforms previous works under various practical scenarios. Specifically, it achieves up to 80.72% clean accuracy and 42.84% robust accuracy on a heavy imbalanced dataset with massive label noise, which is about 50% and 20% higher than SOTA methods.
* Our comprehensive experiments can inspire researchers to propose more approaches to minimize the performance gap between ideal datasets and practical datasets.
## 2 Related Works
### Noisy Label Recognition
Label noise is a common threat in practice because the data annotation process heavily depends on the knowledge of the workers. Recently, numerous works aim to address the label noise in image recognition from different perspectives, including new model architectures [43], robust loss functions [45, 52], label correction [23, 36] and sample selection [19]. Specifically, Goldberger et al. [17] proposed a noise adaptation layer to model the label transition pattern with a noise transition matrix. However, the estimation error between the adaptation layer and the real label noise distribution is large when the noise rate in the training set is high, degrading the results. For the robust loss functions, Ghosh et al. [16] proved that the Mean Absolute Error (MAE) loss is robust to label noise, but it harms the model's generalizability. Label correction [23, 36] is another way to address the label noise problem. Existing methods aim to learn the correct label mapping and then correct the wrong labels. Li et al. [27] proposed a sample selection method, adopting two models to adaptively choose samples with smaller loss values as clean data and samples with larger loss values as noisy data. Then, each model predicts a label for the noisy data and provides them to its peer model to learn together with the clean data.
### Long-tailed Recognition
Data imbalance is common in collected large datasets, since data belonging to some categories are naturally rare, e.g., special diseases in medical datasets (Skin-7 [9]), endangered species in animal datasets (iNaturalist 2018 [1]). Such imbalanced data distribution will harm the model's generalizability [5]. Long-tailed recognition is proposed to solve this real-world problem and train models on imbalanced datasets. A straightforward approach is to re-sample the training distribution to make it more balance, such as random under-sampling head classes [31] and random over-sampling tail classes [21]. Recently, a logits adjustment method is proposed [34, 37], solving the dilemma that models lean to classify samples into head classes with high probability.
### Adversarial Training
Adversarial training (AT) [32, 51] is one of the most famous approaches to increase the robustness of models. It generates on-the-fly AEs to train the models. Recently, several works are proposed to promote AT in real-world applications. Zheng et al. [53] proposed an efficient AT method based on the transferability of AEs to reduce the AE generation cost, making it possible to adopt AT on large datasets, such as ImageNet [13]. Researchers also studied the behaviors of models trained on randomly labeled datasets with AT and found that models trained with AT can memorize those random labels [14, 23]. Based on the observation, they proposed new training algorithms to address the overfitting problem, which can also be adopted to train models on noisy datasets. For another practical problem, RoBal [49] is proposed to meet the imbalanced dataset scenario.
To the best of our knowledge, there is no work focusing on training models on both imbalanced and noisy datasets with AT. We step forward to real-world applications and explore this threat model in this paper. Our method combines label refurbishment and distribution re-balancing, achieving state-of-the-art results under different combinations of label noise and data imbalance settings.
## 3 Preliminaries
In the following, we provide the necessary definitions of datasets, label noise, and label distribution before presenting the proposed methods.
For supervised learning algorithms, we consider a dataset with two basic components, i.e., the set of data and the label mapping. We give a formal definition of a dataset1 as follows:
Footnote 1: We leave the open-set problem [44] as future work. In this paper, all data with incorrect labels have correct labels within the label set of the dataset [20].
**Definition 1**: _Suppose a set \(\mathcal{S}\) and a mapping \(\mathcal{A}\) satisfy \(\mathcal{A}(x)\in[C]\), where \(x\in\mathcal{S}\). The tuple \((\mathcal{S},\mathcal{A})\) is called a dataset \(\mathcal{D}(\mathcal{S},\mathcal{A})\). \(C\) represents the number of classes. \(\mathcal{A}(x)\) is the label of data \(x\)._
Clearly, given a set \(\mathcal{S}\) with the cardinality \(|\mathcal{S}|\), and the number of classes is \(C\), where \(|\mathcal{S}|>C\), there are \(C+|\mathcal{S}|!\sum_{i=2}^{C}({C\choose i}{|{S|-1\choose i-1}(i)!})\) different mappings, where \(|\mathcal{S}|!\) and \((i)!\) are the factorial of \(|\mathcal{S}|\) and \(i\). We introduce a set \(\mathfrak{A}\) to represent all possible label mappings \(\mathcal{A}\):
**Definition 2**: _Given a set \(\mathcal{S}\) and the number of classes \(C\), \(\mathfrak{A}\) contains all mappings \(\mathcal{A}\), satisfying \(\mathcal{A}(x)\in[C]\) for \(x\in\mathcal{S}\)._
With set \(\mathfrak{A}\), we can give a special label mapping \(\mathcal{A}_{\text{gt}}\) under certain culture knowledge \(\mathfrak{K}\). Every person with knowledge \(\mathfrak{K}\) will agree with the output of \(\mathcal{A}_{\text{gt}}\) for every \(x\in\mathcal{S}\). Then, we call the dataset \(\mathcal{D}(\mathcal{S},\mathcal{A}_{\text{gt}})\) a clean dataset without label noise. Otherwise, any \(\mathcal{A}\in\mathfrak{A}\) that is not \(\mathcal{A}_{\text{gt}}\) constructs a noisy dataset \(\mathcal{D}(\mathcal{S},\mathcal{A})\). So, whether a dataset contains label noise is depended on \(\mathcal{A}\) and independent of \(\mathcal{S}\). Formally, we can define the noise ratio (NR) of a dataset \(\mathcal{D}(\mathcal{S},\mathcal{A})\) as \(\operatorname{NR}=\frac{\sum_{x\in\mathcal{S}}\mathds{1}(\mathcal{A}(x)!= \mathcal{A}_{\text{gt}}(x))}{|\mathcal{S}|}\), where \(|\mathcal{S}|\) is the number of the data in set \(\mathcal{S}\). With previous definitions, we can give a formal definition of label distribution for a given dataset \(\mathcal{D}(\mathcal{S},\mathcal{A})\).
**Definition 3**: _Given a dataset \(\mathcal{D}(\mathcal{S},\mathcal{A})\), \(N_{i}=\sum_{x\in\mathcal{S}}\mathds{1}(\mathcal{A}(x)=i)\), representing the number of data in the set \(\mathcal{S}\) mapped into class \(i\) by \(\mathcal{A}\)._
In Definition 3, we count the number of data for each class \(i\) based on the output of \(\mathcal{A}\). So, given a dataset \(\mathcal{D}(\mathcal{S},\mathcal{A})\), we can calculate its imbalanced ratio (IR) under \(\mathcal{A}\) and the true imbalanced ratio (\(\operatorname{IR}_{\text{gt}}\)) under \(\mathcal{A}_{\text{gt}}\), with \(\operatorname{IR}=\frac{\min(N_{i})}{\max(N_{i})}\). Usually, if \(\mathcal{A}\neq\mathcal{A}_{\text{gt}}\), the label distributions will be different for the clean dataset and the noisy datasets. We use \(\mathcal{D}\) to represent a dataset when there is no ambiguity in the following sections.
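As a concrete toy illustration (our own example; the label arrays below are not from the paper), both ratios can be computed directly from the label mapping:

```python
import numpy as np

# Toy 3-class example: `given` plays the role of A, `true` the role of A_gt
given = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2])
true  = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])

noise_ratio = np.mean(given != true)            # NR: fraction of samples where A(x) != A_gt(x)
counts = np.bincount(given, minlength=3)        # N_i under the given mapping A
imbalance_ratio = counts.min() / counts.max()   # IR = min_i N_i / max_i N_i

print(f"NR = {noise_ratio:.2f}, N_i = {counts.tolist()}, IR = {imbalance_ratio:.2f}")
```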
In practice, obtaining the mapping \(\mathcal{A}_{\text{gt}}\) requires lots of additional effort, so the dataset owner usually adopts a plausible mapping \(\mathcal{A}\) to approximate the correct mapping, which will introduce label noise into the dataset. Under this situation, both the mapping \(\mathcal{A}_{\text{gt}}\) and the corresponding correct
label distribution are unknown. So, reconstructing a more precise label mapping \(\mathcal{A}^{\prime}\) from the known one \(\mathcal{A}\) to decrease the label noise in the dataset and calculating the correct label distribution are both required to train a model with AT, for AE generation and loss backpropagation.
## 4 Omnipotent Adversarial Training
To address the label noise and imbalanced data distribution problems, we introduce an oracle \(\mathcal{O}\) into the training process to improve the robustness of the AT-model \(\mathcal{M}\), and propose a new training framework, named Omnipotent Adversarial Training (OAT). Figure 1 illustrates the overall workflow of OAT, which consists of two key processes: the oracle training (OT) and the adversarial training (AT). In OAT, the model owner aims to leverage the oracle \(\mathcal{O}\) to provide correct annotations to train an AT-model \(\mathcal{M}\) to obtain robustness on the dataset \(\mathcal{D}\). The oracle can be represented as \(\mathcal{O}(\cdot)=\mathcal{O}_{C}(\mathcal{O}_{F}(\cdot))\), where \(\mathcal{O}_{F}\) is the feature encoder, and \(\mathcal{O}_{C}\) is the classification layer. The AT-model \(\mathcal{M}\) can be represented as \(\mathcal{M}(\cdot)=\mathcal{M}_{C}(\mathcal{M}_{F}(\cdot))\), where \(\mathcal{M}_{F}\) is the feature encoder, and \(\mathcal{M}_{C}\) is the classification layer. We use the same architecture for \(\mathcal{O}\) and \(\mathcal{M}\). In every training epoch, we first train the oracle, then adopt it to predict the labels for the dataset \(\mathcal{D}\), and finally use the predictions as annotations to generate AEs and train the AT-model \(\mathcal{M}\). Below, we present the details of the OT and AT processes.
### Oracle Training
Unlike the traditional model training process that focuses on achieving strong generalizability on test data, oracle training aims to optimize the oracle's ability to predict training samples as accurately as the ground-truth mapping \(\mathcal{A}_{\text{gt}}\). This unique objective motivates us to develop an effective approach to training the oracle. Because the oracle is trained with the annotations from the label mapping \(\mathcal{A}\), the training set \(\mathcal{D}\) can be both noisy and imbalanced, hindering the oracle's ability to approximate the target mapping \(\mathcal{A}_{\text{gt}}\). To overcome these issues, we introduce four main techniques, i.e., dataset re-sampling, label refurbishment, dataset split, and contrastive self-supervised learning.
**Dataset Re-sampling** (1 in Figure 1). Training a model to fit an imbalanced label distribution is more challenging than training a model on a balanced one [30]. Based on this prior, we over-sample the dataset \(\mathcal{D}(\mathcal{S},\mathcal{A})\) to make the number of data for every class equal. Specifically, we first find the largest number of data \(N_{\max}=\max(N_{i})\) among all classes. For each class \(i\), we fix all data \(x\) satisfying \(\mathcal{A}(x)=i\), so there are \(N_{i}\) data in class \(i\). Then, we randomly and repeatedly select \(N_{\max}-N_{i}\) data from the fixed data with replacement and add them into the set \(\mathcal{S}\) for class \(i\). This process yields \(N_{\max}\) samples for every class, and we refer to the resulting balanced dataset as \(\mathcal{D}^{\prime}(\mathcal{S}^{\prime},\mathcal{A})\). The dataset re-sampling process is launched only the first time the OT process runs, and the set \(\mathcal{S}^{\prime}\) is generated once and for all.
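A minimal NumPy sketch of this over-sampling step is given below (our own rendering; variable names and the toy class sizes are not from the paper):

```python
import numpy as np

def oversample_indices(labels, num_classes, seed=0):
    """Build the balanced index set S': every class is padded to N_max by sampling with replacement."""
    rng = np.random.default_rng(seed)
    n_max = np.bincount(labels, minlength=num_classes).max()
    balanced = []
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)                  # all data currently labeled as class c
        extra = rng.choice(idx, size=n_max - idx.size, replace=True)
        balanced.append(np.concatenate([idx, extra]))
    return np.concatenate(balanced)

labels = np.array([0] * 50 + [1] * 10 + [2] * 3)           # a toy long-tailed labeling
print(np.bincount(labels[oversample_indices(labels, 3)]))  # -> [50 50 50]
```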
**Label Refurbishment and Dataset Split** (2 in Figure 1). This technique is introduced to improve the prediction accuracy of the oracle \(\mathcal{O}\). It has been found that the model first learns samples with correct labels [4, 41]. So, in the early training phase, the model gives higher confidence scores for correctly labeled data. Due to the model's generalizability, the samples with incorrect labels will be classified into correct classes with high confidence. Our idea is to use a threshold \(\theta_{r}\) to refurbish labels as follows:
\[\mathcal{A}_{r}(x)=\begin{cases}\mathcal{A}(x),&\max(\sigma(\mathcal{O}(x)))< \theta_{r}\\ \arg\max(\sigma(\mathcal{O}(x))),&\max(\sigma(\mathcal{O}(x)))\geq\theta_{r} \end{cases}\]
where \(\mathcal{O}(x)\) is the logits output of data \(x\) and \(\sigma(\cdot)\) is the softmax function. After label refurbishment, we will obtain a dataset \(\mathcal{D}^{\prime}(\mathcal{S}^{\prime},\mathcal{A}_{r})\), which could contain less label noise.
Figure 1: Overview of OAT. We alternately train the oracle and the AT-model, and adopt the oracle to provide the AT-model with new annotations, to overcome the challenges in long-tailed learning and noisy label learning.
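A PyTorch sketch of this refurbishment rule is shown below (our own rendering; the `threshold` argument corresponds to \(\theta_{r}\), and the default of 0.9 is purely illustrative):

```python
import torch

@torch.no_grad()
def refurbish_labels(oracle, x, given_labels, threshold=0.9):
    """Return A_r(x): keep the given label unless the oracle's confidence reaches theta_r."""
    probs = torch.softmax(oracle(x), dim=1)      # sigma(O(x))
    conf, pred = probs.max(dim=1)                # max confidence and its class, per sample
    return torch.where(conf >= threshold, pred, given_labels)
```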
To train our oracle as meticulously as possible, we split the dataset \(\mathcal{D}^{\prime}(\mathcal{S}^{\prime},\mathcal{A}_{r})\) into a clean one and a noisy one. Previous works adopt the values of the loss function [3, 27] or the predicted confidence scores [33, 40] to identify whether the data have correct annotations or not, which is not stable and can fail under massive label noise [15]. Different from them, we adopt a non-parametric \(k\)-nearest neighbors (\(k\)-NN) model \(\mathcal{K}\) to split the dataset, as sketched below. The insight behind our method is that models trained in a contrastive self-supervised manner automatically map data belonging to the same class into neighboring feature embeddings [24], which indicates that data in the same class will have more similar features than data from different classes. Therefore, we first adopt \(\mathcal{K}\) to find the \(k\)-nearest neighbors of each sample \(x\) in the feature space. Then, we calculate the predicted label \(L_{x}^{\mathcal{K}}\) from \(\mathcal{K}\) by finding the class which contains most of the neighbors of each sample \(x\). If the label \(L_{x}^{\mathcal{K}}\) is the same as \(\mathcal{A}_{r}(x)\), we add \(x\) into the clean set \(\mathcal{S}^{\prime}_{C}\). Otherwise, we add \(x\) into the noisy set \(\mathcal{S}^{\prime}_{N}\). After the label refurbishment and dataset split, we have two new datasets, \(\mathcal{D}^{\prime}(\mathcal{S}^{\prime}_{C},\mathcal{A}_{r})\) containing less label noise and \(\mathcal{D}^{\prime}(\mathcal{S}^{\prime}_{N},\mathcal{A}_{r})\) containing more label noise, which are named \(\mathcal{D}^{\prime}_{C}\) and \(\mathcal{D}^{\prime}_{N}\), respectively.
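The split itself can be sketched as follows (our own simplified rendering with scikit-learn, operating on the oracle's feature embeddings \(\mathcal{O}_{F}(x)\); the value \(k=10\) is an assumption, not a number quoted in the paper):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def split_clean_noisy(features, labels, k=10):
    """Mark a sample as clean if the majority label of its k nearest neighbors matches A_r(x)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    neighbor_idx = nn.kneighbors(features, return_distance=False)[:, 1:]   # drop the sample itself
    votes = labels[neighbor_idx]                                           # labels of the k neighbors
    knn_label = np.array([np.bincount(v).argmax() for v in votes])         # L_x^K: majority class
    clean = knn_label == labels
    return np.flatnonzero(clean), np.flatnonzero(~clean)                   # indices of S'_C and S'_N
```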
**Contrastive Self-Supervised Learning** (3 in Figure 1). In prior works, models trained in a self-supervised manner have proven to be more robust against label noise [15, 25, 28] and label imbalance [24]. So, we borrow a contrastive learning approach, BYOL [18], but remove the momentum encoder, for two reasons. The first is that Chen et al. [8] proved that using a shared feature encoder to replace the momentum encoder can also achieve good results. The second is that using a shared encoder improves efficiency and reduces the training cost. We introduce two additional modules \(\mathcal{O}_{H}\) and \(\mathcal{O}_{P}\) to participate in the contrastive learning part. Because the contrastive learning does not require labels, we directly adopt the full dataset \(\mathcal{D}^{\prime}\) to train the oracle, and the loss can be represented as:
\[\mathcal{L}_{\mathrm{COS}}=-\mathbb{E}_{x\sim\mathcal{D}^{\prime}}\frac{\mathcal{O}_{H}(\mathcal{O}_{F}(\tau_{1}(x)))*\mathcal{O}_{P}(\mathcal{O}_{H}(\mathcal{O}_{F}(\tau_{2}(x))))}{\|\mathcal{O}_{H}(\mathcal{O}_{F}(\tau_{1}(x)))\|_{2}*\|\mathcal{O}_{P}(\mathcal{O}_{H}(\mathcal{O}_{F}(\tau_{2}(x))))\|_{2}},\]
where \(\tau_{1}\) is a weak data augmentation strategy (only cropping and flipping) and \(\tau_{2}\) is a strong data augmentation strategy based on AutoAugment [11].
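A minimal sketch of this momentum-free loss is given below (illustrative only; `encoder`, `proj_head` and `pred_head` stand for \(\mathcal{O}_{F}\), \(\mathcal{O}_{H}\) and \(\mathcal{O}_{P}\), the two augmented views are assumed to be precomputed, and any gradient-stopping scheme on the target branch is omitted since the formula does not specify one):

```python
import torch.nn.functional as F

def oracle_contrastive_loss(encoder, proj_head, pred_head, x_weak, x_strong):
    """Negative cosine similarity between the projection of the weak view and
    the prediction computed from the strong view, averaged over the batch."""
    z1 = proj_head(encoder(x_weak))                # O_H(O_F(tau_1(x)))
    p2 = pred_head(proj_head(encoder(x_strong)))   # O_P(O_H(O_F(tau_2(x))))
    return -F.cosine_similarity(z1, p2, dim=1).mean()
```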
For the supervised learning part, we only use the samples in the previously separated clean dataset \(\mathcal{D}^{\prime}_{C}\), and the loss is:
\[\mathcal{L}_{\mathrm{CE}}=\mathbb{E}_{x,\mathcal{A}_{r}(x)\sim\mathcal{D}^{ \prime}_{C}}\mathrm{cross-entropy}(\mathcal{O}(x),\mathcal{A}_{r}(x)).\]
Furthermore, to better leverage the knowledge from the oracle, we want the oracle to provide the AT-model \(\mathcal{M}\) with prediction distributions that differ from those of \(\mathcal{M}\). So, we adopt a penalty term defined as follows:
\[\mathcal{L}_{\mathrm{MSE}}=-\mathbb{E}_{x\sim\mathcal{D}^{\prime}_{C}} \mathrm{MSE}(\sigma(\mathcal{O}(x)),\sigma(\mathcal{M}(x)))\]
Overall, the loss function for the oracle training is
\[\mathcal{L}_{\mathcal{O}}=\mathcal{L}_{\mathrm{COS}}+\mathcal{L}_{\mathrm{CE }}+\mathcal{L}_{\mathrm{MSE}}.\]
### Adversarial Training
Although we adopt an oracle to correct the wrong annotations, this alone is not enough to train a robust model on a dataset with an unknown label distribution. Based on a previous study [49], it is important to design specific approaches to address the dataset imbalance, because training a model on a long-tailed dataset can cause it to badly overfit the head classes. In the AT stage of OAT, we combine two approaches, i.e., label distribution estimation and logits adjustment AT, to address these challenges together.
**Label Distribution Estimation** (4 in Figure 1). As the considered training set can be both noisy and imbalanced, it is important to infer the correct label annotations and label distribution. To obtain a relatively precise label distribution, we first adopt the oracle \(\mathcal{O}\) to predict the label for each sample in the dataset \(\mathcal{D}\). To make it clear, we define a new label mapping based on the oracle as follows:
\[\mathcal{A}^{\mathcal{O}}(x)=\arg\max(\sigma(\mathcal{O}(x))),x\in\mathcal{S}.\]
So, the label distribution predicted by the oracle is
\[N^{\mathcal{O}}_{i}=\sum_{x\in\mathcal{S}}\mathds{1}(\mathcal{A}^{\mathcal{O} }(x)=i),i\in[C],\]
where \(C\) is the number of classes in the dataset \(\mathcal{D}\).
**Logits Adjustment AT** (5 in Figure 1). To overcome the over-confidence problem in long-tailed recognition, we apply the logits adjustment approach [34] with the estimated label distribution \(N^{\mathcal{O}}_{i}\). Specifically, we adjust the model \(\mathcal{M}\)'s output logits during the training process in the following way:
\[l=\mathcal{M}(x)+\log([N^{\mathcal{O}}_{1},N^{\mathcal{O}}_{2},\dots,N^{ \mathcal{O}}_{C}]).\]
Whether the label distribution is a uniform one or a long-tailed one, the logits adjustment translates the model's confidence scores into Bayes-optimal predictions [34] under the current label distribution, making it a universal solution for all possible label distributions.
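The label distribution estimation and the logits adjustment can be sketched together as follows (an illustrative PyTorch sketch; `oracle`, `loader` and the `clamp` guard against empty classes are our assumptions, not part of the paper):

```python
import torch

def estimate_label_distribution(oracle, loader, num_classes, device="cpu"):
    """Count the oracle's predicted labels over the whole dataset (N_i^O)."""
    counts = torch.zeros(num_classes, device=device)
    with torch.no_grad():
        for x, _ in loader:
            pred = oracle(x.to(device)).argmax(dim=1)
            counts += torch.bincount(pred, minlength=num_classes).float()
    return counts

def adjust_logits(logits, counts):
    """Add the log of the estimated class counts to the raw logits."""
    return logits + torch.log(counts.clamp(min=1.0))
```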
The logits adjustment AT can be divided into two steps, i.e., AE generation and model training. In the AE generation step, we simply follow PGD-AT [32] to generate AE. This step can be formulated as
\[x_{\mathrm{adv}}=\mathrm{PGD}(\mathcal{M},x,\mathcal{A}^{\mathcal{O}}(x)),\]
where the PGD attack accepts as input a classifier model \(\mathcal{M}\), a clean sample \(x\) and its corresponding label \(\mathcal{A}^{\mathcal{O}}(x)\), and returns an AE \(x_{\mathrm{adv}}\). We adjust the output logits for the model during the AE generation.
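A sketch of this AE generation step is shown below (an \(l_{\infty}\) PGD sketch under the hyperparameters of Section 5.1; the random start commonly used in PGD-AT is omitted, and `counts` denotes the estimated label distribution \(N^{\mathcal{O}}\)):

```python
import torch
import torch.nn.functional as F

def pgd_with_adjusted_logits(model, x, y, counts, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD on the logits-adjusted cross-entropy; y = A^O(x)."""
    log_prior = torch.log(counts.clamp(min=1.0))
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv) + log_prior, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```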
In the model training step, we consider the oracle as a soft label generator, and adopt its confidence scores as labels
to train the AT-model \(\mathcal{M}\). This can be seen as a strong and adaptive label smoothing method [35], which further mitigates robust overfitting [38]. The loss function is written as
\[\mathcal{L}_{\mathrm{CE}}=-\mathbb{E}_{x\sim\mathcal{D}}\sum_{i=1}^ {C}\log(\sigma(\mathcal{M}(x_{\mathrm{adv}})\\ +\log([N_{1}^{\mathcal{O}},N_{2}^{\mathcal{O}},\dots,N_{C}^{ \mathcal{O}}]))_{i})*\sigma(\mathcal{O}(x))_{i}.\]
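This soft-label loss can be sketched as follows (illustrative only; `counts` is the estimated label distribution, and detaching the oracle's scores is our assumption, since the oracle is trained separately):

```python
import torch

def adjusted_soft_label_ce(model, oracle, x_adv, x, counts):
    """Cross-entropy between the adjusted prediction on the AE and the
    oracle's soft labels on the clean sample."""
    log_prior = torch.log(counts.clamp(min=1.0))
    log_p = torch.log_softmax(model(x_adv) + log_prior, dim=1)
    with torch.no_grad():
        soft = torch.softmax(oracle(x), dim=1)
    return -(soft * log_p).sum(dim=1).mean()
```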
To further leverage the feature embedding generated by the oracle, we add a contrastive learning loss into the model training step. This loss has the same formula as the contrastive loss in the oracle training process:
\[\mathcal{L}_{\mathrm{COS}}=-\mathbb{E}_{x\sim\mathcal{D}}\frac{\mathcal{O}_{ H}(\mathcal{O}_{F}(x))*\mathcal{O}_{P}(\mathcal{O}_{H}(\mathcal{M}_{F}(x_{ \mathrm{adv}})))}{\|\mathcal{O}_{H}(\mathcal{O}_{F}(x))\|_{2}*\|\mathcal{O}_{ P}(\mathcal{O}_{H}(\mathcal{M}_{F}(x_{\mathrm{adv}})))\|_{2}},\]
where we consider the PGD attack as a very strong data augmentation strategy.
Overall, the loss function for the adversarial training is
\[\mathcal{L}_{\mathcal{M}}=\mathcal{L}_{\mathrm{CE}}+\mathcal{L}_{\mathrm{COS}}.\]
In our experiments, we consider \(\mathcal{L}_{\mathrm{MSE}}\) in \(\mathcal{L}_{\mathcal{O}}\) and \(\mathcal{L}_{\mathrm{COS}}\) in \(\mathcal{L}_{\mathcal{M}}\) to be the two terms arising from oracle-model interactions. We explore the effectiveness of this interaction through ablation studies in Section 5.2.
## 5 Experiments
### Configurations
**Datasets and models.** We adopt two datasets to evaluate our proposed OAT, i.e., CIFAR-10 and CIFAR-100 [26]. We generate imbalanced datasets based on the _exponential method_[6], which is widely used in previous papers [12, 37, 49]. For the label noise generation, we consider two types of label noise, i.e., _symmetric noise_ and _asymmetric noise_, which are common settings in previous works [15, 27, 25]. Specifically, the symmetric noise means the noisy label is uniformly selected from all possible labels except the ground-truth one. The asymmetric noise is to simulate a more practical scenario, where the ground-truth label can only be changed into a new one with similar semantic information, e.g., truck \(\rightarrow\) automobile, bird \(\rightarrow\) airplane, deer \(\rightarrow\) horse, and cat \(\rightarrow\) dog. We only apply the asymmetric noise to CIFAR-10, as we cannot find prior works studying the asymmetric noise in CIFAR-100. When we generate a label-noisy and imbalanced dataset, we first generate a dataset under the given NR and then use the exponential method on the noisy labels to sample it to obtain a long-tailed dataset under the given IR, which can guarantee that all classes contain at least one correct sample. So in some cases, the ground-truth label distribution can be a balanced one and the noisy label distribution is badly imbalanced, which increases the difficulty of adversarial training. For the model structure, because the oracle and AT-model in OAT are based on ResNet-18 [22], to make a fair comparison, we implement all baseline methods on ResNet-18.
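As an illustration, the dataset construction described above (noise injection followed by exponential subsampling of the noisy labels) can be sketched as below; this is a simplified sketch with our own function and parameter names, and the guarantee that every class keeps at least one correct sample is not enforced here:

```python
import numpy as np

def make_noisy_imbalanced(labels, num_classes, noise_ratio, imb_ratio, seed=0):
    """Inject symmetric label noise, then subsample classes with exponentially
    decaying sizes computed from the noisy labels. Returns kept indices and
    their noisy labels."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    for i in np.where(rng.random(len(labels)) < noise_ratio)[0]:
        noisy[i] = rng.choice([c for c in range(num_classes) if c != labels[i]])
    n_max = np.bincount(noisy, minlength=num_classes).max()
    keep = []
    for c in range(num_classes):
        idx = rng.permutation(np.where(noisy == c)[0])
        n_c = max(int(n_max * (1.0 / imb_ratio) ** (c / (num_classes - 1))), 1)
        keep.extend(idx[:n_c])
    keep = np.sort(np.array(keep))
    return keep, noisy[keep]
```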
**Baseline.** We consider five baseline methods, i.e., PGD-AT [32], TRADES [51], SAT [23], TE [14] and RoBal [49]. Specifically, PGD-AT and TRADES are two representative AT strategies, which are proposed to improve the model's robustness on balanced and clean datasets. SAT and TE study the memorization of AT under random labels. Some of their experiments are conducted on datasets with random label noise and achieve good results, so we consider that they can be adopted to train models on noisy datasets. In order to make a fair comparison, we adopt the PGD version of SAT and TE, based on their official implementations. RoBal is proposed to solve the long-tailed AT challenge. We compare OAT with these baseline methods under various settings.
**Implementation Details.** For OAT, we adopt the same \(k\)-NN structure as SSR+ [15] with \(k=200\), and follow the hyperparameter setup in its implementation, i.e., \(\theta_{r}=0.8\). \(\mathcal{O}_{H}\) and \(\mathcal{O}_{P}\) are two MLPs with one hidden layer, whose hidden dimension is 256 and output dimension is 128. We discuss the training cost overhead in Appendix F.
To evaluate the robustness and clean accuracy of baselines and OAT, we follow the training strategy proposed in [38], except for RoBal, which follows a different training setting for long-tailed datasets [49]. All other hyperparameters of the baseline methods follow their official implementations. Specifically, for all methods, we use SGD as the optimizer, with the initial learning rate 0.1, momentum 0.9, weight decay 0.0005, and batch size 128. For RoBal, the total number of training epochs is 80, and we decay the learning rate at the 60-th and 75-th epoch with a factor 0.1. For others, the total number of training epochs is 200, and the learning rate decays at the 100-th and 150-th epoch with a factor 0.1. Note that the learning rate decay is only for the AT-model in OAT, while the oracle does not need to adjust the learning rate, because we observe that a larger learning rate can slow down the convergence speed of the oracle and improve the AT-model's robustness by introducing uncertainty in the oracle's predictions. For adversarial training, except for TRADES, we adopt \(l_{\infty}\)-norm PGD [32], with a maximum perturbation size \(\epsilon=8/255\) for 10 iterations, and step length \(\alpha=2/255\) in each iteration. For TRADES, we follow its official implementation, with a maximum perturbation size \(\epsilon=8/255\) for 10 iterations, the step length \(\alpha=2/255\) in each iteration, and robust loss scale \(\beta=6.0\).
**Metrics.** In the main paper, we report the clean accuracy (CA) and robust accuracy (RA) under AutoAttack [10]. Other results under different attacks can be found in Appendix C. We save the "**Best**" model with the highest robustness on the test set under PGD-20 and the "**Last**" model at the end of training. Due to the page limit, some results of the "**Last**" models are in Appendix A.
### Ablation Study
We first explore the effectiveness of different components proposed in OAT, including the oracle-model interactions and logits adjustment. Table 1 presents the results on a balanced and imbalanced clean dataset, respectively. It is clear that with the oracle-model interaction, both clean accuracy and robust accuracy are improved. Furthermore, the results indicate that with the interaction, the robust overfitting is mitigated. On the other hand, the logits adjustment will harm the clean accuracy and robustness of models trained on the balanced dataset and cause some robust overfitting on the imbalanced dataset, because the estimated label distribution from the oracle is not as exact as the ground-truth distribution. However, when we train models on an imbalanced dataset, the clean accuracy and robustness of the best model indicate that the effectiveness of the logits adjustment is significant. Overall, both oracle-model interaction and logits adjustment are essential components in OAT.
### Results under Label Noise
We study the models trained on balanced but noisy datasets. Tables 2 and 4 show the results on the balanced CIFAR-10 dataset containing symmetric and asymmetric noise, respectively. Table 3 illustrates the results of models trained on the balanced CIFAR-100 dataset with symmetric noise. The symmetric noise harms the clean accuracy of baseline models to a greater degree than the robustness. Clearly, decreasing the clean accuracy will reduce the robust accuracy. So when the noise ratio reaches 0.8, we observe that models trained with baseline methods do not converge, and the robustness is close to zero. Based on the results, it is clear that OAT achieves consistently high clean accuracy and robust accuracy under different settings. Specifically, SAT adopts the model's confidence scores to refurbish the labels, and achieves lower clean accuracy, as a model trained with AEs is less overconfident on the data [47] and converges more slowly, which makes the label refurbishment fail. On the other hand, TE only works under mild label noise and fails when there is massive noise in the dataset. For example, on CIFAR-10 and NR = 0.6, the clean accuracy of the model with the best robust accuracy of OAT is about 32% higher than that of SAT. The robustness of this model is about 6% higher than that of TE. Besides, with an increasing noise ratio, we find that both clean accuracy and robustness face the overfitting challenge. Among all methods, OAT best alleviates this overfitting, because of the adaptive label smoothing from the oracle.
Under asymmetric noise, the number of samples in class "truck" will be significantly less than that in class "automobile". RoBal achieves better results than other baselines. However, because of the label distribution estimation and logits adjustment in OAT, it outperforms RoBal in both clean accuracy and robustness, which proves that OAT is the best choice for different types of label noise.
### Label Distribution Correction
To evaluate the quality of the estimated label distribution, we illustrate the oracle's predicted labels in Figure 2. Other cases can be found in Appendix E. We use "Prior" to represent the label distribution of the known dataset, and "GT" to represent the ground-truth distribution of clean labels, which is unknown for a noisy dataset. We plot the estimated label distribution in the 10th, 50th, and 100th training epoch, respectively. We consider a complex case, where both clean labels and noisy labels are long-tailed. The results prove that our oracle can correctly produce the label distribution under this scenario. So OAT outperforms other baselines in various settings.
## 6 Conclusion and Future Work
We propose a new training strategy, OAT, to solve real-world adversarial training challenges, including label noise and data imbalance. By introducing an oracle, our method achieves state-of-the-art results under different evaluation setups. We hope the dataset re-sampling, logits adjustment AT and other proposed techniques can inspire researchers to explore more effective training strategies for practical usage.
The main limitation of OAT is the performance drop under massive asymmetric noise, although it is still much better than prior works. From the results, we can find that models trained on a dataset containing massive asymmetric label noise will have lower clean accuracy and be more prone to overfitting the training set. It is important to address this challenge in future work.
## 7 Acknowledgement
This work is supported under the RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). It is also supported in part by Singapore Ministry of Education (MOE) AcRF Tier 2 MOE-T2EP20121-0006 and AcRF Tier 1 RS02/19.
|
2303.15756 | Complete non-ambiguous trees and associated permutations: new
enumerative results | We study a link between complete non-ambiguous trees (CNATs) and permutations
exhibited by Daniel Chen and Sebastian Ohlig in recent work. In this, they
associate a certain permutation to the leaves of a CNAT, and show that the
number of $n$-permutations that are associated with exactly one CNAT is
$2^{n-2}$. We connect this to work by the first author and co-authors linking
complete non-ambiguous trees and the Tutte polynomial of the associated
permutation graph. This allows us to prove a number of conjectures by Chen and
Ohlig on the number of $n$-permutations that are associated with exactly $k$
CNATs for various $k > 1$, via various bijective correspondences between such
permutations. We also exhibit a new bijection between $(n-1)$-permutations and
CNATs whose permutation is the decreasing permutation $n(n-1)\cdots1$. This
bijection maps the left-to-right minima of the permutation to dots on the top
row of the corresponding CNAT, and descents of the permutation to empty rows of
the CNAT. | Thomas Selig, Haoyue Zhu | 2023-03-28T06:32:17Z | http://arxiv.org/abs/2303.15756v2 | Complete non-ambiguous trees and associated permutations: connections through the Abelian sandpile model
###### Abstract.
We study a link between complete non-ambiguous trees (CNATs) and permutations exhibited by Daniel Chen and Sebastian Ohlig in recent work. In this, they associate a certain permutation to the leaves of a CNAT, and show that the number of \(n\)-permutations that are associated with exactly one CNAT is \(2^{n-2}\). We connect this to work by the first author and co-authors linking complete non-ambiguous trees and the Abelian sandpile model. This allows us to prove a number of conjectures by Chen and Ohlig on the number of \(n\)-permutations that are associated with exactly \(k\) CNATs for various \(k>1\), via bijective correspondences between such permutations. We also exhibit a new bijection between \((n-1)\)-permutations and CNATs whose permutation is the decreasing permutation \(n(n-1)\cdots 1\). This bijection maps the left-to-right minima of the permutation to dots on the bottom row of the corresponding CNAT, and descents of the permutation to empty rows of the CNAT.
## 1. Introduction
Non-ambiguous trees (NATs) were originally introduced by Aval _et al._ in [2] as a special case of the tree-like tableaux from [4]. The combinatorial study of these objects was further developed in [3], which included a generalisation of NATs to higher dimensions. In [13], the authors described a multi-rooted generalisation of complete NATs (CNATs), and linked this to the so-called Abelian sandpile model (ASM), see Sections 2.4, 2.5 and 2.6 for more details.
We can associate a permutation to a CNAT by keeping only its leaf dots (see Section 2.3). This link between CNATs and permutations was first noted, to the best of our knowledge, in [13] (see Section 4 in that paper for more details). Chen and Ohlig [8] initiated the first in-depth combinatorial study of this relationship. In particular, they characterised the set of permutations \(\pi\) which are associated to a unique CNAT. More generally, they looked into the number of \(n\)-permutations with \(k\) associated CNATs, for \(k\geq 1\), and provided a number of conjectures on the enumeration of such permutations. They also studied in detail so-called _upper-diagonal_ CNATs, which are CNATs where the associated permutation is the decreasing permutation \(n(n-1)\cdots 1\), using these to prove a conjecture of Laborde-Zubieta on occupied corners of tree-like tableaux [17].
In this paper, we will further develop the study initiated in [8], using the aforementioned link to the ASM from [13]. In particular, we will see (Theorem 2.12) that the number of CNATs associated with a given permutation \(\pi\) is equal to the number of _minimal recurrent configurations_ for the ASM on the _permutation graph_\(G_{\pi}\) of \(\pi\). Minimal recurrent configurations are in natural bijection with certain acyclic orientations of the graph. The connection between CNATs and the ASM allows us to solve a number of conjectures in [8], via various bijective correspondences between permutations with a given number of CNATs.
We also add to the study of upper-diagonal CNATs by providing a new bijection between upper-diagonal CNATs of size \(n\) and permutations of length \(n-1\) (see Theorem 3.3). This new bijection has the following added benefits. Firstly, it is direct, whereas the bijection in [8] used intermediate objects called tiered trees (see [11]). Secondly, it maps certain statistics of the upper-diagonal
CNATs such as the number of top-row dots, or the number of empty rows, to well-known statistics of the corresponding permutation.
Our paper is organised as follows. In Section 2, we recall some necessary definitions of, and notation on, the combinatorial objects that will be considered in this paper. These include graphs, permutations, CNATs with associated permutations, and the ASM and its minimal recurrent configurations. We also establish a number of preliminary results that will be useful in the remainder of the paper. At the end of the section we provide the key connection between CNATs and minimal recurrent configurations of the ASM (Theorem 2.12). In Section 3, we focus on one specific family of CNATs, called _upper-diagonal CNATs_. We introduce a concept of _labelled CNAT_, and describe a bijection between labelled CNATs and permutations of the label set (Theorem 3.3), which preserves certain statistics of both objects. This bijection then specialises to a bijection between upper-diagonal CNATs of size \(n\) and permutations of length \(n-1\). The bijection relies on two operations on labelled CNATs, called _top-row decomposition_ and _top-row deletion_.
In Section 4, we focus on counting permutations according to their number of associated CNATs. Specifically, for \(k,n\geq 1\) we are interested in the set \(B(n,k)\) of permutations of length \(n\) which are associated with exactly \(k\) CNATs. The study of these sets was initiated in [8], and we continue that work here by proving a number of conjectures left by the authors. We begin in Section 4.1 by giving characterisations of the sets \(B(n,1)\) and \(B(n,2)\) in terms of the so-called _quadrant condition_ (Definition 4.1, Propositions 4.2 and 4.10). This allows us to describe a simple bijection between the product set \(\{2,\cdots,n\}\times B(n,1)\) and \(B(n+1,2)\) (Theorem 4.3), and deduce the enumerative formula for the latter (Corollary 4.5). In Section 4.2 we use permutation _patterns_ to establish a bijection between the sets \(B(n,2)\) and \(B(n+1,3)\) (Theorem 4.15). We then show in Section 4.3 that the set \(B(n,5)\) is always empty (Theorem 4.17). Finally, in Section 4.4 we consider the maximal value of \(k\) such that \(B(n,k)\) is non-empty. We show (Theorem 4.21) that this is achieved for \(k=(n-1)!\), and that the unique permutation in \(B(n,(n-1)!)\) is precisely the decreasing permutation \(n(n-1)\cdots 1\) whose CNATs were studied in detail in Section 3. We then conclude our paper with a brief summary of our results as well as some open problems and directions for future research (Section 5).
## 2. Preliminaries
In this section we introduce the various objects that will be studied and used throughout the paper, alongside the necessary notation. We also state and prove a number of useful preliminary results.
### Graphs and orientations
Throughout this paper all graphs considered are finite, undirected, simple (i.e. no loops or multiple edges are allowed), and connected. For a graph \(G\), we denote \(V(G)\) and \(E(G)\) its set of vertices and edges respectively, and write \(G=(V(G),E(G))\). Two graphs \(G=(V,E)\) and \(G^{\prime}=(V^{\prime},E^{\prime})\) are said to be _isomorphic_ if there exists a bijection \(\phi:V\to V^{\prime}\) such that for any vertices \(v,w\in V\), we have \((v,w)\in E\) if, and only if, \((\phi(v),\phi(w))\in E^{\prime}\). For a vertex \(v\in V(G)\), we write \(\deg_{v}\) for the degree of \(v\) (its number of neighbours in the graph). Given a subset \(V^{\prime}\subseteq V(G)\), the _induced_ subgraph on \(V^{\prime}\) is the graph with vertex set \(V^{\prime}\) and edge set consisting of all edges in \(E\) whose end-points are both in \(V^{\prime}\). We denote it \(G\left[V^{\prime}\right]\).
A _cycle_ in the graph \(G\) is a sequence of vertices \(v_{0},v_{1},\cdots,v_{n}=v_{0}\) (for some \(n\geq 3\)) such that for every \(i>0\), \((v_{i-1},v_{i})\) is an edge of \(G\), and the vertices \(v_{0},v_{1},\cdots,v_{n-1}\) are all distinct. The integer \(n\) is the _length_ of the cycle, and we will say that \(G\)_contains_ a cycle of length \(n\) (or \(n\)-cycle for short) if there exists such a cycle in \(G\). The \(n\)-cycle (graph) \(C_{n}\) is the graph consisting of a single cycle of length \(n\), with no other vertices or edges. If \(G\) contains a cycle \(v_{0},v_{1},\cdots,v_{n}=v_{0}\) such that \(G\left[\{v_{0},\cdots,v_{n-1}\}\right]\) is the \(n\)-cycle \(C_{n}\), we say that \(G\)_induces_ an \(n\)-cycle. A _tree_ is a (connected)
graph containing no cycle of any length. A _spanning tree_ of a graph \(G\) is a tree \(G^{\prime}\) such that \(V(G^{\prime})=V(G)\).
An _orientation_\(\mathcal{O}\) of a graph \(G\) is the assignment of a direction to each edge of \(G\). Given an orientation \(\mathcal{O}\) of \(G\), and an edge \((v,w)\), we write \(v\xrightarrow{\mathcal{O}}w\) to indicate that the edge is directed from \(v\) to \(w\) in the orientation \(\mathcal{O}\), and \(v\xleftarrow{\mathcal{O}}w\) when it is directed from \(w\) to \(v\). We also write \(\mathrm{in}_{v}^{\mathcal{O}}\), resp. \(\mathrm{out}_{v}^{\mathcal{O}}\) for the number of incoming edges (edges \(v\xleftarrow{\mathcal{O}}w\)), resp. outgoing edges (edges \(v\xrightarrow{\mathcal{O}}w\)), at \(v\) in the orientation \(\mathcal{O}\). A vertex \(v\) is a _target_, resp. _source_, of an orientation \(\mathcal{O}\) if \(\mathrm{in}_{v}^{\mathcal{O}}=\mathrm{deg}_{v}\) (all edges are incoming), resp. \(\mathrm{out}_{v}^{\mathcal{O}}=\mathrm{deg}_{v}\) (all edges are outgoing)1.
Footnote 1: The terminology “sink” is more frequent than “target” in the literature. However, we will be considering the Abelian sandpile model on graphs, for which there is already a designated, fixed, sink vertex, so we use the latter terminology here to avoid confusion.
An orientation is _acyclic_ if it contains no directed cycle, i.e. there is no sequence \(v_{0},\cdots,v_{n-1}\) of vertices such that \(v_{0}\xrightarrow{\mathcal{O}}v_{1}\xrightarrow{\mathcal{O}}\ldots\xrightarrow{\mathcal{O}}v_{n-1}\xrightarrow{\mathcal{O}}v_{0}\). It is straightforward to check that an acyclic orientation must have at least one source and at least one target. For \(s\in V(G)\), we say that an acyclic orientation \(\mathcal{O}\) is _\(s\)-rooted_ if \(s\) is the unique target of \(\mathcal{O}\). We denote by \(\mathrm{AcycOrient}_{s}\left(G\right)\) the set of \(s\)-rooted acyclic orientations of a graph \(G\).
### Permutations, permutation patterns, and permutation graphs
For \(n\geq 1\), we let \(S_{n}\) be the set of permutations of length \(n\) (called \(n\)-permutations for short). We will usually use standard one-line notation for permutations. That is, a permutation \(\pi\in S_{n}\) is a word \(\pi=\pi_{1}\cdots\pi_{n}\) on the alphabet \([n]\) such that every letter in \([n]\) appears exactly once in \(\pi\). By convention, the unique \(0\)-permutation is simply the empty word. For \(i,j\in[n],i\neq j\), we write \(i\prec_{\pi}j\) if \(i\) appears before (to the left of) \(j\) in the permutation (word) \(\pi\).
Another useful way of viewing permutations is their _graphical representation_. In this view, an \(n\)-permutation is a set of dots in an \(n\times n\) grid such that every row and every column of the grid contains exactly one dot. For a permutation \(\pi=\pi_{1}\cdots\pi_{n}\), we construct its graphical representation by putting dots in column \(i\) and row \(\pi_{i}\) for \(i=1,\cdots,n\). When connecting permutations to CNATs, it will be convenient to label columns from left-to-right and rows from top-to-bottom in this representation, as in Figure 1 below.
A permutation \(\pi=\pi_{1}\cdots\pi_{n}\) is said to be _reducible_ if there exists \(1\leq k<n\) such that \(\pi_{1}\cdots\pi_{k}\) is a \(k\)-permutation. A permutation is _irreducible_ if it is not reducible. For example, the permutation \(\mathbf{312564}\) is reducible, while \(561243\) is irreducible.2 A _fixed point_ of a permutation \(\pi\) is an index
Figure 1. Graphical representation of the permutation \(\pi=561243\).
\(j\in[n]\) such that \(\pi_{j}=j\). A _left-to-right minimum_ is a letter \(\pi_{i}\) such that \(\pi_{j}>\pi_{i}\) for all \(j<i\). For example, the permutation \(\mathbf{5}61243\) has two left-to-right minima \(5\) and \(1\).
Given a permutation \(\pi\) and two letters \(i,j\in[n]\), we say that \((i,j)\) is an _inversion_ of \(\pi\) if \(i>j\) and \(i\prec_{\pi}j\). A _descent_ of \(\pi\) is an inversion consisting of consecutive letters in the word, that is \(i=\pi_{k}\) and \(j=\pi_{k+1}\) for some \(k\in[n-1]\). For example, the permutation \(\pi=561243\) has the following inversions (**descents** in bold): \((5,1),(5,2),(5,4),(5,3),\textbf{(6,1)},(6,2),(6,4),(6,3),\textbf{(4,3)}\). The _permutation graph_ of \(\pi\) is the graph with vertex set \([n]\) and edge set the set of inversions of \(\pi\). We denote this graph \(G_{\pi}\). Figure 2 shows the permutation graph of \(\pi=561243\). Note that the labels of the vertices are the row labels in the graphical representation of \(\pi\).
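As a small illustrative sketch (ours, not part of the original paper), the edge set of \(G_{\pi}\) can be computed directly from the one-line word:

```python
def permutation_graph_edges(pi):
    """Return the inversions (i, j) of pi, i.e. the edges of G_pi:
    pairs with i > j such that i appears before j in the word pi."""
    pos = {letter: k for k, letter in enumerate(pi)}
    return [(i, j) for i in pi for j in pi if i > j and pos[i] < pos[j]]

# The permutation 561243 has the nine inversions listed above.
print(permutation_graph_edges([5, 6, 1, 2, 4, 3]))
```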
**Proposition 2.1**.: _Let \(\pi\) be a permutation. The permutation graph \(G_{\pi}\) is connected if and only if the permutation \(\pi\) is irreducible._
This is a classical result on permutation graphs, see e.g. [16, Lemma 3.2]. From now on, we assume that all permutations are irreducible unless explicitly stated otherwise. One permutation that will play an important role in our paper, particularly in Section 3 and Theorem 4.21, is the _decreasing_ permutation, defined by \(\mathrm{dec}_{n}:=n(n-1)\cdots 1\) (for some \(n\geq 1\)). In \(\mathrm{dec}_{n}\) every pair is an inversion, so the corresponding graph is the complete graph \(K_{n}\).
Let \(\pi\) be an \(n\)-permutation, and \(\tau\) a \(k\)-permutation for some \(k\leq n\). We say that \(\pi\) _contains_ the _pattern_ \(\tau\) if there exist indices \(i_{1}<i_{2}<\cdots<i_{k}\) such that \(\pi_{i_{1}},\pi_{i_{2}},\cdots,\pi_{i_{k}}\) appear in the same relative order as \(\tau\). In other words, if we delete all columns and rows other than \((i_{1},\pi_{i_{1}}),\cdots,(i_{k},\pi_{i_{k}})\) from the graphical representation of \(\pi\), we get the graphical representation of \(\tau\). In this case, we call \(\pi_{i_{1}},\cdots,\pi_{i_{k}}\) an _occurrence_ of the pattern \(\tau\). If \(\pi\) does not contain the pattern \(\tau\), we say that \(\pi\) _avoids_ \(\tau\). For example, the permutation \(\pi=561243\) contains two occurrences of the pattern \(321\), given by the bolded letters: \(\mathbf{5}612\mathbf{43}\), \(5\mathbf{6}12\mathbf{43}\). However, it avoids the pattern \(4321\) since there is no sequence of \(4\) letters in decreasing order.
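For small permutations, pattern containment can be checked by brute force, as in the following sketch (ours, for illustration only):

```python
from itertools import combinations

def contains_pattern(pi, tau):
    """True if some subsequence of pi is order-isomorphic to the pattern tau."""
    rank = lambda word: [sorted(word).index(x) for x in word]
    return any(rank(list(sub)) == rank(list(tau))
               for sub in combinations(pi, len(tau)))

print(contains_pattern([5, 6, 1, 2, 4, 3], [3, 2, 1]))     # True
print(contains_pattern([5, 6, 1, 2, 4, 3], [4, 3, 2, 1]))  # False
```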
The study of permutation patterns has been of wide interest in permutation research in recent decades, with much focus on the enumeration of permutations avoiding certain given patterns. For example, it is still an open problem to establish the growth rate of the set of \(n\)-permutations avoiding the pattern \(1324\) (see [5] and [18] for recent results in that direction). For more information on permutation patterns, we refer the interested reader to Kitaev's book [15] or Chapters 4 and 5 in the most recent edition of Bona's book [6].
Figure 2. The permutation graph of \(\pi=561243\). On the left, we draw the edges corresponding to inversions of \(\pi\) on its graphical representation. On the right, we re-draw the permutation graph in more readable form.
Patterns of a permutation \(\pi\) are linked to induced subgraphs of its permutation graph \(G_{\pi}\) via the following observation. Let \(\pi\) be an \(n\)-permutation, and \(\tau\) a \(k\)-permutation, for some \(k\leq n\). Suppose that \(\pi_{1},\cdots,\pi_{k}\) is an occurrence of the pattern \(\tau\) in \(\pi\). Then the induced subgraph \(G\left[\left\{\pi_{1},\cdots,\pi_{k}\right\}\right]\) is isomorphic to the permutation graph \(G_{\tau}\). Conversely, if there exist vertices \(\pi_{1},\cdots,\pi_{k}\) such that \(G\left[\left\{\pi_{1},\cdots,\pi_{k}\right\}\right]\) is isomorphic to \(G_{\tau}\), then \(\pi_{1},\cdots,\pi_{k}\) is an occurrence of the pattern \(\tau\) in \(\pi\). This yields the following.
**Proposition 2.2**.: _Let \(\pi\) be a permutation, and \(G_{\pi}\) its corresponding permutation graph._
1. _The graph_ \(G_{\pi}\) _induces a_ \(3\)_-cycle if, and only if, the permutation_ \(\pi\) _contains the pattern_ \(321\)_._
2. _The graph_ \(G_{\pi}\) _induces a_ \(4\)_-cycle if, and only if, the permutation_ \(\pi\) _contains the pattern_ \(3412\)_._
3. _The graph_ \(G_{\pi}\) _induces no cycle of length_ \(5\) _or more._
Points (1) and (2) follow from the above observation, combined with the fact that \(3\)- and \(4\)-cycles correspond uniquely to the permutations \(321\) and \(3412\) respectively. Point (3) was shown in [7, Proposition 1.7].
### Complete non-ambiguous trees and associated permutations
In this section we define (complete) non-ambiguous trees and introduce their associated permutations. Non-ambiguous trees were originally introduced by Aval _et al._ in [2] as a special case of the tree-like tableaux from [4].
**Definition 2.3**.: A _non-ambiguous tree_ (NAT) is a filling of an \(m\times n\) rectangular grid, where each cell is either dotted or not, satisfying the following conditions.
1. Every row and every column contains at least one dotted cell.
2. Aside from the top-left cell, every dotted cell has either a dotted cell above it in the same column, or a dotted cell to its left in the same row, but not both.
Note that the two conditions imply that the top-left cell must always be dotted. The use of the word _tree_ to describe these objects comes from the following observation. Given a NAT \(T\), we connect every dot \(d\) not in the top-left cell to its _parent_ dot \(p(d)\), which is the dot immediately above it in its column or to its left in its row (by Condition (2) of the above definition, exactly one of these must exist). This yields a binary tree, rooted at the top-left dot (see Figure 3).
Following tree terminology, for a NAT \(T\), we call the dot lying in the top left cell the _root dot_, or simply the _root_, of \(T\). Similarly, a _leaf dot_ is a dot with no dots below it in the same column or to its right in the same row. An _internal dot_ is a dot which is not a leaf dot (this includes the root, unless the NAT is a single dotted cell). Given a NAT, it will be convenient to label the columns \(1,2,\cdots n\) from left to right, and the rows \(1,2,\cdots,m\) from top to bottom.
A NAT is said to be _complete_ if the underlying tree is complete, i.e. every internal dot has exactly two children. More formally, a complete non-ambiguous tree (CNAT) is a NAT in which every dot either has both a dot below it in the same column and a dot to its right in the same row, or neither of these. The _size_ of a CNAT is its number of leaf dots, or equivalently one more than its number of internal dots.
It is straightforward to check that in a CNAT, every row and every column must have exactly one leaf dot (see e.g. [13, Section 4.3] or [8, Section 2.1]). The leaf dot in a column, resp. row, is simply its bottom-most, resp. right-most, dot. As such, the set of leaf dots of a CNAT \(T\) of size \(n\) forms the graphical representation of an \(n\)-permutation \(\pi\), which must be irreducible (see e.g. [8, Theorem 3.3]). We say that \(\pi\) is the permutation _associated_ with the CNAT \(T\). For example, the CNAT on the left of Figure 3 has associated permutation \(\pi=45312\). For a permutation \(\pi\), we define the set \(\mathrm{CNAT}\left(\pi\right)\) to be the set of CNATs whose associated permutation is \(\pi\), and \(\mathrm{cnat}\left(\pi\right):=\left|\mathrm{CNAT}\left(\pi\right)\right|\) to be the number of such CNATs.
### The Abelian sandpile model
In this section we give a brief introduction to the Abelian sandpile model (ASM). We define the notion of recurrent configurations, and recall Dhar's burning algorithm for checking if a given configuration is recurrent or not.
A _sandpile graph_ is a pair \((G,s)\) where \(G\) is a graph, and \(s\in V(G)\) a distinguished vertex of \(G\). We will call \(s\) the _sink_ of the sandpile graph \((G,s)\). For a sandpile graph \((G,s)\), we denote \(\tilde{V}(G):=V(G)\setminus\{s\}\) the set of non-sink vertices. A _configuration_ on \((G,s)\) is a vector \(c=\left(c_{v}\right)_{v\in\tilde{V}(G)}\in\mathbb{Z}_{+}^{|\tilde{V}(G)|}\) which assigns a non-negative integer to each non-sink vertex. We think of \(c_{v}\) as the number of "grains of sand" at vertex \(v\). We denote \(\operatorname{Config}_{s}\left(G\right)\) the set of configurations on \((G,s)\). For \(v\in V(G)\), let \(\alpha^{v}\in\operatorname{Config}_{s}\left(G\right)\) be the configuration \(c\) with \(c_{v}=1\) and \(c_{w}=0\) for \(w\neq v\). By convention, \(\alpha^{s}\) is the all-\(0\) configuration.
A vertex \(v\in\tilde{V}(G)\) is said to be _stable_ in a configuration \(c\) if \(c_{v}<\operatorname{deg}_{v}\). Otherwise it is _unstable_. A configuration is stable if all its vertices are stable, and we denote \(\operatorname{Stable}_{s}\left(G\right)\) the set of all stable configurations on \((G,s)\). Unstable vertices topple as follows. We define the _toppling operator_\(\operatorname{Topp}_{v}\) corresponding to the toppling of an unstable vertex \(v\in\tilde{V}(G)\) in a configuration \(c\in\operatorname{Config}_{s}\left(G\right)\) by:
\[\operatorname{Topp}_{v}(c):=c-\operatorname{deg}_{v}\alpha^{v}+\sum_{w\sim v} \alpha^{w}, \tag{1}\]
where the sum is over all neighbours \(w\) of \(v\) in \(G\), and the addition operator on configurations denotes pointwise addition at each vertex. In words, the toppling of a vertex \(v\) sends one grain of sand from \(v\) to each neighbour \(w\) of \(v\) in \(G\).
Performing this toppling may cause other vertices to become unstable, and we topple these also. One can show (see e.g. [10, Section 5.2]) that starting from some unstable configuration \(c\) and toppling successively unstable vertices, we eventually reach a stable configuration \(c^{\prime}\) (think of the sink as absorbing grains). In addition, the configuration \(c^{\prime}\) reached does not depend on the sequence in which vertices are toppled. We call this \(c^{\prime}\) the _stabilisation_ of \(c\) and denote it \(\operatorname{Stab}(c)\).
We now define a Markov chain on the set \(\operatorname{Stable}_{s}\left(G\right)\) of stable configurations. Fix a probability distribution \(\mu=\left(\mu_{v}\right)_{v\in\tilde{V}(G)}\) on \(\tilde{V}(G)\) such that \(\mu_{v}>0\) for all \(v\in\tilde{V}(G)\). At each step of the Markov chain we add a grain at the vertex \(v\) with probability \(\mu_{v}\) and stabilise the resulting configuration.
The _recurrent_ configurations are those that appear infinitely often in the long-time running of this Markov chain. We let \(\operatorname{Rec}_{s}\left(G\right)\) be the set of recurrent configurations for the ASM on the graph \(\left(G,s\right)\). The study of the recurrent configurations has been of central importance in ASM research (see e.g. [23] and references therein). Here we recall a classical result known as the _burning algorithm_ due to Dhar [10, Section 6.1], which provides a simple algorithmic process to check if a given configuration is recurrent or not.
Figure 3. Two examples of NATs. Leaf dots are represented in blue, and internal dots in black. The NAT on the left is complete, while the one on the right is not (the red dot has only one child).
**Theorem 2.4** (Dhar's burning criterion).: _Let \(\left(G,s\right)\) be a sandpile graph, and \(c\in\mathrm{Stable}_{s}\left(G\right)\) a stable configuration. Then \(c\) is recurrent if, and only if,_
\[\mathrm{Stab}\left(c+\sum_{v\sim s}\alpha^{v}\right)=c,\]
_where the sum is over all neighbours \(v\) of the sink \(s\) in \(G\). Moreover, if \(c\) is recurrent, then in this stabilisation each non-sink vertex of \(\tilde{V}(G)\) topples exactly once._
In words, a configuration \(c\) is recurrent if, and only if, the following assertion holds. Starting from the configuration \(c\), add a grain to each vertex \(v\) adjacent to the sink \(s\) (one can think of this as "toppling the sink"). Then stabilising the resulting configuration yields the original configuration \(c\).
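The criterion is easy to implement; the following is a small illustrative sketch (ours), representing a graph as an adjacency dictionary and a configuration as a dictionary over the non-sink vertices:

```python
def stabilise(G, s, config):
    """Topple unstable non-sink vertices until no vertex v has config[v] >= deg(v).
    (A straightforward, unoptimised sketch.)"""
    c = dict(config)
    unstable = [v for v in c if c[v] >= len(G[v])]
    while unstable:
        v = unstable.pop()
        c[v] -= len(G[v])
        for w in G[v]:
            if w != s:
                c[w] += 1
        unstable = [u for u in c if c[u] >= len(G[u])]
    return c

def is_recurrent(G, s, config):
    """Dhar's criterion: add one grain to each neighbour of the sink and check
    that stabilisation returns the original configuration."""
    c = dict(config)
    for w in G[s]:
        c[w] += 1
    return stabilise(G, s, c) == dict(config)

# On the 3-cycle with sink 0, the configuration (1, 1) is recurrent but (0, 0) is not.
C3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_recurrent(C3, 0, {1: 1, 2: 1}), is_recurrent(C3, 0, {1: 0, 2: 0}))
```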
### Minimal recurrent configurations
There is a natural partial order on the set \(\mathrm{Rec}_{s}\left(G\right)\) of recurrent configurations for the ASM on \(\left(G,s\right)\). For two configurations \(c,c^{\prime}\in\mathrm{Rec}_{s}\left(G\right)\), we define \(c\preceq c^{\prime}\) if, and only if, \(c_{v}\leq c_{v}^{\prime}\) for all \(v\in\tilde{V}(G)\). A _minimal recurrent configuration_ is a recurrent configuration which is minimal for this partial order. In words, a minimal recurrent configuration is a recurrent configuration where the removal of one grain of sand from any vertex would cause the configuration to no longer be recurrent. We let \(\mathrm{MinRec}_{s}\left(G\right)\) denote the set of minimal recurrent configurations for the ASM on \(\left(G,s\right)\).
Minimal recurrent configurations can also be characterised in terms of their total number of grains. For a recurrent configuration \(c\in\mathrm{Rec}_{s}\left(G\right)\), we define the _level_ of \(c\) by
\[\mathrm{level}_{s}\left(c\right):=\sum_{v\in\tilde{V}(G)}c_{v}+\deg_{s}-|E(G)|. \tag{2}\]
It is well-known (see e.g. [19, Theorem 3.5]) that \(\mathrm{level}_{s}\left(c\right)\geq 0\) for any recurrent configuration \(c\). Moreover, we have the following (see [22, Proposition 2.11] for a proof, although this fact seems to be implicit in various sandpile-related works).
**Proposition 2.5**.: _Let \(c\in\mathrm{Rec}_{s}\left(G\right)\). Then \(c\) is minimal recurrent if, and only if, \(\mathrm{level}_{s}\left(c\right)=0\)._
Another important fact about minimal recurrent configurations is that their number does not depend on the choice of sink vertex. Indeed, it was shown in [19] that the _level polynomial_ of a graph, given by
\[\mathrm{Level}_{s,G}(x):=\sum_{c\in\mathrm{Rec}_{s}\left(G\right)}x^{\mathrm{ level}_{s}\left(c\right)}, \tag{3}\]
is given by a specification of the Tutte polynomial, and therefore does not depend on the choice of sink \(s\). In particular, this holds for the evaluation \(\mathrm{Level}_{s,G}(0)\) which is equal to the number of minimal recurrent configurations on \(\left(G,s\right)\) by Proposition 2.5. We therefore simply let \(\mathrm{minrec}\left(G\right):=\left|\mathrm{MinRec}_{s}\left(G\right)\right|\) denote the number of minimal recurrent configurations for the ASM on \(G\), and this does not depend on the choice of sink \(s\). This will be useful at various points in this paper, since it allows us to choose the sink according to what is most appropriate in any situation.
The following bijective characterisation of minimal recurrent configurations was shown in [21, Corollary 3].
**Proposition 2.6**.: _Let \(G\) be a graph, and \(s\in V(G)\) a vertex of \(G\). Then the map \(\mathcal{O}\mapsto\left(\mathrm{in}_{v}^{\mathcal{O}}\right)_{v\in\tilde{V} \left(G\right)}\) is a bijection from the set \(\mathrm{AcycOrient}_{s}\left(G\right)\) of \(s\)-rooted acyclic orientations of the graph \(G\) to the set \(\mathrm{MinRec}_{s}\left(G\right)\) of minimal recurrent configurations on the sandpile graph \(\left(G,s\right)\)._
We end this section with some useful structural results. The first shows that "pruning" a graph of its tree branches does not affect its number of minimal recurrent configurations. Given a graph \(G\), we define the _pruned graph_\(\operatorname{Prune}\left(G\right)\) through the following algorithmic procedure.
1. Initialise \(H=G\).
2. If \(H\) contains no vertex of degree \(1\), move to Step 3. Otherwise, choose a vertex \(v\) with degree \(1\), set \(H=H\setminus\left\{v\right\}\), and repeat Step 2.
3. Output \(H:=\operatorname{Prune}\left(G\right)\).
Another way of viewing the pruning operation is that it removes "tree branches" that were attached at some vertices in the graph \(G\). An example is shown on Figure 4 below. Note that the output \(\operatorname{Prune}\left(G\right)\) only depends on the choice of vertex \(v\) in Step (2) if at some stage of the process we reach a graph \(H\) which is reduced to a single edge, which occurs in fact exactly when the original graph \(G\) is a tree. In that case, removing either of the two vertices will yield a graph consisting of a single vertex, and we simply set \(\operatorname{Prune}\left(G\right)\) to be any arbitrary vertex of the original graph.
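The pruning procedure translates directly into code, as in the following sketch (ours, with the graph given as an adjacency dictionary of sets):

```python
def prune(G):
    """Repeatedly delete vertices of degree 1; returns the pruned adjacency dict."""
    H = {v: set(ns) for v, ns in G.items()}
    leaves = [v for v in H if len(H[v]) == 1]
    while leaves and len(H) > 1:
        v = leaves.pop()
        if v not in H or len(H[v]) != 1:
            continue
        (w,) = H.pop(v)          # the unique neighbour of v
        H[w].discard(v)
        if len(H[w]) == 1:
            leaves.append(w)
    return H

# A 3-cycle {0,1,2} with a pendant path 2-3-4 prunes back to the 3-cycle.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(sorted(prune(G)))   # [0, 1, 2]
```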
**Lemma 2.7**.: _For any graph \(G\), we have \(\operatorname{minrec}\left(G\right)=\operatorname{minrec}\left(\operatorname{ Prune}\left(G\right)\right)\)._
Proof.: By induction, it suffices to show that if a vertex \(v\in G\) has degree \(1\), then \(\operatorname{minrec}\left(G\setminus\left\{v\right\}\right)=\operatorname{minrec}\left(G\right)\). Let \(\mathcal{O}\) be a \(v\)-rooted acyclic orientation of \(G\). Let \(v^{\prime}\) be the sole neighbour of \(v\) in \(G\), and define \(\mathcal{O}^{\prime}\) to be the orientation of \(G^{\prime}:=G\setminus\left\{v\right\}\) obtained by deleting the edge \(v\xleftarrow{\mathcal{O}}v^{\prime}\) from \(\mathcal{O}\). By construction, \(\mathcal{O}^{\prime}\) is acyclic, so must have at least one target. But the only possible target is \(v^{\prime}\), since \(v\) is the only target of \(\mathcal{O}\). Therefore \(\mathcal{O}^{\prime}\) is a \(v^{\prime}\)-rooted acyclic orientation of \(G^{\prime}\), and the map \(\mathcal{O}\mapsto\mathcal{O}^{\prime}\) is clearly a bijection from \(\operatorname{AcycOrient}_{v}\left(G\right)\) to \(\operatorname{AcycOrient}_{v^{\prime}}\left(G^{\prime}\right)\). The result then follows from Proposition 2.6.
Our next lemma concerns graphs whose pruned graph is a cycle. For \(k\geq 3\), we say that a graph \(G\) is a _decorated \(k\)-cycle_ if its pruned graph \(\operatorname{Prune}\left(G\right)\) is the \(k\)-cycle \(C_{k}\). Equivalently, \(G\) contains a \(k\)-cycle, and deleting any edge in that \(k\)-cycle makes \(G\) a tree.
**Lemma 2.8**.: _If \(G\) is a decorated \(k\)-cycle for some \(k\geq 3\), then we have \(\operatorname{minrec}\left(G\right)=k-1\)._
Proof.: If \(G\) is a decorated \(k\)-cycle, then \(\operatorname{Prune}\left(G\right)\) is the \(k\)-cycle \(C_{k}\) by definition. Lemma 2.7 then yields that \(\operatorname{minrec}\left(G\right)=\operatorname{minrec}\left(C_{k}\right)\). The fact that \(\operatorname{minrec}\left(C_{k}\right)=k-1\) is well known, see e.g. [20]. Indeed, in this case the minimal recurrent configurations are given by removing one grain from the all-\(1\) configuration \((1,\cdots,1)\) at any non-sink vertex (there are \((k-1)\) such vertices).
Figure 4. Illustrating the pruning operation: a graph \(G\) on the left, and its pruned graph \(\operatorname{Prune}\left(G\right)\) on the right. The tree branches of \(G\) (removed in the pruning) are represented in blue.
Our next two lemmas show that when we "grow" a graph \(G\) (in a certain sense) we increase its number of minimal recurrent configurations.
**Lemma 2.9**.: _Let \(G\) be a graph, with a pair of vertices \(v,w\) such that \((v,w)\) is not an edge of \(G\). Let \(G^{\prime}:=G\cup\{(v,w)\}\) be the graph \(G\) to which we add the edge \((v,w)\). Then \(\operatorname{minrec}\left(G\right)<\operatorname{minrec}\left(G^{\prime}\right)\)._
If \(G^{\prime}\) is obtained from \(G\) as above, we say that \(G^{\prime}\) is an _edge addition_ of \(G\).
Proof.: Let \(G\), \((v,w)\), and \(G^{\prime}\) be as in the statement of the lemma. Define \(N_{G}(w)\) to be the set of neighbours of \(w\) in \(G\) (also the set of neighbours of \(w\) in \(G^{\prime}\) other than \(v\)). For any \(\mathcal{O}\in\operatorname{AcycOrient}_{v}\left(G\right)\), we can construct \(\mathcal{O}^{\prime}\in\operatorname{AcycOrient}_{v}\left(G^{\prime}\right)\) by setting \(v\xleftarrow{\mathcal{O}^{\prime}}w\) and leaving directions of other edges unchanged from \(\mathcal{O}\). Clearly the map \(\mathcal{O}\mapsto\mathcal{O}^{\prime}\) is injective. Moreover, for any \(\mathcal{O}\), since \(w\) is not a target of \(\mathcal{O}\), there must be \(w^{\prime}\in N_{G}(w)\) such that \(w\xrightarrow{\mathcal{O}}w^{\prime}\), i.e. \(w\xrightarrow{\mathcal{O}^{\prime}}w^{\prime}\). It therefore suffices to show that we can construct \(\mathcal{O}^{\prime}\in\operatorname{AcycOrient}_{v}\left(G^{\prime}\right)\) such that for all \(w^{\prime}\in N_{G}(w)\), we have \(w\xleftarrow{\mathcal{O}^{\prime}}w^{\prime}\).
For this, consider the graph \(H:=G^{\prime}\setminus\{v\}\) to be the graph \(G\) with \(v\) removed. The graph \(H\) has \(k\) connected components \(H_{1},\cdots,H_{k}\) for some \(k\geq 1\). By construction each of these connected components must have at least one vertex adjacent to \(v\) in \(G^{\prime}\). Assume without loss of generality that \(w\in V(H_{1})\). Note that this implies that all neighbours \(w^{\prime}\in N_{G}(w)\) of \(w\) in \(G\) are also in the connected component \(H_{1}\). Define \(w_{1}:=w\), and for each \(j\in\{2,\cdots,k\}\) choose some vertex \(w_{j}\in V(H_{j})\) which is adjacent to \(v\) in \(G\). Now for \(j\in\{1,\cdots,k\}\), let \(\mathcal{O}^{(j)}\) be a \(w_{j}\)-rooted acyclic orientation of \(H_{j}\). In particular, we have \(w\xleftarrow{\mathcal{O}^{(1)}}w^{\prime}\) for all \(w^{\prime}\in N_{G}(w)\). Finally, define an orientation \(\mathcal{O}^{\prime}\) of \(G^{\prime}\) by leaving the directions of any edge \((u,u^{\prime})\) in \(\mathcal{O}^{(j)}\) (for all \(j\)) unchanged, and setting \(v\xleftarrow{\mathcal{O}^{\prime}}v^{\prime}\) for all neighbours \(v^{\prime}\) of \(v\) in \(G^{\prime}\). By construction, \(\mathcal{O}^{\prime}\) is a \(v\)-rooted acyclic orientation of \(G^{\prime}\), with \(w\xleftarrow{\mathcal{O}^{\prime}}w^{\prime}\) for all \(w^{\prime}\in N_{G}(w)\), as desired.
**Lemma 2.10**.: _Let \(G\) be a graph, and \(e=(v,w)\) an edge of \(G\). Let \(G^{\prime}\) be the graph \(G\) where we have replaced \(e\) with two edges \((v,u)\) and \((u,w)\) for a new vertex \(u\notin G\). Then \(\operatorname{minrec}\left(G\right)\leq\operatorname{minrec}\left(G^{\prime}\right)\)._
If \(G^{\prime}\) is obtained from \(G\) as above, we say that \(G^{\prime}\) is an _edge duplication_ of \(G\). Note that in this case the inequality is not always strict. For example, duplicating an edge that belongs to a "tree branch" (i.e. a branch that would be removed in the pruning of \(G\), such as the blue edges on the left of Figure 4) does not change the number of minimal recurrent configurations.
Proof.: Let \(G\) and \(G^{\prime}\) be as in the statement of the lemma. Fix some vertex \(s\) in \(G\). For any \(s\)-rooted acyclic orientation \(\mathcal{O}\) of \(G\), we define an \(s\)-rooted acyclic orientation \(\mathcal{O}^{\prime}\) of \(G^{\prime}\) by replacing \(v\xrightarrow{\mathcal{O}}w\), resp. \(v\xleftarrow{\mathcal{O}}w\), with \(v\xrightarrow{\mathcal{O}^{\prime}}u\) and \(u\xrightarrow{\mathcal{O}^{\prime}}w\), resp. \(v\xleftarrow{\mathcal{O}^{\prime}}u\) and \(u\xleftarrow{\mathcal{O}^{\prime}}w\). Then \(\mathcal{O}\mapsto\mathcal{O}^{\prime}\) is clearly an injection \(\operatorname{AcycOrient}_{s}\left(G\right)\rightarrow\operatorname{AcycOrient}_{s} \left(G^{\prime}\right)\), and by Proposition 2.6, the lemma follows.
Our final structural result is more-or-less part of the ASM "folklore", but we have been unable to find it stated as such anywhere in the sandpile literature. For completeness, we therefore state and prove it here.
**Lemma 2.11**.: _Let \(G\) be a graph. Then we have \(\operatorname{minrec}\left(G\right)=1\) if, and only if, \(G\) is a tree._
Proof.: If \(G\) is a tree, and \(s\) a vertex of \(G\), then we can reduce \(G\) to the graph \(H\) consisting of a single edge \((s,t)\) in the pruning process. It is straightforward to see that \(\operatorname{Stable}_{s}\left(H\right)=\operatorname{Rec}_{s}\left(H\right)= \operatorname{MinRec}_{s}\left(H\right)=\{(0_{t})\}\) (the configuration consisting of no grains at the non-sink vertex \(t\)) in this case. Conversely, if \(G\) contains a cycle of length \(k\geq 3\), we can construct \(G\) from the \(3\)-cycle through a succession of edge additions and duplications. From Lemmas 2.9 and 2.10, we get that \(\operatorname{minrec}\left(G\right)\geq\operatorname{minrec}\left(C_{3}\right)=3 -1=2\) (applying Lemma 2.8), as desired.
### Connecting the ASM to CNATs
We are now ready to state the connection between CNATs and the ASM. This will allow us to prove a number of conjectures from [8] in Section 4.
**Theorem 2.12**.: _Let \(\pi\in S_{n}\) be an \(n\)-permutation for some \(n\geq 1\), \(G_{\pi}\) the corresponding permutation graph, and \(s\in[n]\) a fixed sink vertex of \(G_{\pi}\). Then the following three sets are equinumerous._
1. _The set_ \(\mathrm{CNAT}\left(\pi\right)\) _of CNATs whose associated permutation is_ \(\pi\)_._
2. _The set_ \(\mathrm{MinRec}_{s}\left(G_{\pi}\right)\) _of minimal recurrent configurations for the ASM on_ \(\left(G_{\pi},s\right)\)_._
3. _The set_ \(\mathrm{AcycOrient}_{s}\left(G_{\pi}\right)\) _of_ \(s\)_-rooted acyclic orientations of_ \(G_{\pi}\)_._
Proof.: The bijection between \(\mathrm{MinRec}_{s}\left(G_{\pi}\right)\) and \(\mathrm{AcycOrient}_{s}\left(G_{\pi}\right)\) is given in Proposition 2.6, and holds for general graphs \(\left(G,s\right)\), not just permutation graphs. That \(\mathrm{CNAT}\left(\pi\right)\) and \(\mathrm{MinRec}_{s}\left(G_{\pi}\right)\) are equinumerous is stated in [13, Proposition 31, Corollary 33]. Note that it remains an open problem to find a direct bijective proof of this fact.
We end the preliminary section by stating another structural lemma, linking the number of CNATs of a permutation \(\pi\) to the number of CNATs of any pattern occurring in \(\pi\).
**Lemma 2.13**.: _Suppose that \(\pi,\tau\) are permutations such that \(\pi\) contains the pattern \(\tau\). Then we have \(\mathrm{cnat}\left(\tau\right)\leq\mathrm{cnat}\left(\pi\right)\)._
Proof.: Since \(\pi\) contains the pattern \(\tau\), the permutation graph \(G_{\pi}\) contains an induced subgraph \(H\) which is isomorphic to \(G_{\tau}\) by the remarks at the end of Section 2.2. Note that we have \(\mathrm{minrec}\left(G_{\tau}\right)=\mathrm{minrec}\left(H\right)\) in this case. The graph \(G_{\pi}\) can then be constructed as follows. Fix some spanning tree \(G^{\prime}\) of \(G_{\pi}\). Let \(G^{0}\) be the graph with vertex set \([n]=V\left(G_{\pi}\right)\) and edge set \(E\left(G^{0}\right):=E(G^{\prime})\cup E(H)\). By construction, since \(G^{\prime}\) is a tree, we have \(\mathrm{Prune}\left(G^{0}\right)=\mathrm{Prune}\left(H\right)\), so that \(\mathrm{minrec}\left(G^{0}\right)=\mathrm{minrec}\left(H\right)\) by Lemma 2.7. Moreover, we have \(E\left(G^{0}\right)\subseteq E\left(G_{\pi}\right)\), so that the graph \(G_{\pi}\) can be obtained from \(G^{0}\) through a (possibly empty) sequence of edge additions. Applying Lemma 2.9 then yields that \(\mathrm{minrec}\left(G^{0}\right)\leq\mathrm{minrec}\left(G_{\pi}\right)\). Combined with the above, we get \(\mathrm{minrec}\left(G_{\tau}\right)=\mathrm{minrec}\left(H\right)\leq\mathrm{minrec}\left(G_{\pi}\right)\), and the result follows from Theorem 2.12.
## 3. Upper-diagonal CNATs and permutations
In [8], the authors showed that for the decreasing permutation \(\mathrm{dec}_{n}=n(n-1)\cdots 1\), we have \(\mathrm{cnat}\left(\mathrm{dec}_{n}\right)=(n-1)!\). This is implicit through the results of Section 2. Indeed, in this case the graph \(G_{\pi}\) is the complete graph \(K_{n}\), and it is known that minimal recurrent configurations for the complete graph \(K_{n}\) are exactly permutations of \(\{0,\cdots,n-2\}\) (see e.g. [22] or [9]).
The proof of this result in [8] is via so-called tiered trees, with the authors exhibiting a bijection between upper-diagonal fully tiered trees on \(n\) vertices and permutations of \([n-1]\). In this section, we give a new proof of this result by exhibiting a direct bijection between the set \(\mathrm{CNAT}\left(\mathrm{dec}_{n}\right)\) and the set of permutations of \([n-1]\). Our bijection also maps certain statistics of CNATs to statistics on the corresponding permutation. Following notation from [8], we call a CNAT _upper-diagonal_ if its associated permutation is a decreasing permutation.
**Definition 3.1**.: A _labelled CNAT_ is a pair \(\mathcal{T}=(T,\lambda)\) where \(T\) is an upper-diagonal \(\mathrm{CNAT}\) of size \((n+1)\) for some \(n\geq 0\) and \(\lambda=\{\lambda_{1},\lambda_{2},\cdots,\lambda_{n}\}\subset\mathbb{N}\) is a set of \(n\) distinct natural numbers. We call \(\lambda\) the _labels_ of \(\mathcal{T}\), and say that \(\mathcal{T}\) is a \(\lambda\)-labelled \(\mathrm{CNAT}\).
In the above, we think of \(\lambda\) as labelling the columns of the \(\mathrm{CNAT}\)\(T\) other than the right-most column, as in Figures 5 and 6 below (the labels are written on top of the columns). With a slight abuse of notation, we assume that \(\lambda_{1}<\lambda_{2}<\cdots<\lambda_{n}\), and use \(\lambda\) to denote both the set of labels and the ordered tuple \((\lambda_{1},\cdots,\lambda_{n})\). Note that the number of labels is one less than the size of the underlying \(\mathrm{CNAT}\).
Now suppose that \(\mathcal{T}=(T,\lambda)\) is a \(\lambda\)-labelled \(\mathrm{CNAT}\), such that the underlying \(\mathrm{CNAT}\)\(T\) has at least two internal dots in the top row. Let \(\lambda_{r}\) be the label of the right-most of these, i.e. the right-most internal dot in the top row of \(T\) is in the \(r\)-th column. The _top-row decomposition_ of \(\mathcal{T}\) is the pair \(\left(\mathcal{T}^{\ell},\mathcal{T}^{r}\right)\) defined as follows:
* \(\mathcal{T}^{r}:=(T^{r},\lambda^{r})\), where \(T^{r}\) is the sub-tree of \(T\) whose root is the dot in cell \((r,1)\) (the right-most internal dot in the top row of \(T\)). In other words, \(T^{r}\) is obtained from \(T\) by keeping exactly the rows and columns containing dots whose path to the root in \(T\) goes through the dot in cell \((r,1)\). Then \(\lambda^{r}\) is the set of labels whose columns have at least one dot in \(T^{r}\).
* \(\mathcal{T}^{\ell}:=(T^{\ell},\lambda^{\ell})\), where \(T^{\ell}\) is obtained from \(T\) by replacing the internal dot in cell \((r,1)\) by a leaf dot and moving it to the right-most column, and \(\lambda^{\ell}\) is again the set of labels whose columns have at least one dot in \(T^{\ell}\) (excluding the new leaf dot).
We will sometimes refer to \(\mathcal{T}^{r}\), resp. \(\mathcal{T}^{\ell}\), as the right, resp. left, subtree of \(\mathcal{T}\). The top-row decomposition of a labelled \(\mathrm{CNAT}\) is illustrated in Figure 5.
We write \(\mathcal{T}=\mathcal{T}^{\ell}\oplus\mathcal{T}^{r}\) for the top-row decomposition of a labelled \(\mathrm{CNAT}\)\(\mathcal{T}\). Note that this partitions the label set \(\lambda\) into two (disjoint) subsets \(\lambda^{\ell}\) and \(\lambda^{r}\). Moreover, the operation is invertible in the following sense. Given two labelled \(\mathrm{CNATs}\)\(\mathcal{T}^{\ell}=(T^{\ell},\lambda^{\ell})\) and \(\mathcal{T}^{r}=(T^{r},\lambda^{r})\) with disjoint label sets such that the minimum label \(\lambda^{r}_{1}\) of \(\mathcal{T}^{r}\) is strictly greater than the labels of all (internal) top row dots in \(\mathcal{T}^{\ell}\), there exists a unique labelled \(\mathrm{CNAT}\)\(\mathcal{T}\) such that \(\mathcal{T}=\mathcal{T}^{\ell}\oplus\mathcal{T}^{r}\). Here \(\mathcal{T}\) is obtained by "gluing" the tree \(T^{r}\) onto the right-most leaf of \(T^{\ell}\) (in the top row), and shifting the columns of the glued subtree in such a way that the labelling of the tree obtained remains in increasing order.
Now suppose that we have a labelled \(\mathrm{CNAT}\)\(\mathcal{T}=(T,\lambda)\) such that the only internal dot in the top row of \(T\) is the root of the tree. Such a tree must necessarily have a dot in the second row of the first column, which we call \(\rho^{\prime}\). We define the _top-row deletion_ of \(\mathcal{T}\) as the labelled tree \(\mathcal{T}^{\prime}:=(T^{\prime},\lambda^{\prime})\) where \(T^{\prime}\) is the \(\mathrm{CNAT}\)\(T\) with top row deleted (i.e. the subtree rooted at \(\rho^{\prime}\)), and \(\lambda^{\prime}:=\lambda\setminus\{\lambda_{1}\}\) is the set of labels \(\lambda\) with its minimum element removed. This operation is shown on Figure 6, with the labelled \(\mathrm{CNAT}\) on the left, and its top-row deleted labelled \(\mathrm{CNAT}\) on the right.
This operation is not completely invertible, since we lose the information of the minimum label of \(\mathcal{T}\). However, this is the only information that is lost. That is, given a labelled \(\mathrm{CNAT}\)\(\mathcal{T}^{\prime}=(T^{\prime},\lambda^{\prime})\), and a label \(k<\min\lambda^{\prime}\), there exists a unique labelled \(\mathrm{CNAT}\)\(\mathcal{T}=(T,\lambda:=\lambda^{\prime}\cup\{k\})\) whose top-row deletion is \(\mathcal{T}^{\prime}\).
Figure 5. Illustrating the top-row decomposition of a labelled \(\mathrm{CNAT}\).
We are now equipped to define the desired bijection from upper-diagonal CNATs to permutations. In fact, given any label set \(\lambda\), we define a bijection from \(\lambda\)-labelled CNATs to permutations of \(\lambda\). Since upper-diagonal CNATs of size \(n\) can be viewed as \([n-1]\)-labelled CNATs, this gives the desired bijection.
Fix a label set \(\lambda=(\lambda_{1},\cdots,\lambda_{n})\), and a \(\lambda\)-labelled CNAT \(\mathcal{T}=(T,\lambda)\). We define a permutation \(\Psi(\mathcal{T})\) of the label set \(\lambda\) recursively as follows.
* If \(T\) is the CNAT reduced to a single root dot, i.e. is the CNAT of size \(1\) (in which case \(\lambda=\emptyset\)), we define \(\Psi(\mathcal{T})\) to be the empty word (base case).
* If the only internal dot in the top row of \(T\) is the root, we define recursively \(\Psi(\mathcal{T}):=\lambda_{1}\cdot\Psi(\mathcal{T}^{\prime})\) where \(\mathcal{T}^{\prime}\) is the top-row deletion of \(\mathcal{T}\) (here the \(\cdot\) operation denotes concatenation).
* Otherwise, if there is more than one internal dot in the top row of \(T\), we define recursively \(\Psi(\mathcal{T}):=\Psi\left(\mathcal{T}^{r}\right)\cdot\Psi\left(\mathcal{T}^{\ell}\right)\), where \(\mathcal{T}=\mathcal{T}^{\ell}\oplus\mathcal{T}^{r}\) is the top-row decomposition of \(\mathcal{T}\).
**Example 3.2**.: Consider the labelled CNAT \(\mathcal{T}\) on the left of Figure 5. We construct the associated permutation \(\Psi(\mathcal{T})\) of the label set \(\lambda=(2,4,5,7,9,10,13,14,16)\). The first step is the top-row decomposition of \(\mathcal{T}\) as in the figure. We therefore have \(\Psi(\mathcal{T})=\Psi\left(\mathcal{T}^{r}\right)\cdot\Psi\left(\mathcal{T}^{\ell}\right)\), where \(\mathcal{T}^{r}\) is the CNAT with label set \((9,10,14)\) and \(\mathcal{T}^{\ell}\) the CNAT with label set \((2,4,5,7,13,16)\) in the right-hand side of the equality in Figure 5. Here \(\mathcal{T}^{r}\) has a unique internal dot in the top row, so \(\Psi\left(\mathcal{T}^{r}\right)=9\cdot\Psi\big{(}\left(\mathcal{T}^{r}\right)^{\prime}\big{)}\), where \(\left(\mathcal{T}^{r}\right)^{\prime}\) is the top-row deletion of \(\mathcal{T}^{r}\). This is the CNAT with \(2\) internal dots in the top row and label set \((10,14)\). Applying top-row decomposition followed by top-row deletion and the base case to the two subtrees gives \(\Psi\big{(}\left(\mathcal{T}^{r}\right)^{\prime}\big{)}=14\cdot 10\), so that \(\Psi\left(\mathcal{T}^{r}\right)=9\cdot 14\cdot 10\).
We then compute \(\Psi\left(\mathcal{T}^{\ell}\right)\). Again, we start by applying top-row decomposition. The right subtree \(\left(\mathcal{T}^{\ell}\right)^{r}\) in this decomposition has all its internal dots in the left-most column. One can easily check through successive applications of top-row deletion that such a labelled CNAT maps to the increasing permutation of its label set, so that \(\Psi\left(\left(\mathcal{T}^{\ell}\right)^{r}\right)=5\cdot 7\cdot 13\). Finally, applying top-row deletion followed by top-row decomposition on the left sub-tree \(\left(\mathcal{T}^{\ell}\right)^{\ell}\) gives \(\Psi\left(\left(\mathcal{T}^{\ell}\right)^{\ell}\right)=2\cdot 16\cdot 4\). Bringing it all together, we get \(\Psi\left(\mathcal{T}\right)=9\cdot 14\cdot 10\cdot 5\cdot 7\cdot 13\cdot 2\cdot 16\cdot 4\).
Note that in Example 3.2 above, the left-to-right minima of the permutation \(\pi:=\Psi(\mathcal{T})\) are exactly the labels of the top-row dots of \(\mathcal{T}\). Moreover, \(\pi\) has \(4=5-1\) descents, and there are \(5\) empty rows in the underlying CNAT \(T\) (the rows whose leaves are in columns labelled \(2\), \(4\), \(5\), \(9\), and \(10\)). It turns out that both of these observations are true in general, which is the main result of this section.
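Both observations are easy to check by computer on this example. The following minimal Python sketch (the helper names are ours, not from [8]) verifies them for the word \(9\cdot 14\cdot 10\cdot 5\cdot 7\cdot 13\cdot 2\cdot 16\cdot 4\) of Example 3.2.

```python
def left_to_right_minima(word):
    """Letters of `word` that are strictly smaller than every letter before them."""
    minima, current = [], float("inf")
    for x in word:
        if x < current:
            minima.append(x)
            current = x
    return minima

def num_descents(word):
    """Number of positions i with word[i] > word[i+1]."""
    return sum(1 for a, b in zip(word, word[1:]) if a > b)

psi = [9, 14, 10, 5, 7, 13, 2, 16, 4]    # Psi(T) from Example 3.2
print(left_to_right_minima(psi))         # [9, 5, 2]: labels of top-row internal dots, per the observation above
print(num_descents(psi))                 # 4 descents, i.e. one less than the 5 empty rows
```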
**Theorem 3.3**.: _For any label set \(\lambda\), the map \(\Psi:\mathcal{T}\mapsto\Psi(\mathcal{T})\) is a bijection from the set of \(\lambda\)-labelled CNATs to the set of permutations of \(\lambda\). Moreover, this bijection maps column labels of internal top row dots in \(\mathcal{T}\) to left-to-right minima in the permutation \(\Psi(\mathcal{T})\). Finally, the number of empty rows in the underlying CNAT \(T\) is equal to one plus the number of descents of \(\Psi(\mathcal{T})\)._
Figure 6. Illustrating the top-row deletion of a labelled CNAT.
In particular, if the label set is \(\lambda=[n-1]\) for some \(n\geq 1\), this gives us a bijection between upper-diagonal CNATs of size \(n\) and \((n-1)\)-permutations, providing a direct proof of Theorem 4.12 in [8], as announced.
Proof.: Since top-row decomposition partitions the label set \(\lambda\) into two (smaller) label sets, and top-row deletion removes the minimum label \(\lambda_{1}\), we have that \(\Psi(\mathcal{T})\) is a permutation of \(\lambda\) by a straightforward induction on the size of \(\lambda\) using the recursive definition.
Let us now show that the left-to-right minima of the permutation \(\Psi(\mathcal{T})\) are exactly the column labels of the internal top-row dots of \(\mathcal{T}\). Again, we proceed by induction on \(n\), the size of the label set \(\lambda\). For \(n=0\) the result is trivial. Now fix some \(n>0\), and suppose we have shown this for all label sets \(\lambda^{\prime}\) of size at most \(n-1\). Fix a label set \(\lambda=(\lambda_{1},\cdots,\lambda_{n})\) of size \(n\), and a \(\lambda\)-labelled CNAT \(\mathcal{T}\).
Firstly, consider the case where \(\mathcal{T}\) has a unique internal dot in the top row, which is necessarily the root, in column labelled \(\lambda_{1}=\min\lambda\). By construction \(\lambda_{1}\) is the first element in the permutation \(\Psi(\mathcal{T})\), and is therefore its unique left-to-right minimum, as desired.
It thus remains to consider the case where \(\mathcal{T}\) has at least two internal dots in the top row. In this case write the top-row decomposition \(\mathcal{T}=\mathcal{T}^{\ell}\oplus\mathcal{T}^{r}\) of \(\mathcal{T}\). By construction the labels of top-row internal dots of \(\mathcal{T}\) are exactly those of top-row dots of \(\mathcal{T}^{\ell}\), plus the label \(\lambda_{1}^{r}\) of the right-most dot in that row (the root of \(T^{r}\)). By the induction hypothesis, \(\Psi(\mathcal{T}^{\ell})\) is a permutation of \(\lambda^{\ell}\) whose left-to-right minima are the column labels of the top-row dots of \(\mathcal{T}^{\ell}\). In particular, this implies that the first letter in \(\Psi(\mathcal{T}^{\ell})\) is strictly less than \(\lambda_{1}^{r}\). Moreover, by the above argument \(\Psi(\mathcal{T}^{r})\) is a permutation of \(\lambda^{r}\) starting with \(\lambda_{1}^{r}\) (the minimal element of \(\lambda^{r}\)). This implies that the left-to-right minima in the permutation \(\Psi(\mathcal{T})=\Psi(\mathcal{T}^{r})\!\cdot\!\Psi(\mathcal{T}^{\ell})\) are exactly \(\lambda_{1}^{r}\) and the left-to-right minima of \(\Psi(\mathcal{T}^{\ell})\), and the result follows by applying the induction hypothesis to \(\Psi(\mathcal{T}^{\ell})\).
To show that \(\Psi\) is a bijection, we construct its inverse recursively, and show it maps left-to-right minima to top-row internal dots in the labelled CNAT. Let \(\pi=\pi_{1}\cdots\pi_{n}\) be a permutation of a label set \(\lambda=(\lambda_{1},\cdots,\lambda_{n})\). We define a \(\lambda\)-labelled CNAT \(\mathcal{T}:=\Phi(\pi)\) as follows.
* If \(\lambda=\emptyset\) (\(\pi\) is the empty word in this case), we set \(T\) to be the CNAT consisting of a unique root dot, and \(\mathcal{T}=(T,\emptyset)\) to be the labelled CNAT (base case).
* If \(\pi_{1}=\lambda_{1}\) (the first element of \(\pi\) is the minimum label in \(\lambda\)), then we first calculate recursively \(\mathcal{T}^{\prime}=\Phi(\pi^{\prime}:=\pi_{2}\cdots\pi_{n})\), and then let \(\mathcal{T}\) be the unique \(\lambda\)-labelled CNAT whose top-row deletion is \(\mathcal{T}^{\prime}\) (see preceding remarks). By construction the CNAT \(\mathcal{T}\) has a unique internal dot in its top row, in the column labelled \(\lambda_{1}\), which gives the desired result.
* Otherwise, define \(k=\min\{i\geq 2;\pi_{i}<\pi_{1}\}\), and decompose \(\pi=(\pi_{1}\cdots\pi_{k-1})(\pi_{k}\cdots\pi_{n}):=\pi^{r}\pi^{\ell}\). In other words, the first element of \(\pi^{\ell}\) is the second left-to-right minimum in \(\pi\). Then calculate \(\mathcal{T}^{\ell}=\Phi(\pi^{\ell})\) and \(\mathcal{T}^{r}=\Phi(\pi^{r})\) recursively. By construction, the permutation \(\pi^{r}\) begins with its minimum label, so the top-row of \(\mathcal{T}^{r}\) is empty except for the leaf and root dot from the above case. Moreover, the minimum label of \(\mathcal{T}^{r}\) is \(\pi_{1}\), which by definition is greater than all left-to-right minima in \(\pi^{\ell}\). By the induction hypothesis, this means exactly that the minimum label of \(\mathcal{T}^{r}\) is greater than the labels of all the top-row internal dots in \(\mathcal{T}^{\ell}\). Preceding remarks then say that there is a unique labelled CNAT \(\mathcal{T}\) such that \(\mathcal{T}=\mathcal{T}^{\ell}\oplus\mathcal{T}^{r}\), and we set \(\Phi(\pi)=\mathcal{T}\). By construction, the column labels of the top-row dots of \(\mathcal{T}\) are those of \(\mathcal{T}^{\ell}\) plus that of the root of \(\mathcal{T}^{r}\), which by the induction hypothesis are exactly the left-to-right minima of \(\pi\), as desired.
That \(\Psi\) and \(\Phi\) are inverses of each other is straightforward from the construction, which proves the first part of the theorem.
It remains to show that the number of empty rows in \(\mathcal{T}\) is one plus the number of descents of \(\Psi(\mathcal{T})\). For a labelled CNAT \(\mathcal{T}\), resp. a permutation \(\pi\), let \(\operatorname{EmptyRow}\left(\mathcal{T}\right)\), resp. \(\operatorname{Desc}\left(\pi\right)\), be its number of empty rows, resp. descents. Again, we proceed by induction on the size \(n\) of the label set \(\lambda\). The base case \(n=0\) is trivial. For \(n\geq 1\), suppose we
have shown that \(\operatorname{EmptyRow}\left(\mathcal{T}^{\prime}\right)=1+\operatorname{Desc} \left(\Psi(\mathcal{T}^{\prime})\right)\) for all \(\lambda^{\prime}\) of size at most \(n-1\) and \(\lambda^{\prime}\)-labelled \(\operatorname{CNATs}\)\(\mathcal{T}^{\prime}\).
Fix some label set \(\lambda=\left(\lambda_{1},\cdots,\lambda_{n}\right)\) and a \(\lambda\)-labelled \(\operatorname{CNAT}\) \(\mathcal{T}\).
* If \(\mathcal{T}\) has a unique internal dot in its top-row, let \(\mathcal{T}^{\prime}\) be the top-row deletion of \(\mathcal{T}\). Then \(\operatorname{EmptyRow}\left(\mathcal{T}\right)=\operatorname{EmptyRow} \left(\mathcal{T}^{\prime}\right)=1+\operatorname{Desc}\left(\Psi(\mathcal{T }^{\prime})\right)\) by induction. But by construction \(\Psi(\mathcal{T})=\lambda_{1}\Psi(\mathcal{T}^{\prime})\) with \(\lambda_{1}=\min\lambda\), which immediately implies \(\operatorname{Desc}\left(\Psi(\mathcal{T})\right)=\operatorname{Desc}\left( \Psi(\mathcal{T}^{\prime})\right)\), and the desired result follows.
* Otherwise, write \(\mathcal{T}=\mathcal{T}^{\ell}\oplus\mathcal{T}^{r}\) for the top-row decomposition of \(\mathcal{T}\). By construction we have \(\Psi(\mathcal{T})=\Psi(\mathcal{T}^{r})\cdot\Psi(\mathcal{T}^{\ell})\). Moreover, we know from the previous part of the proof that the first element of \(\Psi(\mathcal{T}^{r})\), which is also its smallest element, is strictly greater than all the left-to-right minima of \(\Psi(\mathcal{T}^{\ell})\), and thus in particular than its first element. This implies that there is a descent between the last element of \(\Psi(\mathcal{T}^{r})\) and the first element of \(\Psi(\mathcal{T}^{\ell})\), so that \(\operatorname{Desc}\left(\Psi(\mathcal{T})\right)=\operatorname{Desc}\left(\Psi(\mathcal{T}^{r})\right)+1+\operatorname{Desc}\left(\Psi(\mathcal{T}^{\ell})\right)\). Applying the induction hypothesis to \(\mathcal{T}^{r}\) and \(\mathcal{T}^{\ell}\), this yields: \[\begin{split}\operatorname{EmptyRow}\left(\mathcal{T}\right)&=\operatorname{EmptyRow}\left(\mathcal{T}^{r}\right)+\operatorname{EmptyRow}\left(\mathcal{T}^{\ell}\right)\\ &=1+\operatorname{Desc}\left(\Psi(\mathcal{T}^{r})\right)+1+\operatorname{Desc}\left(\Psi(\mathcal{T}^{\ell})\right)\\ &=1+\operatorname{Desc}\left(\Psi(\mathcal{T})\right),\end{split}\] as desired. This completes the proof of the theorem.
## 4. Counting \(\operatorname{CNATs}\) according to their associated permutations
In [8] the authors were interested in the numbers \(\operatorname{cnat}\left(\pi\right)\) enumerated in Theorem 2.12. More specifically, they were interested in how many permutations \(\pi\) had the same fixed number of associated \(\operatorname{CNATs}\). For \(n,k\geq 1\), we define \(B(n,k):=\{\pi\in S_{n};\,\operatorname{cnat}\left(\pi\right)=k\}\) to be the set of \(n\)-permutations associated with exactly \(k\)\(\operatorname{CNATs}\), and \(b(n,k):=\left|B(n,k)\right|\) to be the number of such permutations. For brevity we will simply say that a permutation \(\pi\in B(n,k)\)_has_\(k\)\(\operatorname{CNATs}\), and refer to elements of \(B(n,k)\) as permutations _with_ (exactly) \(k\)\(\operatorname{CNATs}\). The main goal of this section is to prove a number of conjectures on the enumeration sequence \(\big{(}b(n,k)\big{)}_{n,k\geq 1}\) from [8].
### Linking permutations with one and two \(\operatorname{CNATs}\)
In this part, we establish a bijection between _marked_ permutations with a single \(\operatorname{CNAT}\), and permutations with \(2\)\(\operatorname{CNATs}\) (Theorem 4.3). We begin by describing the set \(B(n,1)\) of permutations with a single \(\operatorname{CNAT}\). In [8], the authors gave a characterisation of this set in terms of so-called _L-subsets_. We first give an equivalent characterisation in terms of _quadrants_. For an (irreducible) \(n\)-permutation \(\pi\), we let \(\Pi=\{(i,\pi_{i});i\in[n]\}\) be the points in the graphical representation of \(\pi\). Given an index \(k\in\{2,\cdots,n\}\), we partition \(\Pi\) into four quadrants, called \(k\)-quadrants when the index \(k\) needs to be explicit, as follows.
* The upper-left quadrant is \(\Pi_{<k,<k}:=\Pi\cap\Big{(}[1,k-1]\times[1,k-1]\Big{)}\).
* The lower-left quadrant is \(\Pi_{<k,\geq k}:=\Pi\cap\Big{(}[1,k-1]\times[k,n]\Big{)}\).
* The upper-right quadrant is \(\Pi_{\geq k,<k}:=\Pi\cap\Big{(}[k,n]\times[1,k-1]\Big{)}\).
* The lower-right quadrant is \(\Pi_{\geq k,\geq k}:=\Pi\cap\Big{(}[k,n]\times[k,n]\Big{)}\).
This partition is illustrated on Figure 7 below, with \(\pi=561243\) and \(k=4\).
Note that since \(\pi\) is a permutation, the first \(k-1\) columns of its graphical representation contain \(k-1\) dots in total, and likewise for the first \(k-1\) rows. Therefore we have
\[|\Pi_{<k,<k}|+|\Pi_{<k,\geq k}|=|\Pi_{<k,<k}|+|\Pi_{\geq k,<k}|=k-1, \tag{4}\]
so that \(|\Pi_{<k,\geq k}|=|\Pi_{\geq k,<k}|\). Moreover, this number must be strictly positive, since otherwise \(\pi\) would induce a permutation on \([k-1]\), and therefore be reducible. In words, given an (irreducible) \(n\)-permutation \(\pi\), and an index \(k\in\{2,\cdots,n\}\), the lower-left and upper-right quadrants are non-empty, and have the same number of dots.
**Definition 4.1**.: Let \(n\geq 2\). We say that an \(n\)-permutation \(\pi\) satisfies the _quadrant condition_ if
\[\forall k\in\{2,\cdots,n\},\ |\Pi_{<k,\geq k}|=|\Pi_{\geq k,<k}|=1. \tag{5}\]
That is, a permutation satisfies the quadrant condition when every lower-left quadrant (equivalently upper-right quadrant) has exactly one dot.
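The quadrant condition is immediate to test from the one-line notation of a permutation. Here is a minimal Python sketch (the function name is ours); the second call uses the permutation \(561243\) of Figure 7, which fails the condition at \(k=3\).

```python
def satisfies_quadrant_condition(pi):
    """Definition 4.1: every lower-left k-quadrant of pi (one-line notation) has exactly one dot."""
    n = len(pi)
    for k in range(2, n + 1):
        # dots (i, pi_i) with column i < k and row pi_i >= k form the lower-left k-quadrant
        if sum(1 for i in range(1, k) if pi[i - 1] >= k) != 1:
            return False
    return True

print(satisfies_quadrant_condition([2, 3, 1]))           # True
print(satisfies_quadrant_condition([5, 6, 1, 2, 4, 3]))  # False: its lower-left 3-quadrant has two dots
```

By Equation (4), checking the lower-left quadrants alone suffices, since the upper-right quadrants always have the same number of dots.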
We now state our characterisation of the set \(B(n,1)\).
**Proposition 4.2**.: _Let \(\pi\) be a permutation. Then \(\pi\) has a unique CNAT if, and only if, \(\pi\) satisfies the quadrant condition and has no fixed point._
Proof.: Let \(\pi\) be an \(n\)-permutation. For \(j\in[n]\), we define the \(j\)-th _L-subset_\(L_{j}\) of a permutation \(\pi\) by \(L_{j}:=\Pi_{<j+1,<j+1}\setminus\Pi_{<j,<j}\). Theorem 3.11 in [8] states that a permutation has a unique CNAT if, and only if, \(\pi\) has no fixed point, \(|L_{1}|=0\), \(|L_{n}|=2\), and \(|L_{j}|=1\) for all \(j\in\{2,\cdots,n-1\}\). We show that this is equivalent to \(\pi\) having no fixed point and satisfying the quadrant condition.
Suppose first that \(\pi\) satisfies the above conditions on its L-subsets, and fix some \(k\in\{2,\cdots,n\}\). Then we have \(|\Pi_{<k,<k}|=\sum\limits_{j=1}^{k-1}|L_{j}|=k-2\). By Equation (4) this implies that \(|\Pi_{<k,\geq k}|=|\Pi_{\geq k,<k}|=k-1-|\Pi_{<k,<k}|=1\). Thus \(\pi\) satisfies the quadrant condition as desired.
Conversely, suppose that \(\pi\) satisfies the quadrant condition, and has no fixed point. Owing to the fact that \(\pi\) is irreducible, we have \(|L_{1}|=0\) (we cannot have \(\pi_{1}=1\)). Moreover there is always exactly one dot in the rightmost column \(n\) and exactly one dot in the bottom-most row \(n\), which both belong to the L-subset \(L_{n}\). Again, \(\pi\) is irreducible, so these dots must be distinct (we cannot
Figure 7. The partition of a permutation’s graphical representation into four quadrants.
have \(\pi_{n}=n\)), leading to the equality \(|L_{n}|=2\). It therefore remains to show that
\[\forall j\in\{2,\cdots,n-1\},\,|L_{j}|=1. \tag{6}\]
Seeking contradiction, suppose that this is not the case. Because \(\pi\) is a permutation, there are \(n\) dots in total in \(\Pi\), and therefore \((n-2)\) dots in \(\bigcup\limits_{j=2}^{n-1}L_{j}\). Thus if the L-condition (6) is not satisfied, there must exist \(j\in\{2,\cdots,n-1\}\) such that the \(j\)-th L-subset \(L_{j}\) is empty. In that case, the dot in the column \(j\) will be placed below row \(j\), i.e. in column \(j\) of the lower-left quadrant \(\Pi_{<j+1,\geq j+1}\). Now since \(L_{j}\) is empty, each dot located in the first \((j-1)\) columns must lie either in the upper-left quadrant \(\Pi_{<j,<j}\), or in the first \((j-1)\) columns of the lower-left quadrant \(\Pi_{<j+1,\geq j+1}\). Because \(\pi\) is irreducible, there must be at most \((j-2)\) dots in the upper-left quadrant \(\Pi_{<j,<j}\), which means that there must be at least one dot placed in the first \((j-1)\) columns of the lower-left quadrant \(\Pi_{<j+1,\geq j+1}\), say in some column \(j^{\prime}<j\). Hence, in the lower-left quadrant \(\Pi_{<j+1,\geq j+1}\) there are at least two dots: one in column \(j^{\prime}\) and one in column \(j\). This contradicts the fact that \(\pi\) satisfies the quadrant condition. Therefore, the L-condition (6) is satisfied, and \(\pi\) has a unique CNAT by [8, Theorem 3.11].
For \(n\geq 1\), and an index \(j\in\{2,\cdots,n\}\), we now define the _insertion operation_\(\operatorname{Insert}_{j}:S_{n}\to S_{n+1}\) as follows. Given a \(n\)-permutation \(\pi\), we construct \(\pi^{\prime}:=\operatorname{Insert}_{j}(\pi)\) by inserting a fixed point \(j\) into the \(n\)-permutation \(\pi\), and increasing the labels of all letters \(k\geq j\) in the original permutation \(\pi\) by \(1\). For example, if \(\pi=521634\) and \(j=3\), then \(\pi^{\prime}=62\mathbf{3}1745\) (fixed point in bold). It is straightforward to check that \(\operatorname{Insert}_{j}(\pi)\) is irreducible if, and only if, \(\pi\) is irreducible.
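The insertion operation is equally simple to implement. A short Python sketch (the function name is ours), reproducing the example \(\operatorname{Insert}_{3}(521634)=6231745\):

```python
def insert_fixed_point(pi, j):
    """Insert_j: insert a fixed point j into the permutation pi (one-line notation)."""
    shifted = [x + 1 if x >= j else x for x in pi]   # relabel all letters >= j
    return shifted[:j - 1] + [j] + shifted[j - 1:]   # place the new fixed point at position j

print(insert_fixed_point([5, 2, 1, 6, 3, 4], 3))     # [6, 2, 3, 1, 7, 4, 5]
```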
**Theorem 4.3**.: _Let \(n\geq 2\). Define a map \(\Phi:\{2,\cdots,n\}\times S_{n}\to S_{n+1}\) by \(\Phi(j,\pi):=\operatorname{Insert}_{j}(\pi)\) for all \(j\in\{2,\cdots,n\}\) and \(\pi\in S_{n}\). Then \(\Phi\) is a bijection from \(\{2,\cdots,n\}\times B(n,1)\) to \(B(n+1,2)\)._
**Remark 4.4**.: The insertion index \(j\in\{2,\cdots,n\}\) can be thought of as _marking_ the "space" between \(\pi_{j-1}\) and \(\pi_{j}\) in \(\pi\), with the insertion operation corresponding to inserting a fixed point in the marked space. The product set \(\{2,\cdots,n\}\times B(n,1)\) can then be thought of as permutations with a single CNAT, marked in such a space. Hence the bijection of Theorem 4.3 can be interpreted as a bijection between marked \(n\)-permutations with a single CNAT, and \((n+1)\)-permutations with two CNATs.
From [8, Corollary 3.12], we have \(b(n,1)=2^{n-2}\) for any \(n\geq 2\). This formula is also implicit from the combination of Theorem 2.12 and [1, Theorem 1]. Indeed, in that work the authors show that the number of \(n\)-permutations whose graphs are trees is \(2^{n-2}\), and trees are the only graphs with exactly one minimal recurrent configuration for the ASM (Lemma 2.11). Together with Theorem 4.3, this enumeration formula immediately implies the following, which answers in the affirmative the conjecture from [8] that \(\big{(}b(n,2)\big{)}_{n\geq 2}\) is given by Sequence A001787 in [14].
**Corollary 4.5**.: _For any \(n\geq 2\), we have \(b(n,2)=(n-2)\cdot 2^{(n-3)}\)._
To prove Theorem 4.3, we first state and prove two lemmas on the insertion operation \(\operatorname{Insert}_{j}\) and the set \(B(n,2)\).
**Lemma 4.6**.: _Let \(n\geq 2\), and \(j\in\{2,\cdots,n\}\). For any \(k\geq 1\), if \(\pi\) is an \(n\)-permutation with \(k\) CNATs, then \(\operatorname{Insert}_{j}(\pi)\) is an \((n+1)\)-permutation with at least \(2k\) CNATs. Moreover, if \(\pi\) has a single CNAT (i.e. \(k=1\)), then \(\operatorname{Insert}_{j}(\pi)\) has exactly \(2\) CNATs._
Proof.: Let \(T\in\operatorname{CNAT}\left(\pi\right)\) be a CNAT with permutation \(\pi\). Define \(T^{\prime}\) to be the "partial" CNAT obtained by inserting a new dot \(d\) in cell \((j,j)\) and shifting all dots in cells \((x,y)\) with \(x\geq j\), resp. \(y\geq j\), one column rightwards, resp. one row downwards (see Figure 8b). Since the lower-left and upper-right \(j\)-quadrants of \(\pi\) are non-empty, \(T^{\prime}\) must have at least one leaf dot \(d_{1}\) below and to the left of \(d\), and one leaf dot \(d_{2}\) above and to the right of \(d\). The path from \(d_{1}\), resp. \(d_{2}\), to the root
must cross row \(j\), resp. column \(j\), in some column \(j_{1}<j\), resp. row \(j_{2}<j\). Let \(T_{1}\), resp. \(T_{2}\), be \(T^{\prime}\) with an extra dot in the cell \((j_{1},j)\) (i.e. in row \(j\) and column \(j_{1}\)), resp. in the cell \((j,j_{2})\). Then \(T_{1}\) and \(T_{2}\) are two CNATs with permutation \(\pi^{\prime}:=\operatorname{Insert}_{j}(\pi)\), and the maps \(T\mapsto T_{1}\) and \(T\mapsto T_{2}\) are both injective. This shows that if \(\pi\) has \(k\) CNATs, then \(\pi^{\prime}\) has at least \(2k\) CNATs, as desired.
The above construction is illustrated in Figure 8 below. Here we start with a \(\operatorname{CNAT}\)\(T\) and associated permutation \(\pi\) (Figure 8a). We then show in Figures 8c and 8d possible choices for the two CNATs \(T_{1}\) and \(T_{2}\) with permutation \(\pi^{\prime}=\operatorname{Insert}_{4}(\pi)\). Note that here in each case we actually had two choices for the new edge in the CNATs \(T_{1}\) and \(T_{2}\), since there were two edges of \(T\) crossing row \(4\) to the left of the new leaf dot (in columns \(1\) and \(2\)), and two edges crossing column \(4\) above the new leaf dot (in rows \(2\) and \(3\)).
Now suppose that \(k=1\), i.e. that \(T\) is the unique \(\operatorname{CNAT}\) associated with the permutation \(\pi\). Let \(T^{\prime}\) be a \(\operatorname{CNAT}\) associated with \(\pi^{\prime}\). \(T^{\prime}\) has a leaf dot \(d\) in cell \((j,j)\) by definition. Moreover, in \(T^{\prime}\), there must be either a dot \(d_{1}\) in a cell \((j_{1},j)\) for some \(j_{1}<j\) (to the left of \(d\)), or a dot \(d_{2}\) in a cell \((j,j_{2})\) for some \(j_{2}<j\) (above \(d\)).
First, consider the case where there is a dot \(d_{1}\) to the left of \(d\). Any such dot \(d_{1}\) must have a leaf dot below it since the tree \(T^{\prime}\) is complete. By Proposition 4.2, this leaf dot must be the unique dot in the lower-left quadrant \(\Pi^{\prime}_{<j+1,\geq j+1}\) of \(\pi^{\prime}\) (equivalently, the unique dot in the lower-left quadrant \(\Pi_{<j,\geq j}\) of \(\pi\)). This implies that the dot \(d_{1}\) is unique, i.e. that there is only one dot to the left of \(d\) in its row. It is then straightforward to check that deleting the row and column \(j\) in the \(\operatorname{CNAT}\)
Figure 8. Illustrating how inserting a fixed point into a permutation with \(k\) CNATs gives a permutation with at least \(2k\) CNATs.
yields a \(\mathrm{CNAT}\)\(T\) with permutation \(\pi\). Since \(T\) is unique, this means that there is only one \(\mathrm{CNAT}\)\(T^{\prime}\) with permutation \(\pi^{\prime}\) such that the leaf dot \(d\) in cell \((j,j)\) has a dot to its left. Analogously, there is only one \(\mathrm{CNAT}\)\(T^{\prime}\) with permutation \(\pi^{\prime}\) such that the leaf dot \(d\) in cell \((j,j)\) has a dot above it. Since every \(\mathrm{CNAT}\)\(T^{\prime}\) must be in one of these two cases by definition of a \(\mathrm{CNAT}\), this implies that \(\mathrm{cnat}\,(\pi^{\prime})=2\), as desired.
**Lemma 4.7**.: _For \(n\geq 2\), let \(\pi\in B(n,2)\) be an \(n\)-permutation with exactly \(2\) CNATs. Then \(\pi\) satisfies the quadrant condition from Definition 4.1._
Proof.: Let \(\pi\) be an \(n\)-permutation, and \(k\in\{2,\cdots,n\}\) an index. We show that if the lower-left and upper-right \(k\)-quadrants have at least \(2\) points, then we can construct at least \(3\) CNATs with associated permutation \(\pi\). The construction is similar to the one in the proof of Lemma 4.6, so we allow ourselves to be a little briefer here.
Let \(d_{1},d_{1}^{\prime}\), resp. \(d_{2},d_{2}^{\prime}\), be two dots in the lower-left, resp. upper-right, quadrant. We assume that the column of \(d_{1}\) is to the left of that of \(d_{1}^{\prime}\), and the row of \(d_{2}\) above that of \(d_{2}^{\prime}\). Let \(\pi^{\prime}\) be the permutation obtained from \(\pi\) by removing the rows and columns containing \(d_{1}^{\prime}\) and \(d_{2}^{\prime}\) from its graphical representation. Let \(T\) be a \(\mathrm{CNAT}\) with permutation \(\pi^{\prime}\)3. As in the proof of Lemma 4.6, we let \(T^{\prime}\) be the "partial" \(\mathrm{CNAT}\) obtained from \(T\) by re-inserting the leaf dots \(d_{1}^{\prime}\) and \(d_{2}^{\prime}\).
Footnote 3: See Remark 4.8 for why the assumption that \(T\) exists can be justified.
Let \(c=(i^{\prime},j^{\prime})\) denote the cell in the same column \(i^{\prime}\) as \(d_{1}^{\prime}\) and row \(j^{\prime}\) as \(d_{2}^{\prime}\). By construction this cell is in the upper-left \(k\)-quadrant of \(\pi\). Since \(d_{1}\) is in some column \(i<i^{\prime}\) (to the left of \(d_{1}^{\prime}\)), the path from \(d_{1}\) to the root in the partial \(\mathrm{CNAT}\)\(T^{\prime}\) must cross the row \(j^{\prime}\) in some cell \(c_{\mathrm{left}}\) to the left of \(c\). Similarly, since the row \(j\) of \(d_{2}\) is above the row \(j^{\prime}\) of \(d_{2}^{\prime}\), the path from \(d_{2}\) to the root must cross the column \(i^{\prime}\) in some cell \(c_{\mathrm{up}}\) above \(c\). Then putting internal dots in any two of the three cells \(c,c_{\mathrm{left}},c_{\mathrm{up}}\) will yield a \(\mathrm{CNAT}\) with permutation \(\pi\). Since there are three such possibilities, this implies that \(\pi\) has at least three CNATs, which completes the proof.
These constructions are illustrated on Figure 9 below. Figure 9a shows the permutation \(\pi\) with the \(4\)-quadrants. The lower-left and upper-right quadrants each have two dots. The green dots are \(d_{1}^{\prime}\) (in cell \((2,6)\)) and \(d_{2}^{\prime}\) (in cell \((6,3)\)). Figure 9b shows a partial \(\mathrm{CNAT}\)\(T^{\prime}\) obtained by constructing a \(\mathrm{CNAT}\)\(T\) with permutation \(\pi^{\prime}=4132\) (which is \(\pi\) with \(d_{1}^{\prime}\) and \(d_{2}^{\prime}\) deleted) and re-inserting the dots \(d_{1}^{\prime},d_{2}^{\prime}\), but leaving them unconnected. Then the cell \(c=(2,3)\) is in the same column as \(d_{1}^{\prime}\) and row as \(d_{2}^{\prime}\). The cell \(c_{\mathrm{left}}=(1,3)\) is where the path from \(d_{1}\) in cell \((1,5)\) crosses the row \(3\) of cell \(c\). The cell \(c_{\mathrm{up}}=(2,2)\) is where the path from \(d_{2}\) in cell \((5,2)\) crosses the column \(2\) of cell \(c\). Figures 9c, 9d and 9e then show the three possible CNATs with permutation \(\pi\) obtained by choosing to put dots in two of the cells \(c,c_{\mathrm{left}},c_{\mathrm{up}}\), with the inserted dots and edges drawn in green each time.
**Remark 4.8**.: There is a slight imprecision in the proof of Lemma 4.7, when we assume that the permutation \(\pi^{\prime}\) obtained by deleting \(d_{1}^{\prime}\) and \(d_{2}^{\prime}\) from \(\pi\) has a \(\mathrm{CNAT}\), which is equivalent to \(\pi^{\prime}\) being irreducible. This may in fact not be the case here, for example if we had chosen \(\pi=451263\) above. In this case deleting the dots in cells \((2,5)\) and \((6,3)\) would yield \(\pi^{\prime}=3124\) which is reducible, so does not have a \(\mathrm{CNAT}\). Visually, the only way of connecting the leaf dot in cell \((5,6)\) is to a dot in cell \((5,3)\), i.e. to the dot immediately to the left of the leaf \(d_{1}^{\prime}\) (which is in cell \((6,3)\)) in its column, hence the deletion of \(d_{1}^{\prime}\) is problematic here. The solution is to also delete the leaf dot in cell \((5,6)\), leaving us with a permutation \(\pi^{\prime}=312\) which is now irreducible, so has a \(\mathrm{CNAT}\).
In general, the dot \(d_{1}^{\prime}\) can be safely deleted if deleting the vertex \(j\) of its row index does not disconnect the corresponding permutation graph \(G_{\pi}\). Otherwise, if deleting \(j\) from \(G_{\pi}\) disconnects the graph, we consider the connected components which do _not_ contain the vertex corresponding to the second dot \(d_{1}\) in the lower-left quadrant. When we delete \(d_{1}^{\prime}\), we then also delete the dots whose vertices are in these connected components. We do the same when deleting the dot \(d_{2}^{\prime}\). The rest of the proof is unchanged.
Combined with Proposition 4.2, Lemma 4.7 implies the following.
**Lemma 4.9**.: _Let \(n\geq 2\), and \(\pi\in B(n,2)\) an \(n\)-permutation with exactly \(2\) CNATs. Then \(\pi\) has a unique fixed point \(j\in\{2,\cdots,n-1\}\)._
Proof.: Suppose \(\pi\) is a permutation with \(2\) CNATs. Based on Lemma 4.7, \(\pi\) satisfies the quadrant condition. However, from Proposition 4.2, if \(\pi\) has no fixed point, then it will have a single CNAT. Hence, there must be at least one fixed point in \(\pi\). However, if \(\pi\) has two fixed points \(j\) and \(j^{\prime}\), then we can write \(\pi=\operatorname{Insert}_{j}\bigl{(}\operatorname{Insert}_{j^{\prime}}(\pi^ {\prime})\bigr{)}\) for some permutation \(\pi^{\prime}\). By Lemma 4.6, we then have \(\operatorname{cnat}\left(\pi\right)\geq 4\cdot\operatorname{cnat}\left(\pi^{ \prime}\right)\geq 4\), which contradicts the fact that \(\pi\) has \(2\) CNATs.
Much like Proposition 4.2 for \(B(n,1)\), Lemmas 4.7 and 4.9 in fact give a characterisation of the sets \(B(n,2)\), as follows.
**Proposition 4.10**.: _Let \(\pi\) be a permutation. Then \(\pi\) has exactly \(2\) CNATs if, and only if, \(\pi\) satisfies the quadrant condition and has a unique fixed point \(j\geq 2\)._
Proof.: If \(\pi\) has exactly \(2\) CNATs, then it satisfies the quadrant condition by Lemma 4.7 and has a unique fixed point \(j\geq 2\) by Lemma 4.9. Conversely, if \(\pi\) satisfies the quadrant condition and has a unique fixed point \(j\), then deleting \(j\) from \(\pi\) yields a permutation \(\pi^{\prime}\) with no fixed point
Figure 9. Illustrating how to construct three CNATs with an associated permutation \(\pi=561423\) which does not satisfy the quadrant condition for \(k=4\).
which also satisfies the quadrant condition. Thus \(\pi^{\prime}\) has a unique CNAT by Proposition 4.2. Since \(\pi=\operatorname{Insert}_{j}(\pi^{\prime})\) by construction, we deduce that \(\pi\) has exactly \(2\) CNATs by Lemma 4.6.
Theorem 4.3 now follows straightforwardly from the two Characterisation Propositions 4.2 and 4.10. Indeed, we have shown that permutations with a single CNAT are those which satisfy the quadrant condition and have no fixed point, while permutations with two CNATs are those that satisfy the quadrant condition and have a single fixed point. Since the insertion or deletion of a fixed point does not affect whether a permutation satisfies the quadrant condition or not, it is immediate that the map \(\Phi\) defined by \(\Phi(j,\pi):=\operatorname{Insert}_{j}(\pi)\) is indeed a bijection from \(\{2,\cdots,n\}\times B(n,1)\) to \(B(n+1,2)\).
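For small \(n\), the two characterisation propositions give a quick computational cross-check of the enumeration formulas \(b(n,1)=2^{n-2}\) and \(b(n,2)=(n-2)\cdot 2^{n-3}\). A brute-force Python sketch (exhaustive over \(S_{n}\), so only practical for small \(n\); the helper names are ours):

```python
from itertools import permutations

def quadrant_condition(pi):
    """Definition 4.1 for pi in one-line notation."""
    n = len(pi)
    return all(sum(1 for i in range(1, k) if pi[i - 1] >= k) == 1 for k in range(2, n + 1))

def num_fixed_points(pi):
    return sum(1 for i, x in enumerate(pi, start=1) if x == i)

for n in range(3, 8):
    perms = list(permutations(range(1, n + 1)))
    # Proposition 4.2: quadrant condition and no fixed point characterises B(n, 1).
    b1 = sum(1 for p in perms if quadrant_condition(p) and num_fixed_points(p) == 0)
    # Proposition 4.10: quadrant condition and a unique fixed point characterises B(n, 2)
    # (the fixed point is automatically >= 2 under the quadrant condition).
    b2 = sum(1 for p in perms if quadrant_condition(p) and num_fixed_points(p) == 1)
    print(n, b1 == 2 ** (n - 2), b2 == (n - 2) * 2 ** (n - 3))   # both True for n = 3, ..., 7
```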
### Linking permutations with two and three CNATs
In this part, we establish a bijection between \(n\)-permutations with \(2\) CNATs and \((n+1)\)-permutations with \(3\) CNATs (Theorem 4.15). We begin by giving characterisations of the sets \(B(n,2)\) and \(B(n,3)\) in terms of permutation patterns, as introduced at the end of Section 2.2.
**Proposition 4.11**.: _Let \(\pi\) be a permutation. Then \(\pi\) has exactly \(2\) CNATs if, and only if, \(\pi\) contains a unique occurrence of the \(321\) pattern and avoids \(3412\). Moreover, if \(\pi_{i},\pi_{j},\pi_{k}\) is the unique occurrence of \(321\), with \(i<j<k\), then we have \(\pi_{j}=j\), i.e. \(j\) is the unique fixed point of \(\pi\)._
Proof.: First suppose that \(\pi\) contains a unique occurrence of the \(321\) pattern, and avoids \(3412\). By Proposition 2.2, this implies that the permutation graph \(G_{\pi}\) of \(\pi\) is a decorated \(3\)-cycle. From Lemma 2.8, we then obtain \(\operatorname{minrec}\left(G_{\pi}\right)=3-1=2\), which implies, by Theorem 2.12, that \(\pi\) has exactly \(2\) CNATs.
Now suppose that \(\pi\) has exactly \(2\) CNATs. We know from Lemma 4.9 that \(\pi\) has a (unique) fixed point \(j\in\{2,\cdots,n-1\}\). Consider the lower-left \(j\)-quadrant \(\Pi_{<j,\geq j}\). By Proposition 4.10 it has a unique dot, and this dot cannot be in row \(j\), since the dot in row \(j\) is in column \(j\). Thus this dot must be \((i,\pi_{i})\) with \(i<j\) and \(\pi_{i}>j=\pi_{j}\). Similarly, the upper-right \(j\)-quadrant \(\Pi_{\geq j,<j}\) has a unique dot which is not in column \(j\), so is therefore \((k,\pi_{k})\) with \(k>j\) and \(\pi_{k}<j=\pi_{j}\). In other words, \(\pi_{i},\pi_{j}=j,\pi_{k}\) forms a \(321\) pattern in the permutation \(\pi\), or equivalently induces a \(3\)-cycle in the permutation graph \(G_{\pi}\). We claim that there can be no other occurrences of \(321\), or any occurrence of \(3412\), in the permutation \(\pi\).
Indeed, if \(\pi\) contains an occurrence of \(3412\), then by Lemma 2.13, we have \(\operatorname{cnat}\left(\pi\right)\geq\operatorname{cnat}\left(3412\right)=3\), which is a contradiction (applying Lemma 2.8 to the \(4\)-cycle \(G_{3412}\)). Now suppose \(\pi\) contains a second occurrence \(\pi_{i^{\prime}},\pi_{j^{\prime}},\pi_{k^{\prime}}\) of the pattern \(321\). Similarly to the proof of Lemma 2.8, we first construct a subgraph \(G^{0}\) of \(G_{\pi}\) whose edges are the union of the edges of a given spanning subtree \(G^{\prime}\) of \(G_{\pi}\) and of the \(3\)-cycle on \(\pi_{i^{\prime}},\pi_{j^{\prime}},\pi_{k^{\prime}}\). The graph \(G^{0}\) is a decorated \(3\)-cycle, so that \(\operatorname{minrec}\left(G^{0}\right)=2\) by Lemma 2.8. To obtain \(G_{\pi}\) from \(G^{0}\) we need to add at least one edge of the \(3\)-cycle on \(\pi_{i},\pi_{j},\pi_{k}\), since by construction this cycle is not contained in \(G^{0}\). Lemma 2.9 therefore yields \(2=\operatorname{minrec}\left(G^{0}\right)<\operatorname{minrec}\left(G_{\pi}\right)=\operatorname{cnat}\left(\pi\right)\), which is a contradiction (applying Theorem 2.12).
We have therefore shown that \(\pi\) has exactly \(2\) CNATs if, and only if, \(\pi\) contains a unique occurrence of the \(321\) pattern and avoids \(3412\). Moreover, in the above, we also showed that when \(\pi\) has \(2\) CNATs, the fixed point \(j\) of \(\pi\) is the middle point in the (unique) occurrence of \(321\). This therefore completes the proof of the proposition.
Figure 10 illustrates the \(321\) pattern structure \(\pi_{i},\pi_{j},\pi_{k}\) in \(\pi\in B(n,2)\).
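Proposition 4.11 can likewise be checked mechanically for small \(n\): counting occurrences of the patterns \(321\) and \(3412\) over the irreducible \(n\)-permutations recovers \(b(n,2)\). A Python sketch (the helper names are ours; as elsewhere in this section, only irreducible permutations are considered):

```python
from itertools import combinations, permutations

def count_pattern(pi, patt):
    """Number of occurrences of the classical pattern `patt` in `pi` (both in one-line notation)."""
    k = len(patt)
    return sum(
        1 for idx in combinations(range(len(pi)), k)
        if all((pi[idx[a]] < pi[idx[b]]) == (patt[a] < patt[b])
               for a in range(k) for b in range(a + 1, k))
    )

def irreducible(pi):
    """No proper prefix pi_1 ... pi_k is a permutation of {1, ..., k}."""
    return all(max(pi[:k]) > k for k in range(1, len(pi)))

def has_two_cnats(pi):   # characterisation of B(n, 2) from Proposition 4.11
    return (irreducible(pi)
            and count_pattern(pi, (3, 2, 1)) == 1
            and count_pattern(pi, (3, 4, 1, 2)) == 0)

n = 5
print(sum(1 for p in permutations(range(1, n + 1)) if has_two_cnats(p)))  # 12 = (n-2) * 2^(n-3)
```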
**Proposition 4.12**.: _Let \(\pi\) be a permutation. Then \(\pi\) has exactly \(3\) CNATs if, and only if, \(\pi\) contains a unique occurrence of the \(3412\) pattern and avoids \(321\). Moreover, if \(\pi_{i},\pi_{j},\pi_{k},\pi_{\ell}\) is the occurrence of \(3412\), with \(i<j<k<\ell\), then we have \(\pi_{i}=\pi_{\ell}+1\) and \(k=j+1\)._
Proof.: First suppose that \(\pi\) contains a unique occurrence of the 3412 pattern, and avoids 321. By Proposition 2.2, this implies that the permutation graph \(G_{\pi}\) of \(\pi\) is a decorated 4-cycle. From Lemma 2.8, we then obtain \(\operatorname{minrec}\left(G_{\pi}\right)=4-1=3\), which implies, by Theorem 2.12, that \(\pi\) has exactly 3 CNATs.
We now show the converse. Let \(\pi\) be a permutation with exactly 3 CNATs. We claim that \(G:=G_{\pi}\) must be a decorated 4-cycle. Assume for now that this claim holds. Then Proposition 2.2 implies that \(\pi\) must indeed contain a unique occurrence of 3412 (corresponding to the induced 4-cycle of \(G_{\pi}\)), and avoid 321 (since \(G\) does not induce a 3-cycle). It therefore remains to prove that \(G\) is indeed a decorated 4-cycle in this case.
First, note that if \(G\) is a tree, then \(\operatorname{cnat}\left(\pi\right)=\operatorname{minrec}\left(G\right)=1\neq 3\), so that \(G\) must contain at least one cycle. If \(G\) contains a unique cycle, which has length 3, then by Lemma 2.8 we would have \(\operatorname{cnat}\left(\pi\right)=\operatorname{minrec}\left(G\right)=2\neq 3\). Therefore \(G\) either contains at least two cycles, or a cycle of length at least 4. If \(G\) contains a cycle of length 5 or more, then \(G\) can be constructed from the 5-cycle through a series of edge additions or edge duplications. This would imply that \(\operatorname{cnat}\left(\pi\right)=\operatorname{minrec}\left(G\right)\geq \operatorname{minrec}\left(C_{5}\right)=4\), which is a contradiction (applying Lemmas 2.9 and 2.10 to get the inequality, and Lemma 2.8 for the right-hand equality). Similarly, if \(G\) contains a 4-cycle, to get \(\operatorname{minrec}\left(G\right)=3\) this must be the unique cycle contained in \(G\), i.e. \(G\) is a decorated 4-cycle. Otherwise edge additions are necessary to construct \(G\) from \(C_{4}\), and these strictly increase the number of minimal recurrent configurations.
It therefore remains to show that \(G\) cannot contain two 3-cycles. We again seek contradiction. If \(G\) has two 3-cycles which share an edge, then the "outer cycle" is a 4-cycle (see Figure 15a), and we are in the above case. We therefore only need to consider the case where \(G\) contains at least two edge-disjoint 3-cycles. But such a graph can be constructed through a series of edge additions starting from the "butterfly" graph \(B\) consisting of two 3-cycles joined at a vertex (see Figure 11), and it is straightforward to check that we have \(\operatorname{minrec}\left(B\right)=4\). As above, applying Lemma 2.9 then gives \(\operatorname{cnat}\left(\pi\right)=\operatorname{minrec}\left(G\right)\geq \operatorname{minrec}\left(B\right)=4\), which gives the desired contradiction. This completes the proof of our claim that if \(\pi\) has exactly 3 CNATs, the associated permutation graph \(G_{\pi}\) is a decorated 4-cycle, as desired.
Figure 11. The “butterfly” graph consisting of two 3-cycles joined at a vertex.
Figure 10. Illustrating the unique 321 pattern in \(\pi\in B(n,2)\), with the fixed point of \(\pi\) being the middle dot in the pattern.
We now prove the second statement of the proposition. Let \(\pi_{i},\pi_{j},\pi_{k},\pi_{\ell}\) be the occurrence of the \(3412\) pattern in \(\pi\in B(n,3)\), with \(i<j<k<\ell\). We wish to show that there are no dots between columns \(j\) and \(k\) (i.e. \(k=j+1\)), and no dots between rows \(\pi_{\ell}\) and \(\pi_{i}\) (i.e. \(\pi_{i}=\pi_{\ell}+1\)). Let us first show that \(k=j+1\), i.e. that there are no dots between columns \(j\) and \(k\). Otherwise, let \(m\) be such that \(j<m<k\). We show that the dot \((m,\pi_{m})\) is either part of an occurrence of the \(321\) pattern, or of the \(3412\) pattern, which contradicts the above. There are three possible cases to consider.
1. If \(\pi_{m}<\pi_{k}\), then \(\pi_{i},\pi_{j},\pi_{m},\pi_{\ell}\) is an occurrence of the \(3412\) pattern.
2. If \(\pi_{m}>\pi_{j}\), then \(\pi_{i},\pi_{m},\pi_{k},\pi_{\ell}\) is an occurrence of the \(3412\) pattern.
3. If \(\pi_{k}<\pi_{m}<\pi_{j}\), then \(\pi_{j},\pi_{m},\pi_{k}\) is an occurrence of the \(321\) pattern.
We show similarly that \(\pi_{i}=\pi_{\ell}+1\). Otherwise, let \(m\) be such that \(\pi_{\ell}<\pi_{m}<\pi_{i}\). As above, there are three cases to consider.
1. If \(m<i\), then \(\pi_{m},\pi_{j},\pi_{k},\pi_{\ell}\) is an occurrence of the \(3412\) pattern.
2. If \(m>\ell\), then \(\pi_{i},\pi_{j},\pi_{k},\pi_{m}\) is an occurrence of the \(3412\) pattern.
3. If \(i<m<\ell\), then \(\pi_{i},\pi_{m},\pi_{\ell}\) is an occurrence of the \(321\) pattern.
As above, these all contradict the fact that \(\pi_{i},\pi_{j},\pi_{k},\pi_{\ell}\) is the unique occurrence of \(3412\) in \(\pi\), and that \(\pi\) avoids \(321\). This concludes the proof.
Figure 12 illustrates this structure of the unique \(3412\) pattern \(\pi_{i},\pi_{j},\pi_{k},\pi_{\ell}\) in \(\pi\in B(n,3)\), which has \(\pi_{i}=\pi_{\ell}+1\) and \(k=j+1\).
**Remark 4.13**.: We can directly construct the three CNATs corresponding to the permutation \(\pi\) based on the \(3412\) pattern of Figure 12. The construction is similar to that of Figure 9. Namely, we define three cells \(c:=(j,\pi_{\ell}),c_{\rm left}:=(i,\pi_{\ell}),c_{\rm up}:=(j,\pi_{k})\). Deleting the dots \((j,\pi_{j})\) and \((\ell,\pi_{\ell})\) from the permutation \(\pi\) yields a permutation \(\pi^{\prime}\) which has a single \(\mathrm{CNAT}\)\(T^{\prime}\) (the permutation graph \(G_{\pi^{\prime}}\) is acyclic, since \(\pi^{\prime}\) avoids \(321\) and \(3412\))4. The three CNATs associated with \(\pi\) are then constructed from \(T^{\prime}\) by adding dots in any two of these three cells, as on Figure 9.
Footnote 4: As in Remark 4.8 we are slightly abusive here, since we need to take care not to disconnect the permutation graph.
As in Section 4.1, it is possible to give a characterisation of \(B(n,3)\) in terms of quadrants. We state the result without proof here, since the proof involves a case-by-case study of the numerous possibilities where a permutation does _not_ satisfy the stated condition on its quadrants. Indeed, we would have to consider a permutation with two separate values \(k,k^{\prime}\) having two dots in their
Figure 12. Illustrating the unique occurrence of \(3412\) in \(\pi\in B(n,3)\): the rows of the “2” and “3”, resp. columns of the “4” and “1”, are contiguous.
lower-left and upper-right quadrants, and there are many possibilities for where these dots are located. We consider that such a proof does not add any real value to this paper, so prefer leaving it as an exercise to the dedicated reader.
**Proposition 4.14**.: _Let \(\pi\) be an \(n\)-permutation for some \(n\geq 4\). Then \(\pi\) has exactly \(3\) CNATs if, and only if, the following three conditions are all satisfied._
1. _There exists a unique value of_ \(k\in\{2,\cdots n\}\) _such that the lower-left and upper-right_ \(k\)_-quadrants of_ \(\Pi\) _have exactly two dots._
2. _All other lower-left and upper-right_ \(k^{\prime}\)_-quadrants, for_ \(k^{\prime}\neq k\)_, have a single dot._
3. _The permutation_ \(\pi\) _has no fixed point._
With Propositions 4.11 and 4.12, we are now equipped to define the bijection between \(B(n,2)\) and \(B(n+1,3)\). Let \(\pi\in B(n,2)\) be an \(n\)-permutation with \(2\) CNATs, and suppose that \(\pi_{i},\pi_{j}=j,\pi_{k}\) is the unique occurrence of the pattern \(321\) in \(\pi\), with \(i<j<k\). We first define an \((n+1)\)-permutation \(\tilde{\pi}\) by \(\tilde{\pi}:=\mathrm{Insert}_{j+1}(\pi)\). Since \(\pi\) has a unique occurrence of the pattern \(321\), given by \(\pi_{i},\pi_{j}=j,\pi_{k}\), it follows by construction that \(\tilde{\pi}\) has a unique occurrence of the pattern \(4231\), given by \(\tilde{\pi}_{i}=\pi_{i}+1,\tilde{\pi}_{j}=j,\tilde{\pi}_{j+1}=j+1,\tilde{\pi }_{k+1}=\pi_{k}\). Moreover, \(\tilde{\pi}\) avoids \(3412\) (since \(\pi\) does), and has exactly two occurrences of \(321\), given by \(\tilde{\pi}_{i},\tilde{\pi}_{j},\tilde{\pi}_{k+1}\) and \(\tilde{\pi}_{i},\tilde{\pi}_{j+1},\tilde{\pi}_{k+1}\). We then define a permutation \(\pi^{\prime}\), obtained from \(\tilde{\pi}\) by changing the \(4231\) pattern to a \(3412\) pattern (on the same elements), and leaving all other elements unchanged. More formally, we have:
\[\begin{cases}\pi_{i}^{\prime}=\tilde{\pi}_{j+1}=j+1\\ \pi_{j}^{\prime}=\tilde{\pi}_{i}=\pi_{i}+1\\ \pi_{j+1}^{\prime}=\tilde{\pi}_{k+1}=\pi_{k}\\ \pi_{k+1}^{\prime}=\tilde{\pi}_{j}=j\\ \pi_{\ell}^{\prime}=\tilde{\pi}_{\ell}\qquad\text{for all other values of $\ell$}\end{cases} \tag{7}\]
**Theorem 4.15**.: _Let \(n\geq 3\). For a permutation \(\pi\in B(n,2)\), define \(\pi^{\prime}\) as in Equation (7), where \(\pi_{i},\pi_{j}=j,\pi_{k}\) is the unique occurrence of the pattern \(321\) in \(\pi\), and \(\tilde{\pi}:=\mathrm{Insert}_{j+1}(\pi)\). Then the map \(\pi\mapsto\pi^{\prime}\) is a bijection from \(B(n,2)\) to \(B(n+1,3)\)._
The construction \(\pi\mapsto\pi^{\prime}\) is illustrated on Figure 13 below. On the left (Figure 13a) we have the occurrence \(\pi_{i},\pi_{j}=j,\pi_{k}\) of \(321\) in the permutation \(\pi\in B(n,2)\). On the right (Figure 13b) we have the occurrence \(\pi_{i}^{\prime}=\pi_{j}+1=j+1,\pi_{j}^{\prime}=\pi_{i}+1,\pi_{j+1}^{\prime}= \pi_{k},\pi_{k+1}^{\prime}=\pi_{j}=j\) of \(3412\) in \(\pi^{\prime}\). The other dots of \(\pi\) are unchanged in \(\pi^{\prime}\), after the necessary shifting downwards and/or to the right to create a permutation.
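The construction of Equation (7) is straightforward to implement. Below is a Python sketch (the function names are ours); it assumes, as in the theorem, that the input lies in \(B(n,2)\), so that the occurrence of \(321\) it finds is unique.

```python
from itertools import combinations

def unique_321(pi):
    """The unique occurrence (i, j, k) of the pattern 321 in pi, as 1-based positions."""
    occ = [(a + 1, b + 1, c + 1)
           for a, b, c in combinations(range(len(pi)), 3)
           if pi[a] > pi[b] > pi[c]]
    assert len(occ) == 1              # guaranteed by Proposition 4.11 when pi is in B(n, 2)
    return occ[0]

def insert_fixed_point(pi, j):        # the operation Insert_j from Section 4.1
    shifted = [x + 1 if x >= j else x for x in pi]
    return shifted[:j - 1] + [j] + shifted[j - 1:]

def two_to_three_cnats(pi):
    """Map pi in B(n, 2) to pi' in B(n+1, 3), following Equation (7)."""
    i, j, k = unique_321(pi)          # here pi_j = j is the unique fixed point of pi
    out = insert_fixed_point(pi, j + 1)   # the (n+1)-permutation tilde(pi)
    out[i - 1] = j + 1                # pi'_i     = j + 1
    out[j - 1] = pi[i - 1] + 1        # pi'_j     = pi_i + 1
    out[j]     = pi[k - 1]            # pi'_{j+1} = pi_k
    out[k]     = j                    # pi'_{k+1} = j
    return out

print(two_to_three_cnats([3, 2, 4, 1]))   # [3, 4, 1, 5, 2], an element of B(5, 3)
```

For instance, \(3241\in B(4,2)\) (it is \(\operatorname{Insert}_{2}(231)\)) is mapped to \(34152\), which indeed contains a unique occurrence of \(3412\) and avoids \(321\).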
Theorem 4.15 answers in the affirmative Conjecture 6.4 in [8]. Combining with Corollary 4.5 gives the following.
**Corollary 4.16**.: _For any \(n\geq 3\), we have \(b(n,3)=(n-3)\cdot 2^{(n-4)}\)._
Proof of Theorem 4.15.: Let \(\pi\in B(n,2)\). By construction \(\pi^{\prime}\) is an \((n+1)\)-permutation. We first show that \(\pi^{\prime}\in B(n+1,3)\), using the Characterisation Proposition 4.12. We do this through a _block decomposition_ of the permutations \(\pi\) and \(\pi^{\prime}\), refining Figure 13. This block decomposition is illustrated on Figure 14 below. The unlabelled blocks are empty, since otherwise we would get forbidden patterns. For example, if a block of \(\pi\) to the left of column \(j\) and below row \(j\) contained a dot, this would form an occurrence of \(321\) with \(\pi_{j}\) and \(\pi_{k}\).
Now the dots in the upper-left block \(A\) cannot form a permutation (since \(\pi\) is indecomposable). Therefore if \(A\) is non-empty, then \(A_{1}\) or \(A_{2}\) must also be non-empty. However, if there is a dot in \(A_{1}\) and a dot in \(A_{2}\), this would form an occurrence of \(3412\) together with \(\pi_{i}\) and \(\pi_{k}\). So at most one of these could be non-empty. Similarly, if the block \(B\) is non-empty, then one of \(B_{1},B_{2}\) must be non-empty, and moreover at most one of these can be non-empty (even if \(B\) is empty). One can
check that these block conditions prevent any occurrences of \(321\) or \(3412\) in \(\pi^{\prime}\), other than the \(3412\) already present. By Proposition 4.12 we therefore have \(\pi^{\prime}\in B(n+1,3)\) as desired.
The fact that the map \(\pi\mapsto\pi^{\prime}\) is a bijection is straightforward. Its inverse simply replaces the occurrence of \(3412\) in \(\pi^{\prime}\in B(n+1,3)\), which by Proposition 4.12 is as in Figure 13b, with the occurrence of \(321\) in \(\pi\) as in Figure 13a, with suitable relabelling of the other elements. As above, by considering the block decompositions and using Proposition 4.11, we can show that \(\pi\) is in \(B(n,2)\), as desired. This concludes the proof.
### Permutations with \(5\) CNATs
In [8, Conjecture 6.5], it was conjectured that there are no permutations with exactly \(5\) CNATs. We answer this conjecture in the affirmative.
**Theorem 4.17**.: _For any \(n\geq 1\), we have \(b(n,5)=0\). That is, there are no permutations (of any length) with exactly \(5\) CNATs._
For this, we use the enumeration from Theorem 2.12. A permutation has exactly \(5\) CNATs if, and only if, its permutation graph has exactly \(5\) minimal recurrent configurations for the ASM. It turns out that essentially only one graph satisfies this enumeration (and that it is not a permutation
Figure 14. The block decompositions of \(\pi\in B(n,2)\) (left) and \(\pi^{\prime}\in B(n+1,3)\) (right). Unlabelled blocks are empty.
graph). We say that a graph \(G\) is _two-connected_ if for any vertex \(v\in G\), the graph \(G\setminus\{v\}\), obtained by removing \(v\) and any incident edges from \(G\), is connected.
**Lemma 4.18**.: _If \(G\) is two-connected with at least \(3\) vertices, then every vertex \(v\) of \(G\) belongs to a cycle._
Proof.: A two-connected graph with at least \(3\) vertices cannot contain a vertex of degree one, since removing the neighbour of such a vertex would disconnect the graph. Therefore any vertex \(v\) in a two-connected graph \(G\) must have two distinct neighbours \(w_{1}\) and \(w_{2}\). Now by definition the graph \(G\setminus\{v\}\) is connected, so must contain a path from \(w_{1}\) to \(w_{2}\). Connecting this path to the edges \((v,w_{1})\) and \((w_{2},v)\) in \(G\) gives the desired cycle.
**Lemma 4.19**.: _Let \(G\) be a two-connected graph. Then \(\operatorname{minrec}\left(G\right)=5\) if, and only if, \(G\) is the \(6\)-cycle \(C_{6}\)._
Proof.: Let \(G\) be a two-connected graph, and \(k\) be the length of its longest cycle. That is, \(k\) is the maximal value such that \(G\) contains a \(k\)-cycle \(v_{0},v_{1},\cdots,v_{k-1},v_{k}=v_{0}\) where the vertices \(v_{0},\cdots,v_{k-1}\) are distinct. With a slight abuse of notation, we denote this \(k\)-cycle \(C_{k}\). A _chord_ of \(G\) is then a path between two distinct vertices \(v_{i},v_{j}\) of \(C_{k}\) which only intersects \(C_{k}\) at the two end-points \(v_{i}\) and \(v_{j}\).
We claim that if \(G\) is not the \(k\)-cycle \(C_{k}\), then it must contain a chord. Indeed, if \(G\) is not \(C_{k}\), then there must be an edge \((v,w)\) for some vertex \(v\in C_{k}\) such that \((v,w)\) is not an edge of \(C_{k}\) (i.e. if \(v=v_{i}\), then \(w\notin\{v_{i-1},v_{i+1}\}\)). If \(w\in C_{k}\), then the edge \((v,w)\) is a chord by definition, so suppose that \(w\notin C_{k}\). Since \(G\) is two-connected, removing \(v\) does not disconnect the graph, so there must be a path from \(w\) to one of the neighbours \(v^{\prime}\) of \(v\) on the cycle \(C_{k}\) which does not use the edge \((v,v^{\prime})\). Taking this path from \(w\) to the first point at which it intersects the cycle \(C_{k}\) gives the desired chord.
We now show the lemma. From Lemma 2.8, if \(G\) is the \(k\)-cycle, then we have \(\operatorname{minrec}\left(G\right)=k-1\). Therefore if \(G\) is a \(k\)-cycle, then \(\operatorname{minrec}\left(G\right)=5\) if and only if \(k=6\). It remains to show that if \(G\) contains a chord, we cannot have \(\operatorname{minrec}\left(G\right)=5\).
We proceed case-by-case based on the value of \(k\). In each case, the number \(\operatorname{minrec}\left(G\right)\) can be checked e.g. through the results of Section 2.5 or by using the sandpile module in SageMath [24]. Since \(G\) contains a chord, we must have \(k\geq 4\) (otherwise \(C_{k}\) would not be the longest cycle). The "base cases" are as follows, with the graphs illustrated on Figure 15 below.
1. If \(G\) is \(C_{4}\) with one chord of length \(1\) (Figure 15a), then \(\operatorname{minrec}\left(G\right)=4\).
2. If \(G\) is \(C_{4}\) with one chord of length \(2\) (Figure 15b), then \(\operatorname{minrec}\left(G\right)=7\).
3. If \(G\) is \(C_{4}\) with two chords of length \(1\) (Figure 15c, this is the complete graph \(K_{4}\)), then \(\operatorname{minrec}\left(G\right)=6\).
4. If \(G\) is \(C_{5}\) with one chord of length \(1\) (Figure 15d), then \(\operatorname{minrec}\left(G\right)=6\).
Aside from Case (1), all other possible graphs (i.e. two-connected graphs that are not cycles) can be obtained from at least one of Cases (2), (3) or (4) through a succession of edge addition or duplication operations, in the sense of Lemmas 2.9 and 2.10. Since these operations only increase the number of minimal recurrent configurations, which was at least \(6\) in all three cases, this shows that such graphs cannot have exactly \(5\) minimal recurrent configurations, and the lemma is proved.
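The four base-case values can also be reproduced by brute force, without SageMath, by testing recurrence with Dhar's burning criterion and taking minimal recurrent configurations to be the recurrent ones with the fewest grains. A self-contained Python sketch (our own encoding of the graphs of Figure 15, with vertex \(0\) as sink):

```python
from itertools import product

def minrec(edges, sink=0):
    """Number of minimal recurrent ASM configurations of the graph `edges` with the given sink."""
    vertices = sorted({v for e in edges for v in e})
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    others = [v for v in vertices if v != sink]

    def recurrent(conf):                   # Dhar's burning criterion: all vertices must burn
        burnt, progress = {sink}, True
        while progress:
            progress = False
            for v in others:
                if v not in burnt and sum(w in burnt for w in adj[v]) >= len(adj[v]) - conf[v]:
                    burnt.add(v)
                    progress = True
        return len(burnt) == len(vertices)

    configs = (dict(zip(others, h)) for h in product(*(range(len(adj[v])) for v in others)))
    recs = [c for c in configs if recurrent(c)]
    least = min(sum(c.values()) for c in recs)
    return sum(1 for c in recs if sum(c.values()) == least)

graphs = {
    "C4 + chord of length 1": [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)],
    "C4 + chord of length 2": [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (4, 2)],
    "K4":                     [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)],
    "C5 + chord of length 1": [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)],
}
for name, es in graphs.items():
    print(name, minrec(es))    # 4, 7, 6, 6 respectively
```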
Proof of Theorem 4.17.: Now suppose that \(G\) is such that \(\operatorname{minrec}\left(G\right)=5\). Let \(G^{\prime}:=\operatorname{Prune}\left(G\right)\) be the pruned graph of \(G\) as in Section 2.5. We claim that \(G^{\prime}\) is two-connected in this case. Seeking contradiction, suppose that \(G^{\prime}\) has a cut-vertex \(v\) (i.e. removing \(v\) disconnects the graph), and let \(G^{\prime}_{1}\) and \(G^{\prime}_{2}\) be the two components of \(G^{\prime}\) joined at \(v\). By [12, Theorem 3.2], we have \(\operatorname{minrec}\left(G^{\prime}\right)=\operatorname{minrec}\left(G^{ \prime}_{1}\right)\cdot\operatorname{minrec}\left(G^{\prime}_{2}\right)\). But since \(\operatorname{minrec}\left(G^{\prime}\right)=\operatorname{minrec}\left(G\right)=5\) is prime, this implies that one of \(\operatorname{minrec}\left(G^{\prime}_{1}\right)\) and \(\operatorname{minrec}\left(G^{\prime}_{2}\right)\) must be equal to \(1\). Lemma 2.11 then implies that \(G^{\prime}_{1}\) or \(G^{\prime}_{2}\) must be a tree, which contradicts the fact that \(G^{\prime}\) is a pruned graph.
Applying Lemma 4.19 then gives that \(G^{\prime}\) must be isomorphic to the \(6\)-cycle. In particular, the original graph \(G\) contains an induced \(6\)-cycle (the pruned graph of \(G\) is always an induced subgraph of \(G\)). But by Point (3) of Proposition 2.2 this is impossible if \(G\) is a permutation graph. We have therefore shown that a graph \(G\) satisfying \(\operatorname{minrec}\left(G\right)=5\) cannot be a permutation graph, which by Theorem 2.12 implies that there are no permutations with \(5\) CNATs, as desired.
**Problem 4.20**.: Our proof of Theorem 4.17 relies heavily on the structure of permutation graphs, and the fact that \(\operatorname{cnat}\left(\pi\right)=\operatorname{minrec}\left(G_{\pi}\right)\) (Theorem 2.12), which is non-trivial. It would be interesting to find a more direct combinatorial proof of this result.
### Permutations with maximal numbers of CNATs
Our final result in this section looks at the maximum value of \(k\) such that \(b(n,k)>0\). We answer in the affirmative Conjecture 6.3 in [8].
**Theorem 4.21**.: _For any \(n\geq 1\), we have \(\max\{\operatorname{cnat}\left(\pi\right);\pi\in S_{n}\}=(n-1)!\), and this maximum is achieved only for the decreasing permutation \(\operatorname{dec}_{n}=n(n-1)\cdots 1\). In other words, we have \(B(n,(n-1)!)=\{\operatorname{dec}_{n}\}\) and \(B(n,k)=\emptyset\) for all \(k>(n-1)!\)._
Proof.: That \(\operatorname{cnat}\left(\operatorname{dec}_{n}\right)=(n-1)!\) is a consequence of Theorem 3.3. We therefore need to show that if \(\pi\neq\operatorname{dec}_{n}\), we have \(\operatorname{cnat}\left(\pi\right)<\operatorname{cnat}\left(\operatorname{ dec}_{n}\right)\). We may assume \(n\geq 3\) here since for \(n\leq 2\) the only irreducible permutation is the decreasing one.
For this, note that the permutation graph \(G_{\operatorname{dec}_{n}}\) associated with the decreasing permutation is the complete graph \(K_{n}\) on \(n\) vertices, and that \(\operatorname{dec}_{n}\) is the only permutation whose graph is the complete graph. We claim that if \(G\) is a graph on \(n\) vertices which is not complete, then \(\operatorname{minrec}\left(G\right)<\operatorname{minrec}\left(K_{n}\right)\). Combining this with the above observation and Theorem 2.12 then gives the desired result. Suppose therefore that \(G\) is a graph with vertex set \([n]\) that is not complete. Then by definition there is a pair of vertices \(i,j\in[n]\) such that the edge \((i,j)\) is not in \(G\). We then have \(\operatorname{minrec}\left(G\right)<\operatorname{minrec}\left(G\cup\{(i,j)\}\right)\) by Lemma 2.9. In other words, if \(G\) is not complete, we can add an edge to it and strictly increase its number of minimal recurrent configurations. We can then repeat this process until no more edges can be added, i.e. we reach the complete graph, which shows that \(\operatorname{minrec}\left(G\right)<\operatorname{minrec}\left(K_{n}\right)\) as desired.
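On the graph side, the brute-force `minrec` sketch given after the proof of Lemma 4.19 (our hypothetical helper, not part of [24]) can be used to confirm the equality \(\operatorname{minrec}\left(K_{n}\right)=(n-1)!\) underlying this proof for small values of \(n\):

```python
# Reuses the minrec() helper from the sketch after Lemma 4.19 (hypothetical name).
from math import factorial

for n in range(3, 6):
    Kn = [(i, j) for i in range(n) for j in range(i + 1, n)]
    assert minrec(Kn, sink=0) == factorial(n - 1)
print("minrec(K_n) = (n-1)! checked for n = 3, 4, 5")
```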
**Remark 4.22**.: Theorem 4.21 implies that \(b(n,(n-1)!)=1\). We have also shown (Corollary 4.5) that \(b(n,2)=b(n,2!)=(n-2)\cdot 2^{n-3}\), and from [8, Corollary 3.12] we know that \(b(n,1)=b(n,1!)=2^{n-2}\). Finally, it was also conjectured in [8] that \(b(n,6)=b(n,3!)=\frac{(n-2)(n-3)}{2}\cdot 2^{n-4}\). This suggests that in general, we might have \(b(n,k!)=\binom{n-2}{k-1}\cdot 2^{n-1-k}\) for all \(n\geq 2\) and \(1\leq k<n\) (Sequence A038207 in [14]). However, this formula in fact breaks down almost immediately after these initial values, since our simulations showed that \(b(6,24)=31\), which is a long way off the value of \(\binom{4}{3}\cdot 2^{1}=8\) given by the above sequence. In fact, the sequence \(\big{(}b(n,24)\big{)}_{n\geq 5}\) does not appear at all in the OEIS, even given just its first three terms \(1,31,176\). Table 1 below gives the first few rows of the triangular sequence \(\big{(}b(n,k!)\big{)}_{n\geq 2,\,1\leq k<n}\).

Figure 15. The four “base cases” for cycles with chords.
## 5. Conclusion and future perspectives
In this paper, we deepened the combinatorial studies of CNATs initiated in the literature (see e.g. [2, 8]) by connecting CNATs with a given permutation to minimal recurrent configurations of the ASM on the corresponding permutation graph (Theorem 2.12). In Theorem 3.3, we provided a new bijection between permutations and so-called upper-diagonal CNATs (CNATs whose associated permutation is the decreasing permutation \(n(n-1)\cdots 1\)), which has the added benefit of preserving certain statistics of these objects. This bijection is defined recursively, based on two operations on labelled CNATs called top-row decomposition and top-row deletion.
We then investigated the enumeration of permutations with a given number of CNATs, answering a number of conjectures from [8]. We gave separate characterisations of permutations with exactly \(1\), \(2\) or \(3\) CNATs in terms of so-called _quadrants_ of the permutation's graphical representation, and in terms of permutation patterns. This allowed us to establish bijections between marked permutations with a single CNAT and permutations with \(2\) CNATs (Theorem 4.3), and between permutations with \(2\) CNATs and permutations with \(3\) CNATs (Theorem 4.15). From this we obtained enumerative formulas for permutations with \(2\) or \(3\) CNATs (Corollaries 4.5 and 4.16). Finally, we showed that there are no permutations of any length with exactly \(5\) CNATs (Theorem 4.17), and that the maximal number of CNATs associated with a given permutation of length \(n\) is \((n-1)!\), achieved uniquely for the decreasing permutation \(n(n-1)\cdots 1\).
We end this paper with some possible directions for future research.
* Find a more "direct" combinatorial proof of the fact that there are no permutations with exactly \(5\) CNATs, as explained in Problem 4.20.
* Investigate the enumerative sequences \(\big{(}b(n,k)\big{)}_{n\geq 1}\) for more values of \(k\). In this paper, we have looked at the cases \(k=1,2,3,5,(n-1)!\). The paper [8] suggests that the entries for \(k=4,6,7\) do indeed appear in the OEIS [14]. As explained in Remark 4.22, this is not the case for general factorial values of \(k\), so it may be that these specific values essentially rely on there being relatively few permutation graphs with the specified number of minimal recurrent configurations. For instance, it seems plausible that the only graphs with \(4\) minimal recurrent configurations are the two graphs made up of two triangles in
Figures 11 and 15a with some trees attached. It also would be interesting to know if there are values of \(k\) other than \(5\) for which \(b(n,k)\) is always \(0\).
* One observation from [8, Table 1], which lists the first few values of \(\left(b(n,k)\right)\), is that there are very few values for which \(b(n,k)\) is odd, other than the \(1\)'s at the right-hand end of each row implied by Theorem 4.21. This is also true in our Table 1 above: \(b(6,24)=31\) is the only other odd value here. Can this observation be quantified in some way?
* A final direction of future research could be to restrict ourselves to certain subsets of permutations, for example _derangements_ (permutations with no fixed point). Fixed points seem to play an important role in the enumeration of permutations according to their number of associated CNATs, since a fixed point implies the existence of a \(321\) pattern, which in turn means a \(3\)-cycle in the permutation graph (and cycles play an important role in the ASM). For instance, we know that there are no derangements with \(2\) CNATs (Proposition 4.10), while permutations with \(1\) and \(3\) CNATs are all derangements (Propositions 4.2 and 4.14). It is intriguing to consider other values of \(k\) and check whether or not there are derangements with \(k\) CNATs. Besides derangements, we could also restrict ourselves to permutations containing or avoiding certain patterns.
## Acknowledgments
The first author would like to thank Einar Steingrimsson for helpful discussions that led to the results of Section 3. The research leading to these results received funding from the National Natural Science Foundation of China (NSFC) under Grant Agreement No 12101505.
|
2309.00466 | Submanifolds with constant Moebius curvature and flat normal bundle | We classify isometric immersions $f\colon M^{n}\to \mathbb{R}^{n+p}$, $n \geq
5$ and $2p \leq n$, with constant Moebius curvature and flat normal bundle. | M. S. R. Antas, R. Tojeiro | 2023-09-01T14:05:18Z | http://arxiv.org/abs/2309.00466v1 | # Submanifolds with constant Moebius curvature and flat normal bundle
###### Abstract
We classify isometric immersions \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 5\) and \(2p\leq n\), with constant Moebius curvature and flat normal bundle.
M. S. R. Antas and R. Tojeiro\({}^{*}\)
Footnote 1: Corresponding author
Footnote 2: The first author was supported by FAPESP Grant 2019/04027-7. The second author is partially supported by FAPESP grant 2022/16097-2 and CNPq grant 307016/2021-8.
Data availability statement: Not applicable.
_2020 Mathematics Subject Classification:_ 53B25, 53C40.
_Key words and phrases: Moebius metric, constant Moebius curvature, flat normal bundle, Moebius reducible submanifolds._
## 1 Introduction
In a seminal paper in Moebius Submanifold Geometry, C. P. Wang [22] introduced a Moebius invariant metric \(g^{*}\) on an umbilic-free hypersurface \(f\colon M^{n}\to\mathbb{R}^{n+1}\), called the _Moebius metric_, and a Moebius invariant \(2\)-form \(B\) on \(M^{n}\), the _Moebius second fundamental form_ of \(f\), and proved that, for \(n\geq 4\), the pair \((g^{*},B)\) forms a complete Moebius invariant system which determines the hypersurface up to Moebius transformations (see also Theorem 9.22 in [5] for a general Moebius fundamental theorem for submanifolds of arbitrary dimension and codimension). The corresponding conformal Gauss, Codazzi and Ricci equations involve two other important Moebius invariant tensors named the _Blaschke tensor_ and the _Moebius form_.
C. P. Wang's work motivated several authors to investigate umbilic-free immersions whose associated Moebius invariants have a simple structure or particular natural properties (see, among others, [17], [21], [10], [14], [7], [16], [9] and [13]).
Among the relevant classes of umbilic-free immersions \(f:M^{n}\to\mathbb{R}^{n+p}\) is that of submanifolds with _constant Moebius curvature_, meaning that \((M^{n},g^{*})\) has constant sectional curvatures. A local classification of the umbilic-free hypersurfaces \(f:M^{n}\to\mathbb{R}^{n+1}\), \(n\geq 4\), which have constant Moebius curvature was first obtained in [8], and a somewhat simpler proof was subsequently given in [15]. The classification also applies to the case in which \(n=3\) and \(f\) is assumed to have a principal curvature of multiplicity \(2\).
The starting point for achieving such classification is the fact that every umbilic-free hypersurface with constant Moebius curvature is conformally flat. Recall that a Riemannian manifold \((M^{n},g)\) is _conformally flat_ if every point of \(M^{n}\) has an open neighborhood that is conformal to an open subset of the Euclidean space \(\mathbb{R}^{n}.\) In particular, by a well-known result due to E. Cartan [1], an umbilic-free conformally flat hypersurface \(f:M^{n}\to\mathbb{R}^{n+1},\)\(n\geq 4,\) must have a principal curvature \(\lambda\) of multiplicity \(n-1\). It is not difficult to show that this implies the Moebius form of \(f\) to be closed.
These facts enabled the authors, by means of the method of moving frames, to reduce the problem to proving that a conformally flat Euclidean hypersurface with closed Moebius form must be locally Moebius congruent to either a cylinder over a curve \(\gamma:I\to\mathbb{R}^{2},\) a cylinder over a surface of \(\mathbb{R}^{3}\) which is itself a cone over a curve \(\gamma:I\to\mathbb{S}^{2},\) or a rotation hypersurface over a curve \(\gamma:I\to\mathbb{R}^{2}_{+},\) where \(\mathbb{R}^{2}_{+}\) is regarded as the Poincare half-plane model of the hyperbolic plane. Requiring the sectional curvatures of the Moebius metric to be constant then led to an explicit expression for the curvature of \(\gamma,\) which was called a _curvature spiral_.
In this article we initiate the investigation of umbilic-free immersions \(f:M^{n}\to\mathbb{R}^{n+p},\)\(n\geq 5,\) with constant Moebius curvature and higher codimension. We restrict ourselves in this paper to the case in which the submanifold has flat normal bundle.
Our main result is a classification of the umbilic-free immersions \(f:M^{n}\to\mathbb{R}^{n+p}\) with constant Moebius curvature and flat normal bundle for which \(n\geq 5\) and \(2p\leq n\). See Theorem 5.1 for the precise statement. As an important first step, for any umbilic-free immersion \(f:M^{n}\to\mathbb{R}^{n+p}\) of a conformally flat manifold \(M^{n},\) we determine the intrinsic local structure of \(M^{n}\) endowed with its Moebius metric. We also classify, with no restrictions on the codimension, the isometric immersions \(f:M^{n}\to\mathbb{R}^{n+p},\) \(n\geq 3,\) with constant Moebius curvature and flat normal bundle that have exactly two distinct principal normal vector fields. The latter result already extends (and substantially simplifies the proofs of) the classifications in both [8] and [15] of the umbilic-free hypersurfaces with constant Moebius curvature.
## 2 Preliminaries
In this section we collect several definitions and known results that will be needed in the proofs of our results.
### Principal normal vector fields
A normal vector \(\eta\in N_{f}M(x)\) is said to be a _principal normal vector with multiplicity \(s\)_ of an isometric immersion \(f:M^{n}\to\tilde{M}^{m}\) at \(x\in M^{n}\) if the vector subspace
\[E_{\eta}(x)=\{X\in T_{x}M:\alpha^{f}(X,Y)=\langle X,Y\rangle\eta\ \ \text{ for all }Y\in T_{x}M\}\]
has dimension \(s>0,\) where \(\alpha^{f}\) denotes the second fundamental form of \(f\). A normal vector field \(\eta\in\Gamma(N_{f}M)\) is called a _principal normal vector field_ of \(f\) with multiplicity \(s\) if \(\dim E_{\eta}(x)=s\) for all \(x\in M^{n},\) in which case \(E_{\eta}\) is a smooth distribution. The normal vector field \(\eta\in\Gamma(N_{f}M)\) is said to be _Dupin_ if \(\nabla^{\perp}_{T}\eta=0\) for all \(T\in\Gamma(E_{\eta})\). If, in particular, \(\eta\) is identically zero, then \(E_{\eta}(x)\) is the kernel of \(\alpha_{f}\), called the _relative nullity_ subspace of \(f\) at \(x\) and \(\nu(x):=\dim E_{\eta}(x)\) the _index of relative nullity_ of \(f\) at \(x\).
A smooth distribution \(E\) on a Riemannian manifold \(M^{n}\) is _umbilical_ if there exists a smooth section \(\delta\) of \(E^{\perp}\), named the _mean curvature vector field_ of \(E\), such that
\[\langle\nabla_{T}S,X\rangle=\langle T,S\rangle\langle\delta,X\rangle\]
for all \(T,S\in\Gamma(E)\) and \(X\in\Gamma(E^{\perp}).\) If \(\delta\) is identically zero, then \(E\) is said to be _totally geodesic_. An umbilical distribution on \(M^{n}\) is always integrable and its leaves are umbilical submanifolds of \(M^{n}.\) If, in addition, \((\nabla_{T}\delta)_{E^{\perp}}=0\) for all \(T\in\Gamma(E)\), then \(E\) is said to be _spherical_, and its leaves are called _extrinsic spheres_ of \(M^{n}\).
The following fact is well-known (see, e.g., Proposition 1.22 of [5]).
**Proposition 2.1** ([20]).: _Let \(f:M^{n}\to\mathbb{Q}_{c}^{m}\) be an isometric immersion with a principal normal vector field \(\eta\) of multiplicity \(s>0\). Then the following assertions hold:_
**(i)**: _If_ \(s\geq 2\)_, then_ \(\eta\) _is Dupin._
**(ii)**: _The principal normal vector field_ \(\eta\) _is Dupin if and only if_ \(E_{\eta}\) _is a spherical distribution and_ \(f\) _maps each leaf of_ \(E_{\eta}\) _into an extrinsic sphere of_ \(\mathbb{Q}_{c}^{m}\)_._
A key result in the proof of our main result in this article is the following.
**Theorem 2.1** ([2]).: _Let \(f\colon M^{n}\to\mathbb{R}^{m}\) be an isometric immersion that carries a Dupin principal normal vector field \(\eta\) with multiplicity \(k\). Assume that \(E^{\perp}_{\eta}\) is an umbilical distribution. If \(k=n-1\), suppose further that the integral curves of \(E^{\perp}_{\eta}\) are extrinsic circles of \(M^{n}\). Then \(f(M^{n})\) is, up to a conformal transformation of \(\mathbb{R}^{m}\), an open subset of a submanifold of one of the following types:_
1. \(A\) \(k\)_-cylinder over an isometric immersion_ \(g\colon M^{n-k}\to\mathbb{R}^{m-k}\)_;_
2. \(A\) \((k-1)\)_-cylinder over an isometric immersion_ \(G\colon M^{n-k+1}\to\mathbb{R}^{m-k+1}\) _which is itself a cone over an isometric immersion_ \(g\colon M^{n-k}\to\mathbb{S}^{m-k}\)_;_
3. _A rotation submanifold over an isometric immersion_ \(h\colon M^{n-k}\to\mathbb{R}^{m-k}\)_._
The immersion \(f\) in part \((ii)\) (respectively, part \((iii)\)) of the preceding theorem can be alternatively described in terms of the conformal diffeomorphism \(\Theta:\mathbb{S}^{m-k}\times\mathbb{H}^{k}\to\mathbb{R}^{m}\) (respectively, \(\Theta:\mathbb{S}^{k}\times\mathbb{H}^{m-k}\to\mathbb{R}^{m}\)) defined by \(\Theta(y,z)=(z_{1}y,z_{2},\ldots,z_{k})\) (respectively, \(\Theta(y,z)=(z_{1}y,z_{2},\ldots,z_{m-k})\)). Namely, \(M^{n}\) splits as \(M^{n}=M^{n-k}\times\mathbb{H}^{k}\) (respectively, \(M^{n}=\mathbb{S}^{k}\times M^{n-k}\)) and \(f=\Theta\circ(g\times Id)\) (respectively, \(f=\Theta\circ(Id\times h)\)), where \(\mathbb{H}^{k}\) is given by its half-space model.
### Conformally flat manifolds and submanifolds
A Riemannian manifold \(M^{n}\) is said to be _conformally flat_ if each point \(x\) has a neighborhood which is conformally diffeomorphic to an open subset of Euclidean space \(\mathbb{R}^{n}\). In particular, every Riemannian manifold with constant sectional curvature is conformally flat. We will need the following well-known characterization of conformally flat Riemannian products.
**Proposition 2.2**.: _A Riemannian product \((M_{1},g_{1})\times(M_{2},g_{2})=(M_{1}\times M_{2},g_{1}+g_{2})\) of dimension \(n\geq 3\) is conformally flat if and only if one of the following possibilities holds_
**i)**: \((M_{i},g_{i})\) _is_ \(1\)_-dimensional and_ \((M_{j},g_{j})\)_,_ \(i\neq j\)_, has constant sectional curvature._
**ii)**: \((M_{1},g_{1})\) _and_ \((M_{2},g_{2})\) _are either both flat, or both have dimension at least two and constant sectional curvatures with the same absolute value and opposite signs._
We will also make use of the following extension due to J. D. Moore ([19]) of E. Cartan's theorem on Euclidean conformally flat hypersurfaces of dimension \(n\geq 4\).
**Theorem 2.2**.: _Let \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 4\), be an isometric immersion of a conformally flat manifold. If \(p\leq n-3\), then for each \(x\in M^{n}\) there exists a principal normal vector \(\eta\in N_{f}M(x)\) such that \(\dim\,E_{\eta}(x)\geq n-p\geq 3.\)_
Isometric immersions with flat normal bundle \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 4\), of a conformally flat manifold have been recently studied by Dajczer, Onti and Vlachos [3], who in particular proved the following fact.
**Theorem 2.3**.: _A proper isometric immersion \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 4\), with flat normal bundle of a conformally flat manifold can admit at most one principal normal vector field of multiplicity greater than one._
### Twisted products
Let \(M=\Pi_{i=0}^{k}M_{i}\) be a product of smooth manifolds \(M_{0},\ldots,M_{k}\). A metric \(\langle\,,\,\rangle\) on \(M^{n}\) is called a _twisted product metric_ if there exist Riemannian metrics \(\langle\,,\,\rangle_{i}\) on \(M_{i}\) and smooth maps \(\rho_{i}:M\to\mathbb{R}_{+}\), \(0\leq i\leq k\), such that
\[\langle\,,\,\rangle=\sum_{i=0}^{k}\rho_{i}^{2}\pi_{i}^{*}\langle\,,\,\rangle_ {i},\]
where \(\pi_{i}:M\to M_{i}\) denotes the canonical projection. Then \((M,\langle\,,\,\rangle)\) is said to be a _twisted product_ and is denoted by \({}^{\rho}\prod_{i=0}^{k}(M_{i},\langle\,,\,\rangle_{i})\), where \(\rho=(\rho_{0},\ldots,\rho_{k})\). When \(\rho_{1},\ldots,\rho_{k}\) are independent of \(M_{1},\ldots,M_{k}\), that is, there exist \(\tilde{\rho}_{a}\in C^{\infty}(M_{0})\) such that \(\rho_{a}=\tilde{\rho}_{a}\circ\pi_{0}\) for \(a=1,\ldots,k\) and, in addition, \(\rho_{0}\) is identically \(1\), then \(\langle\,,\,\rangle\) is called a _warped product metric_ and \((M,\langle\,,\,\rangle):=(M_{0},\langle\,,\,\rangle_{0})\times_{\rho}\prod_{a =1}^{k}(M_{a},\langle\,,\,\rangle_{a})\) a _warped product_ with _warping function_\(\rho=(\rho_{1},\ldots,\rho_{k}).\) If \(\rho_{i}\) is identically \(1\) for \(i=0,\ldots,k\), the metric \(\langle\,,\,\rangle\) is a usual _Riemannian product metric_, in which case \((M,\langle\,,\,\rangle)\) is called a _Riemannian product_.
A _net_\(\mathcal{E}=(E_{i})_{i=0}^{r}\) on a differentiable manifold \(M\) is a splitting \(TM=\oplus_{i=0}^{r}E_{i}\) of its tangent bundle into a family of integrable distributions. A net \(\mathcal{E}=(E_{i})_{i=0}^{r}\) on a Riemannian manifold \(M\) is called an _orthogonal net_ if the distributions of \(\mathcal{E}\) are mutually orthogonal.
Let \(M\) be a product manifold. The _product net_ of \(M\), \(\mathcal{E}=(E_{i})_{i=0}^{k}\), is defined by
\[E_{i}(x)=\tau_{i}^{x}\,{}_{*}T_{x_{i}}M_{i},\quad 0\leq i\leq k,\]
for any \(x=(x_{0},\ldots,x_{k})\in M\), where \(\tau_{i}^{x}:M_{i}\to M\) is the inclusion of \(M_{i}\) into \(M\) given by
\[\tau_{i}^{x}(\bar{x}_{i})=(x_{0},\ldots,\bar{x}_{i},\ldots,x_{k}),\quad 0\leq i \leq k.\]
The relation between the Levi-Civita connections of a twisted-product metric and of the corresponding Riemannian product metric is as follows (see [18]).
**Proposition 2.3** ([18]).: _Let \((M,\langle\,,\,\rangle)={}^{\rho}\prod_{i=0}^{k}(M_{i},\langle\,,\,\rangle,_{i})\) be a twisted product with twist function \(\rho=(\rho_{0},\ldots,\rho_{k})\) and product net \(\mathcal{E}=(E_{i})_{i=0}^{k}\). Let \(\nabla\) and \(\tilde{\nabla}\) be the Levi-Civita connections of \(\langle\,,\,\rangle\) and of the product metric \(\langle\,,\,\rangle\), respectively, and let \(U_{i}=-\text{grad}\,(\log\circ\rho_{i})\), \(0\leq i\leq k\), where the gradient is calculated with respect to \(\langle\,,\,\rangle.\) Then_
\[\nabla_{X}Y=\tilde{\nabla}_{X}Y+\sum_{i=0}^{k}(\langle X^{i},Y^{i}\rangle U_{ i}-\langle X,U_{i}\rangle Y^{i}-\langle Y,U_{i}\rangle X^{i}), \tag{1}\]
_where \(X\mapsto X^{i}\) is the orthogonal projection over \(E_{i}.\)_
It follows from (1) that
\[(\nabla_{X_{i}}Y_{i})_{E_{i}^{\perp}}=\langle X_{i},Y_{i}\rangle(U_{i})_{E_{i }^{\perp}}. \tag{2}\]
for all \(X_{i},Y_{i}\in E_{i}\). Therefore, \(E_{i}\) is an umbilical distribution with mean curvature vector field \((U_{i})_{E_{i}^{\perp}}\). It also follows immediately from (1) that \(E_{i}^{\perp}=\oplus_{\begin{subarray}{c}j=0\\ j\neq i\end{subarray}}^{k}E_{j}\) is integrable for all \(0\leq i\leq k\).
An orthogonal net \(\mathcal{E}=(E_{i})_{i=0}^{k}\) on a Riemannian manifold \(M\) is called a TP-net if \(E_{i}\) is umbilical and \(E_{i}^{\perp}\) is integrable for all \(i=0,\ldots,k\). In particular, the product net of a twisted product is a TP-net.
Let \(\mathcal{F}=\{F_{i}\}_{i=0}^{k}\) be a net on a smooth manifold \(N\). Then \(\bar{\Phi}\colon M:=\prod_{i=0}^{k}M_{i}\to N\) is called a _product representation_ of \(\mathcal{F}\) if \(\bar{\Phi}\) is a diffeomorphism and \(\bar{\Phi}_{*}E_{i}(x)=F_{i}(\bar{\Phi}(x))\) for each \(x\in M\) and each \(i=0,\ldots,k\), where \(\mathcal{E}=(E_{i})_{i=0}^{k}\) is the product net of \(M\).
The following de Rham-type decomposition theorem was proved in [18].
**Theorem 2.4**.: _Let \(\mathcal{E}=(E_{i})_{i=0}^{k}\) be a TP-net on a Riemannian manifold \(M\). Then, for each \(x\in M\), there exists a local product representation \(\bar{\Phi}:\prod_{i=0}^{k}M_{i}\to U\) of \(\mathcal{E}\), with \(x\in U\subset M\), which is an isometry with respect to a twisted product metric on \(\prod_{i=0}^{k}M_{i}.\)_
### Moebius invariants for Euclidean submanifolds
Let \(\mathbb{L}^{m+2}\) be the _Minkowski space_ of dimension \(m+2\), that is, \(\mathbb{R}^{m+2}\) endowed with the inner-product
\[\langle v,w\rangle=-v_{0}w_{0}+v_{1}w_{1}+\ldots+v_{m+1}w_{m+1}\]
for all \(v=(v_{0},\ldots,v_{m+1})\), \(w=(w_{0},\ldots,w_{m+1})\in\mathbb{R}^{m+2}\). The _light cone_\(\mathbb{V}^{m+1}\) of \(\mathbb{L}^{m+2}\) is the upper half of
\[\{v\in\mathbb{L}^{m+2}:\langle v,v\rangle=0\,\,\,\text{and}\,\,\,v\neq 0\},\]
restricted to which the inner-product of \(\mathbb{L}^{m+2}\) is degenerate. Its subset
\[\mathbb{E}^{m}=\mathbb{E}^{m}_{w}=\{p\in\mathbb{V}^{m+1}:\langle p,w\rangle=1\},\]
gives rise to a model of the \(m\)-dimensional Euclidean space \(\mathbb{R}^{m}\) for each fixed \(w\in\mathbb{V}^{m+1}\). Indeed, for any \(p_{0}\in\mathbb{E}^{m}\) and any linear isometry \(C:\mathbb{R}^{m}\to(\text{span}\{p_{0},w\})^{\perp}\subset\mathbb{L}^{m+2}\), the map \(\Psi=\Psi_{p_{0},w,C}:\mathbb{R}^{m}\to\mathbb{L}^{m+2}\), given by
\[\Psi(x)=p_{0}+Cx-\frac{1}{2}||x||^{2}w, \tag{3}\]
is an isometric embedding such that \(\Psi(\mathbb{R}^{m})=\mathbb{E}^{m}.\) The position vector field \(\Psi\) is a light-like parallel normal vector field along \(\Psi,\) whose second fundamental form is
\[\alpha^{\Psi}(X,Y)=-\langle X,Y\rangle w\]
for all \(X,Y\in\mathfrak{X}(\mathbb{R}^{m}).\)
Now let \(f:M^{n}\to\mathbb{R}^{m}\) be an immersion free of umbilical points. Then, the function
\[\rho^{2}=\frac{n}{n-1}(||\alpha^{f}||^{2}-n||\mathcal{H}^{f}||^{2})\]
does not vanish on \(M^{n},\) where \(\mathcal{H}^{f}\) and \(||\alpha^{f}||\) stand for the mean curvature vector field of \(f\) and the norm of the second fundamental form of \(f,\) respectively. The _Moebius lift_ of \(f\) is the immersion \(F:M\to\mathbb{V}^{m+1}\subset\mathbb{L}^{m+2}\) given by \(F=\rho\,\Psi\circ f,\) whose induced metric is
\[\langle\,,\,\rangle^{*}=\rho^{2}\langle\,,\,\rangle_{f}. \tag{4}\]
The metric (4) is called the _Moebius metric_ determined by \(f\). It was proved in [22] that the Moebius metric is invariant under conformal transformations of \(\mathbb{R}^{m}.\)
_The Moebius second fundamental form_ of \(f:M^{n}\to\mathbb{R}^{m}\) is the symmetric section \(\beta=\beta^{f}\in\mathrm{Hom}^{2}(TM,TM;N_{f}M)\) defined by
\[\beta(X,Y)=\rho(\alpha^{f}(X,Y)-\langle X,Y\rangle\mathcal{H}^{f})\]
for all \(X,Y\in\mathfrak{X}(M)\). Notice that \(\beta\) is traceless and that its norm with respect to \(\langle\,,\,\rangle^{*}\) is \(||\beta||_{*}=\sqrt{\frac{n-1}{n}}\). Associated with \(\beta\) one defines the _Moebius third fundamental form_\(III_{\beta}:\mathfrak{X}(M)\times\mathfrak{X}(M)\to\mathbb{R}\) by
\[III_{\beta}(X,Y)=\sum_{i=1}^{n}\langle\beta(X,X_{i}),\beta(Y,X_{i})\rangle, \tag{5}\]
where \(X_{1},\ldots,X_{n}\) is an orthonormal frame with respect to \(\langle\,,\,\rangle^{*}.\)
The _Blaschke tensor_\(\psi=\psi^{f}\) of \(f\) is the symmetric \(C^{\infty}(M)\)-bilinear form given by
\[\psi(X,Y)=\frac{1}{\rho}\langle\beta^{f}(X,Y),\mathcal{H}^{f}\rangle+\frac{1 }{2\rho^{2}}(||\mathrm{grad}\,^{*}\rho||_{*}^{2}+||\mathcal{H}^{f}||^{2}) \langle X,Y\rangle^{*}-\frac{1}{\rho}\mathrm{Hess}\,^{*}\rho(X,Y)\]
and its _Moebius form_\(\omega=\omega^{f}\) is the normal bundle valued one-form defined by
\[\omega(X)=-\frac{1}{\rho}(\nabla_{X}^{\perp}\mathcal{H}^{f}+\beta(X,\mathrm{ grad}\,^{*}\rho)),\]
where \(\mathrm{grad}^{*}\) and \(\mathrm{Hess}^{*}\) denote the gradient and the Hessian on \((M^{n},\langle\,,\,\rangle^{*}).\)
**Proposition 2.4** ([22]).: _The Blaschke tensor is given in terms of the Moebius metric and the Moebius third fundamental form by_
\[(n-2)\psi(X,Y)=\text{Ric}^{*}(X,Y)+III_{\beta}(X,Y)-\frac{n^{2}s^{*}+1}{2n} \langle X,Y\rangle^{*}, \tag{6}\]
_for all \(X,Y\in\mathfrak{X}(M)\), where \(\text{Ric}^{*}\) and \(s^{*}=\frac{1}{n(n-1)}\text{tr}\text{Ric}^{*}\) are the Ricci curvature and the scalar curvature of \((M^{n},\langle\,,\,\rangle^{*}).\) In particular,_
\[\text{tr}\,\psi=\frac{n^{2}s^{*}+1}{2n}=\frac{n}{2}\langle\mathcal{H}^{F}, \mathcal{H}^{F}\rangle. \tag{7}\]
**Proposition 2.5** ([22]).: _The following equations hold:_
**(i)**: _The conformal Gauss equation:_
\[\langle R^{*}(X,Y)Z,W\rangle^{*}= \langle\beta(X,W),\beta(Y,Z)\rangle-\langle\beta(X,Z),\beta(Y,W)\rangle\] \[+\psi(X,W)\langle Y,Z\rangle^{*}+\psi(Y,Z)\langle X,W\rangle^{*}\] \[-\psi(X,Z)\langle Y,W\rangle^{*}-\psi(Y,W)\langle X,Z\rangle^{*} \tag{8}\]
_for all_ \(X,Y,Z,W\in\mathfrak{X}(M)\)_._
**(ii)**: _The Codazzi conformal equations:_
\[(^{f}\nabla^{\perp}_{X}\beta)(Y,Z)-(^{f}\nabla^{\perp}_{Y}\beta)(X,Z)=\omega( (X\wedge Y)Z) \tag{9}\]
_and_
\[(\nabla^{*}_{X}\psi)(Y,Z)-(\nabla^{*}_{Y}\psi)(X,Z)=\langle\omega(Y),\beta(X,Z)\rangle-\langle\omega(X),\beta(Y,Z)\rangle \tag{10}\]
_for all_ \(X,Y,Z\in\mathfrak{X}(M),\) _where_
\[(^{f}\nabla^{\perp}_{X}\beta)(Y,Z)={}^{f}\nabla^{\perp}_{X}\beta(Y,Z)-\beta (\nabla^{*}_{X}Y,Z)-\beta(Y,\nabla^{*}_{X}Z),\]
\[(\nabla^{*}_{X}\psi)(Y,Z)=X(\psi(Y,Z))-\psi(\nabla^{*}_{X}Y,Z)-\psi(Y,\nabla^{ *}_{X}Z)\]
_and_
\[(X\wedge Y)Z=\langle Y,Z\rangle^{*}X-\langle X,Z\rangle^{*}Y.\]
**(iii)**: _The conformal Ricci equations:_
\[d\omega(X,Y)=\beta(Y,\hat{\psi}X)-\beta(X,\hat{\psi}Y) \tag{11}\]
_and_
\[\langle R^{\perp}(X,Y)\xi,\eta\rangle=\langle[B_{\xi},B_{\eta}]X,Y\rangle^{*} \tag{12}\]
_for all_ \(X,Y\in\mathfrak{X}(M)\) _and_ \(\xi,\eta\in\Gamma(N_{f}M)\)_, with_ \(\hat{\psi}\in\Gamma(\text{End}(TM))\) _given by_
\[\langle\hat{\psi}X,Y\rangle^{*}=\psi(X,Y).\]
**Theorem 2.5** ([22]).: _Let \(f,g:M^{n}\to\mathbb{R}^{m}\), \(n\geq 2\), be immersions free of umbilical points. Then there exists a conformal transformation \(\tau:\mathbb{R}^{m}\to\mathbb{R}^{m}\) such that \(g=\tau\circ f\) if and only if \(f\) and \(g\) share the same Moebius metric and there exists a vector bundle isometry \(\mathcal{T}:N_{f}M\to N_{g}M\) such that_
\[\mathcal{T}^{f}\nabla^{\perp}={}^{g}\nabla^{\perp}\mathcal{T}\quad\text{and} \quad\mathcal{T}\circ\beta^{f}=\beta^{g}.\]
### Submanifolds with flat normal bundle
An isometric immersion \(f:M^{n}\to\mathbb{Q}^{m}_{c}\) is said to have _flat normal bundle_ if the curvature tensor of its normal bundle vanishes identically. It is well-known (see [20]) that, under this assumption, at each \(x\in M^{n}\) the tangent space \(T_{x}M\) decomposes orthogonally as \(T_{x}M=E_{\eta_{1}}(x)\oplus\ldots\oplus E_{\eta_{s}}(x)\), where \(s=s(x)\) and \(\eta_{1},\ldots,\eta_{s}\) are the distinct principal
normal vectors of \(f\) at \(x\). If \(s(x)=k\in\mathbb{N}\) for all \(x\in M^{n}\), then \(f\) is said to be _proper_. In this case, the Codazzi equations of \(f\) are equivalent to
\[\langle X_{j},Y_{j}\rangle\nabla^{\perp}_{X_{l}}\eta_{j} =\langle\nabla_{X_{j}}Y_{j},X_{i}\rangle(\eta_{j}-\eta_{i}), \tag{13}\] \[\langle\nabla_{X_{j}}X_{i},X_{l}\rangle(\eta_{i}-\eta_{l}) =\langle\nabla_{X_{i}}X_{j},X_{l}\rangle(\eta_{j}-\eta_{l}), \tag{14}\]
if \(X_{i}\in\Gamma(E_{\eta_{i}})\), \(X_{j},Y_{j}\in\Gamma(E_{\eta_{j}})\) and \(X_{l}\in\Gamma(E_{\eta_{l}})\) for \(1\leq i\neq j\neq l\neq i\leq k\). By the Gauss equation, the sectional curvature of \(M^{n}\) at \(x\) along the plane \(\sigma\) spanned by \(X\in E_{\eta_{i}}\) and \(Y\in E_{\eta_{j}}\) is
\[K(\sigma)=c+\langle\eta_{i}(x),\eta_{j}(x)\rangle. \tag{15}\]
Let \(f:M^{n}\to\mathbb{R}^{n+p}\) be a proper isometric immersion with flat normal bundle and closed Moebius form. Let \(\eta_{1},\ldots,\eta_{k}\) be the principal normal vector fields of \(f\) associated with the smooth distributions \(E_{\eta_{1}},\ldots,E_{\eta_{k}}\). Given unit vector fields \(X_{i}\in\Gamma(E_{\eta_{i}})\) and \(X_{j}\in\Gamma(E_{\eta_{j}})\), since \(\langle\,,\,\rangle^{*}=\rho^{2}\langle\,,\,\rangle_{f}\) and \(\beta=\rho(\alpha^{f}-\langle\,,\,\rangle_{f}\mathcal{H}^{f})\), then \(\bar{X}_{i}=\rho^{-1}X_{i}\) and \(\bar{X}_{j}=\rho^{-1}X_{j}\) are unit vector fields with respect to \(\langle\,,\,\rangle^{*}\) such that
\[\beta(\bar{X}_{i},\bar{X}_{j}) =\rho(\rho^{-2}\alpha^{f}(X_{i},X_{j})-\rho^{-2}\delta_{ij} \mathcal{H}^{f})\] \[=\delta_{ij}\rho^{-1}(\eta_{i}-\mathcal{H}^{f}).\]
We call the normal vector fields \(\bar{\eta}_{i}=\rho^{-1}(\eta_{i}-\mathcal{H}^{f})\), \(1\leq i\leq k\), the _Moebius principal normal vector fields_ of \(f\).
## 3 Examples
We start this section by computing the Moebius metric of an isometric immersion \(f:M^{n}\to\mathbb{R}^{n+p}\) that is constructed as in Theorem 2.1 in terms of an isometric immersion \(g:M^{p-\ell}_{\tilde{c}}\to\mathbb{Q}^{2p-\ell}_{\tilde{c}}\), \(0\leq\ell\leq p-1\), of a Riemannian manifold with constant sectional curvature \(\tilde{c}\) that has flat normal bundle and vanishing index of relative nullity. Then we establish a condition that the mean curvature vector field \(\mathcal{H}^{g}\) of \(g\) must satisfy in order that \(f\) has constant Moebius curvature \(c\).
Observe that, if \(g:M^{p-\ell}_{\tilde{c}}\to\mathbb{Q}^{2p-\ell}_{\tilde{c}}\), \(0\leq\ell\leq p-1\), is an isometric immersion with flat normal bundle and vanishing index of relative nullity, by the Gauss equation (15) there exist \(p-\ell\) distinct nowhere-vanishing pairwise orthogonal principal normal vector fields \(\eta_{1},\ldots,\eta_{p-\ell}\).
**Examples 3.1**.: \((i)\) Let \(g:M^{p-\ell}\to\mathbb{R}^{2p-\ell}\), \(0\leq\ell\leq p-1\), be an immersion with flat normal bundle, vanishing index of relative nullity and flat induced metric \(ds^{2}\). By the _cylinder in \(\mathbb{R}^{n+p}\) over \(g\)_ we mean the immersion given by
\[f=g\times\mathrm{Id}:M^{p-\ell}\times\mathbb{R}^{n-p+\ell}\to\mathbb{R}^{2p- \ell}\times\mathbb{R}^{n-p+\ell}=\mathbb{R}^{n+p},\]
where \(\mathrm{Id}:\mathbb{R}^{n-p+\ell}\to\mathbb{R}^{n-p+\ell}\) is the identity map. Let \(\eta_{1},\ldots,\eta_{p-\ell}\) be the nowhere vanishing pairwise orthogonal principal normal vector fields of \(g\) and let \(X_{1},\ldots,X_{p-\ell}\) be an orthonormal frame of \((M^{p-\ell},ds^{2})\) such that \(\alpha^{g}(X_{i},X_{j})=\delta_{ij}\eta_{i}\) for \(1\leq i,j\leq p-\ell\). Then
\[\rho^{2}=\frac{n}{n-1}(||\alpha^{f}||^{2}-n||\mathcal{H}^{f}||^{2})=\frac{n}{n -1}(||\alpha^{g}||^{2}-\frac{1}{n}||\alpha^{g}||^{2})=||\alpha^{g}||^{2}=(p- \ell)^{2}||\mathcal{H}^{g}||^{2}.\]
The induced metric by \(f\) is \(\langle\,,\,\rangle_{f}=ds^{2}+du^{2}_{p-\ell+1}+\ldots+du^{2}_{n}\), where \((u_{p-\ell+1},\ldots,u_{n})\) are the canonical coordinates on \(\mathbb{R}^{n-p+\ell}\). Thus the Moebius metric determined by \(f\) is
\[\langle\,,\,\rangle^{*}=(p-\ell)^{2}||\mathcal{H}^{g}||^{2}(ds^{2}+du^{2}_{p- \ell+1}+\ldots+du^{2}_{n}).\]
\((ii)\) Let \(g:M^{p-\ell}\rightarrow\mathbb{S}^{2p-\ell}\), \(0\leq\ell\leq p-1\), be an isometric immersion with flat normal bundle, vanishing index of relative nullity and induced metric \(ds^{2}\) of constant sectional curvature \(1\). The _generalized cone in \(\mathbb{R}^{n+p}\) over \(g\)_ is the immersion \(f\colon M^{p-\ell}\times\mathbb{H}^{n-p+\ell}\rightarrow\mathbb{R}^{n+p}\) given by
\[f(x,z)=\Theta\circ(g,\mathrm{Id})(x,z)=(z_{1}g(x),z_{2},\ldots,z_{n-p+\ell}),\]
where \(\mathbb{H}^{n-p+\ell}=\mathbb{R}_{+}\times\mathbb{R}^{n-p+\ell-1}\) is endowed with the hyperbolic metric
\[dz^{2}=\frac{1}{z_{1}^{2}}(dz_{1}^{2}+\ldots+dz_{n-p+\ell}^{2}),\]
\(\mathrm{Id}:\mathbb{H}^{n-p+\ell}\rightarrow\mathbb{H}^{n-p+\ell}\) is the identity map and \(\Theta:\mathbb{S}^{2p-\ell}\times\mathbb{H}^{n-p+\ell}\rightarrow\mathbb{R}^ {n+p}\) is the conformal diffeomorphism defined by
\[\Theta(y,z)=(z_{1}y,z_{2},\ldots,z_{n-p+\ell}),\]
where its conformal factor is \(z_{1}.\) The induced metric by \(f\) is
\[\langle\,,\,\rangle_{f}=z_{1}^{2}(ds^{2}+dz^{2}).\]
Let \(\eta_{1},\ldots,\eta_{p-\ell}\) be the nowhere vanishing pairwise orthogonal principal normal vector fields of \(g\) and let \(X_{1},\ldots,X_{p-\ell}\) be an orthonormal frame of \((M^{p-\ell},ds^{2})\) such that \(\alpha^{g}(X_{i},X_{j})=\delta_{ij}\eta_{i}\) for \(1\leq i,j\leq p-\ell\). The second fundamental form of \(f\) is given by
\[\alpha^{f}\left(\frac{X_{i}}{z_{1}},\frac{X_{j}}{z_{1}}\right)=\frac{1}{z_{1} }\alpha^{g}(X_{i},X_{j}),\quad\alpha^{f}\left(\frac{d}{dz_{1}},\frac{d}{dz_{1} }\right)=0=\alpha^{f}(\frac{d}{dz_{1}},\frac{X_{i}}{z_{1}}).\]
Thus
\[\rho^{2} =\frac{n}{n-1}(||\alpha^{f}||^{2}-n||\mathcal{H}^{f}||^{2})\] \[=\frac{n}{n-1}(\sum_{i=1}^{p-\ell}||\alpha^{f}(\frac{X_{i}}{z_{1} },\frac{X_{i}}{z_{1}})||^{2}-n||\frac{1}{n}\sum_{i=1}^{p-\ell}\alpha^{f}(\frac {X_{i}}{z_{1}},\frac{X_{i}}{z_{1}})||^{2})\] \[=\frac{n}{n-1}(\frac{1}{z_{1}^{2}}\sum_{i=1}^{p-\ell}||\alpha^{g} (X_{i},X_{i})||^{2}-\frac{1}{n}||\frac{1}{z_{1}}\sum_{i=1}^{p-\ell}\alpha^{g} (X_{i},X_{i})||^{2})\] \[=\frac{1}{z_{1}^{2}}\frac{n}{n-1}(\sum_{i=1}^{p-\ell}||\alpha^{g} (X_{i},X_{i})||^{2}-\frac{1}{n}\sum_{i=1}^{p-\ell}||\alpha^{g}(X_{i},X_{i})|| ^{2})\] \[=\frac{1}{z_{1}^{2}}\sum_{i=1}^{p-\ell}||\alpha^{g}(X_{i},X_{i}) ||^{2}=\frac{1}{z_{1}^{2}}(p-\ell)^{2}||\mathcal{H}^{g}||^{2}.\]
Therefore,
\[\rho=\frac{(p-\ell)||\mathcal{H}^{g}||}{z_{1}},\]
and the Moebius metric determined by \(f\) is
\[\langle\,,\,\rangle^{*}=(p-\ell)^{2}||\mathcal{H}^{g}||^{2}(ds^{2}+dz^{2}).\]
\((iii)\) Let \(g:M^{p-\ell}\rightarrow\mathbb{H}^{2p-\ell}\), \(0\leq\ell\leq p-1\), be an isometric immersion with flat normal bundle, vanishing index of relative nullity and whose induced metric \(ds^{2}\) has constant sectional curvature \(-1\), where
\[\mathbb{H}^{2p-\ell}=\{(z_{1},\ldots,z_{2p-\ell}):z_{2p-\ell}>0\}\]
is endowed with the hyperbolic metric
\[dz^{2}=\frac{1}{z_{2p-\ell}^{2}}(dz_{1}^{2}+\ldots+dz_{2p-\ell}^{2}).\]
The _rotational submanifold over_\(g\) is the immersion \(f:M^{p-\ell}\times\mathbb{S}^{n-p+\ell}\to\mathbb{R}^{n+p}\) defined by
\[f(x,y)=\Theta\circ(g,\operatorname{Id})(x,y)=(g_{1}(x),\ldots,g_{2p-\ell-1}(x),g_{2p-\ell}(x)y),\]
where \(\operatorname{Id}:\mathbb{S}^{n-p+\ell}\to\mathbb{S}^{n-p+\ell}\) is the identity map and \(\Theta:\mathbb{H}^{2p-\ell}\times\mathbb{S}^{n-p+\ell}\to\mathbb{R}^{n+p}\) is the conformal diffeomorphism
\[\Theta(z,y)=(z_{1},\ldots,z_{2p-\ell-1},z_{2p-\ell}y),\]
whose conformal factor is \(z_{2p-\ell}\).
As a consequence, the induced metric by \(f\) is
\[\langle\,,\,\rangle_{f}=g_{2p-\ell}^{2}(ds^{2}+dy^{2}),\]
where \(dy^{2}\) is the canonical metric of \(\mathbb{S}^{n-p+\ell}\).
Again, let \(\eta_{1},\ldots,\eta_{p-\ell}\) be the nowhere vanishing pairwise orthogonal principal normal vector fields of \(g\) and let \(X_{1},\ldots,X_{p-\ell}\) be an orthonormal frame of \((M^{p-\ell},ds^{2})\) such that \(\alpha^{g}(X_{i},X_{j})=\delta_{ij}\eta_{i}\) for \(1\leq i,j\leq p-\ell\).
The second fundamental forms of \(f\) and \(g\) are related by
\[\alpha^{f}=\Theta_{*}\alpha^{g}-\frac{1}{g_{2p-\ell}}\Theta_{*}((\operatorname {grad}z_{2p-\ell})\circ g)^{\perp}(ds^{2}+dy^{2}).\]
Here \(\operatorname{grad}\) is computed with respect to \(dz^{2}\). In particular,
\[\alpha^{f}(\frac{X_{i}}{g_{2p-\ell}},\frac{X_{j}}{g_{2p-\ell}}) =\begin{cases}\Theta_{*}\alpha^{g}(\frac{X_{i}}{g_{2p-\ell}},\frac {X_{i}}{g_{2p-\ell}})-\frac{1}{g_{2p-\ell}^{3}}\Theta_{*}((\operatorname{grad} z_{2p-\ell})\circ g)^{\perp},\quad i=j,\\ 0,\quad i\neq j,\end{cases}\] \[\alpha^{f}(\frac{1}{g_{2p-\ell}}\frac{\partial}{\partial y_{k}}, \frac{1}{g_{2p-\ell}}\frac{\partial}{\partial y_{r}}) =\begin{cases}-\frac{1}{g_{2p-\ell}^{2}}\Theta_{*}(( \operatorname{grad}z_{2p-\ell})\circ g)^{\perp},\quad k=r\\ 0,\quad k\neq r\end{cases}.\]
We have
\[||\alpha^{f}||^{2} =\sum_{i=1}^{p-\ell}||\alpha^{f}(\frac{X_{i}}{g_{2p-\ell}},\frac{ X_{i}}{g_{2p-\ell}})||^{2}+\sum_{i=p-\ell+1}^{n}||\alpha^{f}(\frac{1}{g_{2p- \ell}}\frac{\partial}{\partial y_{i}},\frac{1}{g_{2p-\ell}}\frac{\partial}{ \partial y_{i}})||^{2}\] \[=\sum_{i=1}^{p-\ell}||\alpha^{g}(\frac{X_{i}}{g_{2p-\ell}},\frac{ X_{i}}{g_{2p-\ell}})-\frac{1}{g_{2p-\ell}^{3}}((\operatorname{grad}z_{2p- \ell})\circ g)^{\perp}||^{2}_{\Theta}+\sum_{i=p-\ell+1}^{n}||\frac{1}{g_{2p- \ell}^{3}}((\operatorname{grad}z_{2p-\ell})\circ g)^{\perp}||^{2}_{\Theta}\] \[=\frac{1}{g_{2p-\ell}^{4}}\sum_{i=1}^{p-\ell}||\alpha^{g}(X_{i},X_ {i})||^{2}_{\Theta}-\frac{2}{g_{2p-\ell}^{5}}\sum_{i=1}^{p-\ell}\langle\alpha^ {g}(X_{i},X_{i}),((\operatorname{grad}z_{2p-\ell})\circ g)^{\perp}\rangle_{\Theta}\] \[+\frac{n}{g_{2p-\ell}^{6}}||((\operatorname{grad}z_{2p-\ell}) \circ g)^{\perp}||^{2}_{\Theta}\]
and
\[||\mathcal{H}^{f}||^{2} =||\frac{1}{n}\sum_{i=1}^{p-\ell}\alpha^{g}(\frac{X_{i}}{g_{2p-\ell }},\frac{X_{i}}{g_{2p-\ell}})-\frac{1}{g_{2p-\ell}^{3}}((\operatorname{grad}z_{2 p-\ell})\circ g)^{\perp}||_{\Theta}^{2}\] \[=\frac{1}{n^{2}}\frac{1}{g_{2p-\ell}^{4}}\sum_{i=1}^{p-\ell}|| \alpha^{g}(X_{i},X_{i})||_{\Theta}^{2}-\frac{2}{ng_{2p-\ell}^{5}}\sum_{i=1}^{p- \ell}\langle\alpha^{g}(X_{i},X_{i}),((\operatorname{grad}z_{2p-\ell})\circ g)^ {\perp}\rangle_{\Theta}\] \[+\frac{1}{g_{2p-\ell}^{6}}||((\operatorname{grad}z_{2p-\ell}) \circ g)^{\perp}||_{\Theta}^{2}.\]
Therefore
\[\rho^{2}=\frac{n}{n-1}(||\alpha^{f}||^{2}-n||\mathcal{H}^{f}||^{ 2}) =\frac{1}{g_{2p-\ell}^{4}}\frac{n}{n-1}(\sum_{i=1}^{p-\ell}|| \alpha^{g}(X_{i},X_{i})||_{\Theta}^{2}-\frac{1}{n}\sum_{i=1}^{p-\ell}||\alpha^ {g}(X_{i},X_{i})||_{\Theta}^{2})\] \[=\frac{1}{g_{2p-\ell}^{4}}\sum_{i=1}^{p-\ell}||\alpha^{g}(X_{i},X _{i})||_{\Theta}^{2}\] \[=\frac{1}{g_{2p-\ell}^{2}}\sum_{i=1}^{p-\ell}||\alpha^{g}(X_{i},X _{i})||_{dx^{2}}^{2}=\frac{(p-\ell)^{2}||\mathcal{H}^{g}||_{dz^{2}}^{2}}{g_{2p -\ell}^{2}},\]
that is,
\[\rho=\frac{(p-\ell)||\mathcal{H}^{g}||_{dz^{2}}}{g_{2p-\ell}}.\]
We conclude that the Moebius metric determined by \(f\) is given by
\[\langle\,,\,\rangle^{*}=(p-\ell)^{2}||\mathcal{H}^{g}||_{dz^{2}}^{2}(ds^{2}+ dy^{2}).\]
**Lemma 3.1**.: _Let \(g:M^{p-\ell}\to\mathbb{Q}_{\tilde{c}}^{2p-\ell}\), \(0\leq\ell\leq p-2\), be an isometric immersion with flat normal bundle and vanishing index of relative nullity. Then the immersion \(f=\Theta\circ(g,\text{Id})\colon M^{p-\ell}\times\mathbb{Q}_{-\tilde{c}}^{n-p+ \ell}\to\mathbb{R}^{n+p}\), where \(\Theta=\text{Id}\colon\mathbb{R}^{n+p}\to\mathbb{R}^{n+p}\) if \(\tilde{c}=0\) or \(\Theta\) is as in Examples 3.1-\((ii)\) and \((iii)\) if \(\tilde{c}\neq 0\), has constant Moebius curvature \(c\) if and only if the induced metric \(ds^{2}\) by \(g\) has constant sectional curvature \(\tilde{c}\) and_
\[\text{Hess}\,(1/||\mathcal{H}^{g}||)+\tilde{c}(1/||\mathcal{H}^{g}||)ds^{2}=0 \quad\text{and}\quad||\text{grad}\,(1/||\mathcal{H}^{g}||)||^{2}+\tilde{c}(1/|| \mathcal{H}^{g}||^{2})=-(p-\ell)^{2}c, \tag{16}\]
_where grad, Hess and \(||\cdot||\) are computed with respect to \(ds^{2}\)._
We make use of the following well-known fact.
**Lemma 3.2**.: _Let \((M^{p},g_{1})\) and \((N^{n-p},g_{2})\) be Riemannian manifolds with \(n-2\geq p\geq 1\). The warped metric \(g_{1}+\mu^{2}g_{2}\) on \(M^{p}\times N^{n-p}\), for \(\mu\in C^{\infty}(M^{p})\), has constant sectional curvature \(c\) if and only if_
1. \(g_{1}\) _has constant sectional curvature_ \(c\) _(for_ \(p\geq 2\)_);_
2. \(\text{Hess}\,\mu+c\mu g_{1}=0\)_;_
3. \(g_{2}\) _has constant sectional curvature_ \(||\text{grad}\,\mu||^{2}+c\mu^{2}\)_,_
_where Hess and grad are computed with respect to \(g_{1}\)._
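Since Lemma 3.2 is invoked repeatedly below, here is a quick symbolic sanity check (a SymPy sketch; the warped-product models of the space forms used in it are classical examples chosen by us, not taken from the text) of conditions (2) and (3) in the simplest case \(p=1\), \(g_{1}=dt^{2}\):

```python
import sympy as sp

t = sp.Symbol('t', real=True)

# (c, warping function mu, expected constant curvature of the fibre metric g2)
models = [
    (1,  sp.sin(t),   1),   # round sphere:     dt^2 + sin^2(t)  * (unit-curvature fibre)
    (-1, sp.cosh(t), -1),   # hyperbolic space: dt^2 + cosh^2(t) * (fibre of curvature -1)
    (-1, sp.exp(t),   0),   # hyperbolic space: dt^2 + e^{2t}    * (flat fibre)
]

for c, mu, fibre_curv in models:
    # condition (2): Hess mu + c*mu*g1 = 0 reduces to mu'' + c*mu = 0 on a line
    assert sp.simplify(sp.diff(mu, t, 2) + c*mu) == 0
    # condition (3): the fibre has constant curvature ||grad mu||^2 + c*mu^2
    assert sp.simplify(sp.diff(mu, t)**2 + c*mu**2 - fibre_curv) == 0
print("Lemma 3.2 conditions verified for the three model warped products")
```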
Proof.: We start by proving the converse assertion. Assume \((M^{p-\ell},ds^{2})\) has constant curvature \(\tilde{c}\). By Examples 3.1, the Moebius metric determined by \(f\) is given by
\[\langle\,,\,\rangle^{*}=(p-\ell)^{2}||\mathcal{H}^{g}||^{2}(ds^{2}+\sigma_{- \tilde{c}}), \tag{17}\]
where \(\sigma_{-\tilde{c}}\) stands for the metric of \(\mathbb{Q}_{-\tilde{c}}^{n-p+\ell}\). Write
\[\langle\,,\,\rangle^{*}=g_{1}+\mu^{2}g_{2},\]
where \(\mu=(p-\ell)||\mathcal{H}^{g}||,\)\(g_{1}=\mu^{2}ds^{2}\) and \(g_{2}=\sigma_{-\tilde{c}}.\) It follows from Lemma 3.2 that \(g_{1}\) has constant sectional curvature \(c\) (for \(p\geq 2\)), \(\operatorname{Hess}\mu+c\mu g_{1}=0\) and \(g_{2}\) has constant sectional curvature \(||\mathrm{grad}\,\mu||^{2}+c\mu^{2},\) where Hess and \(\mathrm{grad}\) are computed with respect to \(g_{1}.\)
Conversely, if \(f\) has constant Moebius curvature \(c,\) by Example 3.1 the Moebius metric determined by \(f\) is given by (17). Thus (16) holds by Lemma 3.2.
The next corollary gives the solutions of (16) on open subsets \(U\subset\mathbb{Q}_{\tilde{c}}^{p-\ell}\).
**Corollary 3.1**.: _The solutions of (16) on an open subset \(U\) of \(\mathbb{Q}_{\tilde{c}}^{p-\ell}\) are_
\[||\mathcal{H}^{g}||(x)=\begin{cases}\frac{1}{\langle v,x\rangle+a},&\tilde{c} =0\,\text{ and }\,c\leq 0\\ \frac{1}{\langle v,x\rangle},&\tilde{c}=1\text{ and }c<0\text{ or }\tilde{c}=-1\text{ and }c\in\mathbb{R}\end{cases}, \tag{18}\]
_where \(v\in\mathbb{E}^{p-\ell+1}:=\begin{cases}\mathbb{R}^{p-\ell},&\text{if }\, \tilde{c}=0\\ \mathbb{R}^{p-\ell+1},&\text{if }\,\tilde{c}=1\\ \mathbb{L}^{p-\ell+1},&\text{if }\,\tilde{c}=-1\end{cases}\) is such that \(||v||^{2}=-(p-\ell)^{2}c\) and \(a\in\mathbb{R}_{+}.\)_
Proof.: Consider \((U,ds^{2})\) as an open subset of \(\mathbb{R}^{p-\ell}\). Then
\[\frac{1}{||\mathcal{H}^{g}||}(x)=\langle v,x\rangle+a\quad\forall\,x\in U,\]
where \(a\in\mathbb{R}_{+}\) and \(v\in\mathbb{R}^{p-\ell}\) is a constant vector such that \(||v||^{2}=-(p-\ell)^{2}c,\) is the solution of (16) for \(\tilde{c}=0.\)
Now, consider \((U,ds^{2})\) as an open subset of \(\mathbb{S}^{p-\ell}\subset\mathbb{R}^{p-\ell+1}\) and let \(g:\mathbb{R}^{p-\ell+1}\to\mathbb{R}\) be the linear functional defined by
\[g(x)=\langle x,v\rangle,\quad\text{where }v\in\mathbb{R}^{p-\ell+1}\text{ is such that }||v||^{2}=-(p-\ell)^{2}c.\]
Then the gradient and Hessian of \(g\) and those of \(h=g\circ i\colon U\subset\mathbb{S}^{p-\ell}\to\mathbb{R},\) where \(i:\mathbb{S}^{p-\ell}\to\mathbb{R}^{p-\ell+1}\) is an umbilical inclusion, are related by
\[i_{*}\mathrm{grad}\,h=(\mathrm{grad}\,g)^{T}\]
and
\[\operatorname{Hess}\,h(X,Y)=\operatorname{Hess}\,g(i_{*}X,i_{*}Y)+\langle \mathrm{grad}\,g,\alpha^{i}(X,Y)\rangle\]
for any \(x\in U\) and \(X,Y\in T_{x}\mathbb{S}^{p-\ell}\) (see Proposition 1.2 of [5]). Since \(\mathrm{grad}\,g=v\) and \(\alpha^{i}(X,Y)=-\langle X,Y\rangle x,\) we obtain
\[i_{*}\mathrm{grad}\,h=v^{T}\]
and
\[\operatorname{Hess}\,h(X,Y)=-\langle v,x\rangle\langle X,Y\rangle=-h(x) \langle X,Y\rangle.\]
Noticing
\[||\mathrm{grad}\,h||^{2}=||v^{T}||^{2}=||v-v^{\perp}||^{2}=||v||^{2}-||v^{ \perp}||^{2} =-(p-\ell)^{2}c-\langle v,x\rangle^{2}||x||^{2}\] \[=-(p-\ell)^{2}c-\langle v,x\rangle^{2},\]
then
\[||\mathrm{grad}\,h||^{2}+(p-\ell)^{2}c=-h^{2}.\]
Therefore, \(1/||\mathcal{H}^{g}||:U\subset\mathbb{S}^{p-\ell}\to\mathbb{R}\) defined by
\[(1/||\mathcal{H}^{g}||)(x)=\langle x,v\rangle,\quad\forall x\in U,\]
with \(||v||^{2}=-(p-\ell)^{2}c\), is a solution of (16) for \(\tilde{c}=1\).
Similarly, consider \((U,ds^{2})\) as an open subset of \(\mathbb{H}^{p-\ell}\subset\mathbb{L}^{p-\ell+1}\), where \(\mathbb{H}^{p-\ell}\) is endowed with the hyperboloid model. Let \(g:\mathbb{L}^{p-\ell+1}\to\mathbb{R}\) be the linear functional defined by
\[g(x)=\langle x,v\rangle,\text{ where }v\in\mathbb{L}^{p-\ell+1}\text{ is such that }||v||^{2}=-(p-\ell)^{2}c,\]
and let \(h:=g\circ i:U\subset\mathbb{H}^{p-\ell}\to\mathbb{R}\), where \(i:\mathbb{H}^{p-\ell}\to\mathbb{L}^{p-\ell+1}\) is an umbilical inclusion.
The Hessian and gradient of \(g\) and \(h\) are related by
\[i_{*}\mathrm{grad}\,h=(\mathrm{grad}\,g)^{T}\]
and
\[\mathrm{Hess}\,h(X,Y)=\mathrm{Hess}\,g(i_{*}X,i_{*}Y)+\langle\mathrm{grad}\,g,\alpha^{i}(X,Y)\rangle,\]
for any \(x\in U\) and \(X,Y\in T_{x}\mathbb{H}^{p-\ell}\). Since \(\mathrm{grad}\,g=v\) and \(\alpha^{i}(X,Y)=\langle X,Y\rangle x\), we have
\[i_{*}\mathrm{grad}\,h=v^{T}\]
and
\[\mathrm{Hess}\,h(X,Y)=\langle v,x\rangle\langle X,Y\rangle=h(x)\langle X,Y\rangle.\]
Thus
\[||\mathrm{grad}\,h||^{2} =||v^{T}||^{2}=||v-v^{\perp}||^{2}=||v||^{2}-||v^{\perp}||^{2}\] \[=-(p-\ell)^{2}c-\langle v^{\perp},x\rangle^{2}||x||^{2}\] \[=-(p-\ell)^{2}c+\langle v,x\rangle^{2}\] \[=-(p-\ell)^{2}c+(h(x))^{2}.\]
Therefore, the function \(1/||\mathcal{H}^{g}||:U\subset\mathbb{H}^{p-\ell}\to\mathbb{R}\) given by
\[(1/||\mathcal{H}^{g}||)(x)=\langle v,x\rangle,\quad\text{where }v\in\mathbb{L}^{p- \ell+1}\text{ is such that }||v||^{2}=-(p-\ell)^{2}c,\]
is a solution of (16) for \(\tilde{c}=-1\).
**Remark 3.1**.: Particular cases of isometric immersions \(g:M^{p-\ell}_{\tilde{c}}\to\mathbb{Q}^{2p-\ell}_{\tilde{c}}\), \(0\leq\ell\leq p-2\), satisfying the conditions of Lemma 3.1 are those for which \(||\mathcal{H}^{g}||\) is constant and \(\tilde{c}=0\). From a global point of view, it was proved in [4] that if \(g\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) is an isometric immersion with flat normal bundle, constant index of relative nullity and mean curvature vector field with constant length, then \(g(\mathbb{R}^{n})\) is a Riemannian product of curves with constant first curvature functions. The case \(n=2\) was obtained previously in [6] without assuming the index of relative nullity to be constant. In particular, if \(g\colon M^{2}\to\mathbb{R}^{4}\) is a compact surface with flat induced metric and flat normal bundle whose mean curvature vector field has constant length then \(g(M^{2})\) is a Riemannian product of two circles \(\mathbb{S}^{1}\times\mathbb{S}^{1}\).
**Example 3.1**.: Let \(\gamma_{i}\colon I_{i}\to\mathbb{R}^{2}\) be a smooth curve with curvature \(\kappa_{i}\), \(1\leq i\leq 2.\) Consider the surface \(g=\gamma_{1}\times\gamma_{2}\colon I_{1}\times I_{2}\to\mathbb{R}^{2}\times \mathbb{R}^{2}=\mathbb{R}^{4}\), for which \(||\mathcal{H}^{g}||^{2}=\frac{1}{4}(\kappa_{1}^{2}+\kappa_{2}^{2})\). In particular,
\[\operatorname{Hess}\|\mathcal{H}^{g}\|^{2}(\partial_{1},\partial_{2})= \partial_{1}(\partial_{2}(\kappa_{1}^{2}+\kappa_{2}^{2}))-(\nabla_{\partial_{1 }}\partial_{2})(\kappa_{1}^{2}+\kappa_{2}^{2})=0,\]
where \(\partial_{1}\) and \(\partial_{2}\) are the coordinates vector fields of \(I_{1}\) and \(I_{2}\), respectively, and \(\nabla\) is the Levi-Civita connection of the product metric \(ds_{1}^{2}+ds_{2}^{2}\). Assume \(\|\mathcal{H}^{g}\|^{-1}(x)=\langle v,x\rangle+a\) for all \(x\in I_{1}\times I_{2}\), where \(||v||=2\sqrt{-c}\) for \(c\leq 0\) and \(a\in\mathbb{R}_{+}\). Then
\[0=\operatorname{Hess}||\mathcal{H}^{g}||^{-1}(\partial_{1}, \partial_{2}) =-\frac{1}{2}||\mathcal{H}^{g}||^{-3}\operatorname{Hess}|| \mathcal{H}^{g}||^{2}(\partial_{1},\partial_{2})+\frac{3}{4}||\mathcal{H}^{g }||^{-5}(\partial_{1}||\mathcal{H}^{g}||^{2})\partial_{2}||\mathcal{H}^{g}||^ {2}\] \[=\frac{3}{4}||\mathcal{H}^{g}||^{-5}(\partial_{1}||\mathcal{H}^{g }||^{2})\partial_{2}||\mathcal{H}^{g}||^{2},\]
hence
\[(\partial_{1}\kappa_{1})(\partial_{2}\kappa_{2})=0.\]
If, say, \(\kappa_{1}\) is not constant, then \(\kappa_{2}\equiv r\) for some \(r>0\) and, from \(\frac{1}{||\mathcal{H}^{g}||}(x_{1})=2\sqrt{-c}\,x_{1}\) for \(c<0\), we obtain
\[\kappa_{1}(x_{1})=\sqrt{-c^{-1}x_{1}^{-2}-r^{2}},\quad\text{for}\;\,0<|x_{1}| <\frac{1}{r\sqrt{-c}}.\]
Therefore, \(g=\gamma_{1}\times\gamma_{2}\), where \(\gamma_{1}\) is a curve with curvature \(\kappa_{1}(x_{1})=\sqrt{-c^{-1}x_{1}^{-2}-r^{2}}\) and \(\gamma_{2}(I_{2})\) is contained in a circle of radius \(1/r\), hence \(g\times\operatorname{Id}:I_{1}\times I_{2}\times\mathbb{R}^{n-2}\to\mathbb{R}^{n+2}\) is a cylinder with constant negative Moebius curvature \(c\).
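The elimination of \(\kappa_{1}\) in the example above is easy to confirm symbolically. The following SymPy sketch (illustrative only; the symbol names are ours) checks that, with \(\kappa_{2}\equiv r\) and \(\kappa_{1}\) as displayed, one indeed recovers \(1/||\mathcal{H}^{g}||=2\sqrt{-c}\,x_{1}\), and spot-checks the stated domain:

```python
import sympy as sp

# Hypothetical symbol names: c < 0 is the prescribed Moebius curvature,
# r > 0 the constant curvature of gamma_2, x1 the parameter of gamma_1.
x1, r = sp.symbols('x1 r', positive=True)
c = sp.Symbol('c', negative=True)

# Curvature of gamma_1 as obtained in the example above.
kappa1 = sp.sqrt(-1/(c*x1**2) - r**2)

# For the product surface g = gamma_1 x gamma_2 one has
# ||H^g||^2 = (kappa_1^2 + kappa_2^2)/4, with kappa_2 = r.
H2 = (kappa1**2 + r**2) / 4

# The requirement was 1/||H^g|| = 2*sqrt(-c)*x1; squaring avoids branch issues.
assert sp.simplify(H2 * (2*sp.sqrt(-c)*x1)**2 - 1) == 0

# Spot-check the domain 0 < x1 < 1/(r*sqrt(-c)) with sample values c = -1, r = 1:
sample = {c: -1, r: 1}
assert (kappa1**2).subs(sample).subs(x1, sp.Rational(1, 2)) > 0  # inside the interval
assert (kappa1**2).subs(sample).subs(x1, 2) < 0                  # outside the interval
print("1/||H^g|| = 2*sqrt(-c)*x1 confirmed")
```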
## 4 Moebius invariants of conformally flat submanifolds with closed Moebius form
Let \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n-3\geq p\geq 1\), be a proper isometric immersion with flat normal bundle of a conformally flat manifold. Under these assumptions, Theorem 2.2 assures that \(f\) has a principal normal vector field \(\eta\) with multiplicity \(n-p+\ell\), for some \(0\leq\ell\leq p-1\), whereas Theorem 2.3 implies that all the other \(p-\ell\) principal normal vector fields of \(f\) are simple.
Assume that the Moebius form of \(f\) is closed. Let \(\bar{\eta}_{i}=\rho^{-1}(\eta_{i}-\mathcal{H}^{f})\), \(1\leq i\leq k\), be the associated Moebius principal normal vector fields of \(f\). Since the Moebius form \(\omega^{f}\) is closed, the conformal Ricci equation implies that there exists an orthonormal frame \(X_{1},\dots,X_{n}\) with respect to \(\langle\,,\,\rangle^{*}\) that diagonalizes \(\beta\) and \(\psi\) simultaneously and such that
\[\beta(X_{i},X_{i})=\bar{\eta}:=\rho^{-1}(\eta-\mathcal{H}^{f})\]
for any \(p-\ell+1\leq i\leq n\). Furthermore, all the remaining Moebius principal normal vector fields \(\beta(X_{j},X_{j})\), \(1\leq j\leq p-\ell\), are simple. Now consider the smooth distributions
\[\Delta=\operatorname{span}\{X_{i}:p-\ell+1\leq i\leq n\}\quad\text{and}\quad \Delta^{\perp}=\operatorname{span}\{X_{i}:1\leq i\leq p-\ell\}. \tag{19}\]
Since \(\Delta=E_{\eta}\) and \(\dim\Delta=n-p+\ell\geq 2\), then \(\Delta\) is umbilical with respect to \(\langle\,,\,\rangle_{f}\) by Proposition 2.1, hence also umbilical with respect to \(\langle\,,\,\rangle^{*}\), for \(\langle\,,\,\rangle^{*}\) and \(\langle\,,\,\rangle_{f}\) are conformal metrics.
**Proposition 4.1**.: _If the distribution \(\Delta^{\perp}\) is totally geodesic with respect to \(\langle\,,\,\rangle^{*}\), then \(\Delta^{\perp}\) is spherical with respect to \(\langle\,,\rangle_{f}\) with mean curvature vector field \((\text{grad}_{f}\log\rho)_{\Delta}\)._
Proof.: Using the relation
\[\nabla^{*}_{X_{a}}X_{b}={}^{f}\nabla_{X_{a}}X_{b}+\frac{1}{\rho}(X_{a}(\rho)X_{ b}+X_{b}(\rho)X_{a}-\langle X_{a},X_{b}\rangle_{f}\text{grad}_{f}\rho)\]
and the fact that \(\Delta^{\perp}\) is totally geodesic with respect to \(\langle\,,\,\rangle^{*}\), we obtain
\[\langle{}^{f}\nabla_{X_{a}}X_{b},X_{i}\rangle_{f}=\langle X_{a},X_{b}\rangle_ {f}\langle\frac{\text{grad}_{f}\rho}{\rho},X_{i}\rangle_{f}\]
for all \(i\geq p-\ell+1\) and \(1\leq a,b\leq p-\ell.\) Thus \(\Delta^{\perp}\) is umbilical with respect to the metric induced by \(f\), with \((\text{grad}_{f}\log\rho)_{\Delta}\) as its mean curvature vector field.
In order to prove that \(\Delta^{\perp}\) is spherical with respect to \(\langle\,,\,\rangle_{f}\), we must show that
\[\langle{}^{f}\nabla_{X_{i}}(\text{grad}_{f}\log\rho)_{\Delta},X_{j}\rangle_{f} =0,\quad 1\leq i\leq p-\ell,\quad p-\ell+1\leq j\leq n. \tag{20}\]
Since \(X_{1},\ldots,X_{n}\) diagonalizes \(\beta\) and \(\psi\) simultaneously, then \(\text{Hess}^{*}\rho(X_{i},X_{j})=0\) for all \(1\leq i\neq j\leq n\). Using that
\[\nabla^{*}_{X_{i}}\text{grad}^{*}\rho={}^{f}\nabla_{X_{i}}\text{grad}^{*}\rho +\frac{1}{\rho}((\text{grad}^{*}\rho)(\rho)X_{i}+X_{i}(\rho)\text{grad}^{*}\rho -\langle X_{i},\text{grad}^{*}\rho\rangle_{f}\text{grad}_{f}\rho),\]
and that \(\text{grad}^{*}\rho=\frac{\text{grad}_{f}\rho}{\rho^{2}}\), it follows that
\[0 =\text{Hess}^{*}\rho(X_{i},X_{j})\] \[=\langle\nabla^{*}_{X_{i}}\text{grad}^{*}\rho,X_{j}\rangle^{*}\] \[=\rho^{2}\langle{}^{f}\nabla_{X_{i}}\text{grad}^{*}\rho,X_{j} \rangle_{f}\] \[=\langle X_{i}(\rho^{-1})\text{grad}_{f}\log\rho+\rho^{-1f}\nabla _{X_{i}}\text{grad}_{f}\log\rho,X_{j}\rangle_{f},\]
hence
\[\langle{}^{f}\nabla_{X_{i}}\text{grad}_{f}\log\rho,X_{j}\rangle_{f}=X_{i}( \log\rho)X_{j}(\log\rho). \tag{21}\]
On the other hand, since \(\nabla^{*}_{X_{i}}X_{j}={}^{f}\nabla_{X_{i}}X_{j}+\frac{1}{\rho}(X_{i}(\rho) X_{j}+X_{j}(\rho)X_{i})\in\Gamma(\Delta)\), then
\[\langle{}^{f}\nabla_{X_{i}}(\text{grad}_{f}\log\rho)_{\Delta^{ \perp}},X_{j}\rangle_{f} =-\langle(\text{grad}_{f}\log\rho)_{\Delta^{\perp}},{}^{f}\nabla_{ X_{i}}X_{j}\rangle_{f}\] \[=X_{i}(\log\rho)X_{j}(\log\rho), \tag{22}\]
and hence (20) follows from (21) and (22).
### Submanifolds with exactly two distinct principal normals
The next result classifies isometric immersions with flat normal bundle and arbitrary codimension \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 3\), that have closed Moebius form and a principal normal vector field \(\eta\) of multiplicity \(n-1\). It generalizes Theorem 5.3 in [15] and Corollary 1.2 of [12], which classify umbilic-free conformally flat hypersurfaces with closed Moebius form.
**Proposition 4.2**.: _Let \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 3\) and \(p\geq 1\), be an isometric immersion with flat normal bundle that has a principal normal vector field \(\eta\) of multiplicity \(n-1\). If \(\omega^{f}\) is closed, then \(f(M)\) is the image by a conformal transformation of \(\mathbb{R}^{n+p}\) of an open subset of a cylinder, a generalized cone or a rotational submanifold over a curve \(\gamma:I\to\mathbb{Q}_{\tilde{c}}^{p+1}\), where \(\tilde{c}=0,1\) or \(-1\), respectively._
Proof.: Let \(\bar{\eta}=\beta(X_{i},X_{i})\) for \(2\leq i\leq n\), and consider the smooth distributions
\[\Delta=\operatorname{span}\{X_{i}:2\leq i\leq n\}\quad\text{and}\quad\Delta^{ \perp}=\{X_{1}\}.\]
Since \(\operatorname{tr}\beta=0\) and \(||\beta||_{*}^{2}=\frac{n-1}{n}\), we have
\[\beta(X_{1},X_{1})+(n-1)\bar{\eta}=0\quad\text{and}\quad||\beta(X_{1},X_{1})|| ^{2}+(n-1)||\bar{\eta}||^{2}=\frac{n-1}{n},\]
hence
\[||\beta(X_{1},X_{1})||=\frac{n-1}{n}\quad\text{and}\quad||\bar{\eta}||=\frac{ 1}{n}.\]
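In more detail (a routine verification using only the two relations above): substituting \(\beta(X_{1},X_{1})=-(n-1)\bar{\eta}\) from the first relation into the second gives
\[(n-1)^{2}||\bar{\eta}||^{2}+(n-1)||\bar{\eta}||^{2}=n(n-1)||\bar{\eta}||^{2}=\frac{n-1}{n},\]
hence \(||\bar{\eta}||^{2}=\frac{1}{n^{2}}\) and \(||\beta(X_{1},X_{1})||^{2}=(n-1)^{2}||\bar{\eta}||^{2}=\frac{(n-1)^{2}}{n^{2}}\).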
Therefore, there is a unit normal vector field \(\xi_{1}\in\Gamma(N_{f}M)\) such that
\[\beta(X_{1},X_{1})=\frac{n-1}{n}\xi_{1}\quad\text{and}\quad\bar{\eta}=-\frac{ 1}{n}\xi_{1}. \tag{23}\]
The conformal Codazzi equation
\[(^{f}\nabla^{\perp}_{X_{j}}\beta)(X_{i},X_{i})-(^{f}\nabla^{\perp}_{X_{i}} \beta)(X_{j},X_{i})=\omega((X_{j}\wedge X_{i})X_{i})\]
for \(2\leq i\neq j\leq n\) yields \(\omega(X_{j})={}^{f}\nabla^{\perp}_{X_{j}}\bar{\eta}\). On the other hand, from
\[(^{f}\nabla^{\perp}_{X_{j}}\beta)(X_{1},X_{1})-(^{f}\nabla^{\perp}_{X_{1}} \beta)(X_{j},X_{1})=\omega((X_{j}\wedge X_{1})X_{1}),\]
for \(2\leq j\leq n\), we obtain
\[\omega(X_{j})={}^{f}\nabla^{\perp}_{X_{j}}\beta(X_{1},X_{1})+\langle\nabla^{ *}_{X_{1}}X_{1},X_{j}\rangle^{*}(\bar{\eta}-\beta(X_{1},X_{1})).\]
Since \(\beta(X_{1},X_{1})=-(n-1)\bar{\eta}\), then
\[{}^{f}\nabla^{\perp}_{X_{j}}\bar{\eta}=\langle\nabla^{*}_{X_{1}}X_{1},X_{j} \rangle^{*}\bar{\eta}. \tag{24}\]
Taking the inner product of the preceding equation with \(\bar{\eta}\) gives \(\langle\nabla^{*}_{X_{1}}X_{1},X_{j}\rangle^{*}=0\) for \(2\leq j\leq n\). Therefore, \(\Delta^{\perp}\) is totally geodesic with respect to the Moebius metric, hence spherical with respect to \(\langle\,,\,\rangle_{f}\) with \((\operatorname{grad}_{f}\log\rho)_{\Delta}\) as its mean curvature vector field by Proposition (4.1). Taking into account that \(\Delta=E_{\eta}\) and that \(\dim\Delta\ \geq 2\), the statement follows from Theorem 2.1.
The next lemma will also be used in the proof of Theorem 5.1.
**Lemma 4.1**.: _Let \(f:M^{n}\to\mathbb{R}^{n+p}\) be an isometric immersion with constant Moebius curvature and flat normal bundle. Then the Moebius form of \(f\) is closed._
Proof.: The conformal Ricci equation
\[\langle R^{\perp}(X,Y)\xi,\eta\rangle=\langle[B_{\xi},B_{\eta}]X,Y\rangle^{*}\]
for all \(X,Y\in\mathfrak{X}(M)\) and \(\xi,\eta\in\Gamma(N_{f}M),\) shows that there exists an orthonormal frame \(\mathcal{B}:=\{X_{1},\ldots,X_{n}\}\) with respect to the Moebius metric such that
\[\beta(X_{i},X_{j})=0,\quad i\neq j.\]
Let \(c\in\mathbb{R}\) be the common value of the sectional curvatures of \((M^{n},\langle\,,\,\rangle^{*})\). Then the Ricci tensor of \((M,\langle\,,\,\rangle^{*})\) satisfies
\[\operatorname{Ric}^{*}(X,Y)=c(n-1)\langle X,Y\rangle^{*}\]
for all \(X,Y\in\mathfrak{X}(M)\). Hence \(\mathcal{B}\) diagonalizes \(\operatorname{Ric}^{*}\). Using the relation
\[(n-2)\psi(X,Y)=\operatorname{Ric}^{*}(X,Y)+III_{\beta}(X,Y)-\frac{n^{2}s^{*}+1 }{2n}\langle X,Y\rangle^{*},\]
where \(s^{*}\) denotes the scalar curvature of \((M,\langle\,,\,\rangle^{*})\), we conclude that the Blaschke tensor \(\psi\) is also diagonalizable by \(\mathcal{B}\). Therefore \(\omega\) is closed by (11).
The next result contains as a particular case the classification of umbilic-free hypersurfaces with constant Moebius curvature in Theorem 1.1 of [8] and Theorem 4.9 of [15].
**Lemma 4.2**.: _The Moebius metric \(\langle\,,\,\rangle^{*}=\kappa^{2}(ds^{2}+\sigma_{-\tilde{c}})\) of the isometric immersion \(f:I\times\mathbb{Q}_{-\tilde{c}}^{n-1}\to\mathbb{R}^{n+p}\) defined by_
\[f=\Theta\circ(\gamma,\text{Id}),\]
_where \(\sigma_{-\tilde{c}}\) denotes the canonical metric of \(\mathbb{Q}_{-\tilde{c}}^{n-1}\), \(\gamma:I\to\mathbb{Q}_{\tilde{c}}^{p+1}\) and \(\Theta=\text{Id}:\mathbb{R}^{n+p}\to\mathbb{R}^{n+p}\) if \(\tilde{c}=0\) or \(\Theta\) is as in Examples 3.1- \((ii)\) and \((iii)\) if \(\tilde{c}\neq 0\), has constant Moebius curvature \(c\) if and only if the first curvature function \(\kappa(s)\) is given respectively by_
\[\kappa(s)=\begin{cases}\frac{1}{r},&c=0\text{ and }s\in\mathbb{R}\\ \frac{1}{\sqrt{-c}s},&c<0\text{ and }s>0,\end{cases}\]
\[\kappa(s)=\frac{1}{\sqrt{-c}\sin s},\,\,\,s\in(0,\pi),\,\,\,c<0,\]
_and_
\[\kappa(s)=\begin{cases}\frac{1}{\sqrt{c}\cosh s},&c>0\text{ and }s\in\mathbb{R}\\ \frac{1}{\sqrt{-c}\sinh s},&c<0\text{ and }s>0\\ e^{s},&c=0\text{ and }s\in\mathbb{R}.\end{cases}\]
**Theorem 4.1**.: _Let \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 3\), be an isometric immersion with constant Moebius curvature \(c\) and flat normal bundle with exactly two distinct principal normal vector fields. Then \(f(M)\) is the image by a conformal transformation of \(\mathbb{R}^{n+p}\) of an open subset of a submanifold given in Example 3.1, with the first curvature function \(\kappa(s)\) of the curve \(\gamma\colon I\to\mathbb{Q}_{\tilde{c}}^{p+1}\) given by Lemma 4.2, with \(\tilde{c}=0,1\) or \(-1\), respectively._
Proof.: Since \(f\) has constant Moebius curvature and flat normal bundle, the Moebius form \(\omega^{f}\) is closed by Lemma 4.1. By the conformal Ricci equation, there exists an orthonormal frame \(X_{1},\ldots X_{n}\) with respect to the metric \(\langle\,,\,\rangle^{*}\) that diagonalizes \(\beta\) and \(\psi\) simultaneously, with \(\beta(X_{i},X_{i})=\bar{\eta}\) for all \(2\leq i\leq n.\) By Proposition 4.2, \(f(M)\) is the image by a conformal transformation of \(\mathbb{R}^{n+p}\) of an open subset of a cylinder, a generalized cone or a rotational submanifold over a curve \(\gamma:I\to\mathbb{Q}_{\tilde{c}}^{p+1}\), where \(\tilde{c}=0,1\) or \(-1\), respectively. Finally, the assumption that the Moebius curvature of \(f\) is constant implies that the first curvature function \(\kappa(s)\) of \(\gamma\) is given as in Lemma 4.2.
### Submanifolds with at least three principal normal vector fields
Let \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n-3\geq p\geq 2\), be a proper conformally flat isometric immersion with flat normal bundle whose Moebius form is closed and let \(\eta\) be a principal normal vector field that has multiplicity \(n-p+\ell\), \(0\leq\ell\leq p-2\), whose existence is guaranteed by Theorem (2.2). Let \(\bar{\eta}\) be the associated Moebius principal normal vector field. We start by finding a suitable orthogonal frame of \(N_{f}M\).
**Lemma 4.3**.: _The subset \(\{\beta(X_{i},X_{i})-\bar{\eta}:1\leq i\leq p-\ell\}\) of \(N_{f}M\) is orthogonal and \(\Delta^{\perp}:=E_{\eta}^{\perp}\) is integrable._
Proof.: The conformal Gauss equation of \(f\) reduces to
\[K^{*}(X_{k},X_{r})=\langle R^{*}(X_{k},X_{r})X_{r},X_{k}\rangle^{*}=\langle \beta(X_{k},X_{k}),\beta(X_{r},X_{r})\rangle+\psi(X_{k},X_{k})+\psi(X_{r},X_{r}), \tag{25}\]
for all \(1\leq k\neq r\leq n\). Since \((M,\langle\,,\,\rangle^{*})\) is conformally flat, it follows from Kulkarni's formula (see [11]) that
\[K^{*}(X_{i},X_{j})+K^{*}(X_{k},X_{r})=K^{*}(X_{i},X_{k})+K^{*}(X_{j},X_{r})\]
for \(k\neq r\geq p-\ell+1\) and \(1\leq i\neq j\leq p-\ell\). Thus
\[\langle\beta(X_{i},X_{i})-\bar{\eta},\beta(X_{j},X_{j})-\bar{\eta}\rangle=0. \tag{26}\]
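Indeed, substituting (25) into the Kulkarni identity above and using that \(\beta(X_{k},X_{k})=\beta(X_{r},X_{r})=\bar{\eta}\) for \(k,r\geq p-\ell+1\), all terms involving \(\psi\) cancel and one is left with
\[\langle\beta(X_{i},X_{i}),\beta(X_{j},X_{j})\rangle+\langle\bar{\eta},\bar{\eta}\rangle-\langle\beta(X_{i},X_{i}),\bar{\eta}\rangle-\langle\beta(X_{j},X_{j}),\bar{\eta}\rangle=0,\]
which is precisely (26).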
Since \(\beta(X_{i},X_{i})-\bar{\eta}=\rho^{-1}(\eta_{i}-\eta)\), then (26) becomes
\[\langle\eta_{i}-\eta,\eta_{j}-\eta\rangle=0.\]
Using Codazzi equation (14), we see that \([X_{i},X_{j}]\in E_{\eta}^{\perp}\) for all \(1\leq i,j\leq p-\ell\), which proves the last assertion.
**Proposition 4.3**.: _There exist an orthonormal subset \(\{\xi_{1},\ldots,\xi_{p-\ell}\}\subset\Gamma(N_{f}M)\) and functions \(f_{i}\in C^{\infty}(M)\), \(1\leq i\leq p-\ell\), none of which vanishes at any point, such that \(\sum_{i=1}^{p-\ell}f_{i}^{2}=1\) and_
\[\beta(X_{i},X_{i})-\bar{\eta}=f_{i}\xi_{i}. \tag{27}\]
Proof.: Since \(\beta\) is traceless, then
\[\bar{\eta}=-\frac{1}{n}\sum_{i=1}^{p-\ell}(\beta(X_{i},X_{i})-\bar{\eta}). \tag{28}\]
Taking the inner product of both sides of the preceding equation with \(\beta(X_{j},X_{j})-\bar{\eta}\), for each \(1\leq j\leq p-\ell\), and using (26) we obtain
\[||\beta(X_{j},X_{j})||^{2}+(n-2)\langle\beta(X_{j},X_{j}),\bar{\eta}\rangle+( 1-n)||\bar{\eta}||^{2}=0.\]
Thus
\[\sum_{j=1}^{p-\ell}||\beta(X_{j},X_{j})||^{2}+(n-2)\langle\sum_{j=1}^{p-\ell} \beta(X_{j},X_{j}),\bar{\eta}\rangle+(1-n)(p-\ell)||\bar{\eta}||^{2}=0.\]
Using that \(||\beta||_{*}^{2}=\frac{n-1}{n}\) and that \(\mathrm{tr}\beta=0\), the previous equation becomes
\[\frac{n-1}{n}-(n-p+\ell)||\bar{\eta}||^{2}-(n-2)(n-p+\ell)||\bar{\eta}||^{2}+ (1-n)(p-\ell)||\bar{\eta}||^{2}=0,\]
that is,
\[||\bar{\eta}||=\frac{1}{n}. \tag{29}\]
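Here the passage to (29) uses only the collapse of the coefficients in the previous identity:
\[-(n-p+\ell)-(n-2)(n-p+\ell)+(1-n)(p-\ell)=-(n-1)(n-p+\ell)-(n-1)(p-\ell)=-n(n-1),\]
so that it reduces to \(\frac{n-1}{n}-n(n-1)||\bar{\eta}||^{2}=0\), that is, \(||\bar{\eta}||^{2}=\frac{1}{n^{2}}\).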
Equations (28) and (29) yield \(\sum_{i=1}^{p-\ell}||\beta(X_{i},X_{i})-\bar{\eta}||^{2}=1\). Since each \(\beta(X_{i},X_{i})\) is a simple Moebius principal normal vector field, the statements follow from Lemma 4.3.
**Proposition 4.4**.: _The following formulas hold:_
**(i)**: _The Moebius second fundamental form:_
\[\beta(X_{k},X_{k}) =\bar{\eta}=-\frac{1}{n}\sum_{i=1}^{p-\ell}f_{i}\xi_{i},\qquad p- \ell+1\leq k\leq n,\] \[\beta(X_{i},X_{i}) =\frac{n-1}{n}f_{i}\xi_{i}-\frac{1}{n}\sum_{\genfrac{}{}{0.0pt}{} {j=1}{j\neq i}}^{p-\ell}f_{j}\xi_{j},\quad 1\leq i\leq p-\ell, \tag{30}\] \[\beta(X_{k},X_{r}) =0,\quad r\neq k\geq 1.\]
**(ii)**: _The normal connection:_
\[\nabla^{\perp}_{X_{k}}\xi_{i} =0,\quad p-\ell+1\leq k\leq n\text{ and }1\leq i\leq p-\ell,\] \[\nu_{ij}(X_{i}) =\frac{f_{i}}{f_{j}}(\langle\text{grad}^{*}\log f_{j},X_{i} \rangle^{*}-\langle\delta,X_{i}\rangle^{*}),\quad 1\leq i\neq j\leq p-\ell, \tag{31}\] \[\nu_{jk}(X_{i}) =0,\quad 1\leq k\neq i\neq j\neq k\leq p-\ell,\quad\text{where} \quad\nu_{ij}(X)=\langle\nabla^{\perp}_{X}\xi_{i},\xi_{j}\rangle.\]
**(iii)**: _The Moebius one-form:_
\[\langle\omega(X_{i}),\xi_{i}\rangle =-\frac{1}{n}(X_{i}(f_{i})-f_{i}\sum_{\genfrac{}{}{0.0pt}{}{r=1} {r\neq i}}^{p-\ell}X_{i}(\log f_{r}))+\frac{n-(p-\ell-1)}{n}f_{i}\langle\delta,X_{i}\rangle^{*},\] \[\langle\omega(X_{i}),\xi_{j}\rangle =-\frac{1}{n}\left(\left(\frac{f_{j}^{2}+f_{i}^{2}}{f_{j}^{2}} \right)X_{i}(f_{j})-\frac{f_{i}^{2}}{f_{j}}\langle\delta,X_{i}\rangle^{*} \right), \tag{32}\] \[\langle\omega(X_{k}),\xi_{i}\rangle =-\frac{1}{n}X_{k}(f_{i}),\quad 1\leq i\neq j\leq p-\ell\quad\text{ and}\quad p-\ell+1\leq k\leq n,\]
_where \(\delta\) is the mean curvature vector field of \(\Delta\) with respect to \(\langle\,,\,\rangle^{*}\) and \(f_{1},\ldots f_{p-\ell}\in C^{\infty}(M)\), \(\xi_{1},\ldots,\xi_{p-\ell}\in\Gamma(N_{f}M)\) are given by Proposition 4.3._
Proof.: Equations (30) are immediate consequences of (27) and (28). Substituting (30) in the conformal Codazzi equation
\[\omega(X_{i}) =(\nabla^{\perp}_{X_{i}}\beta)(X_{k},X_{k})-(\nabla^{\perp}_{X_{ k}}\beta)(X_{i},X_{k})\] \[=\nabla^{\perp}_{X_{i}}\bar{\eta}+\langle\nabla^{*}_{X_{k}}X_{k}, X_{i}\rangle^{*}(\beta(X_{i},X_{i})-\bar{\eta}),\]
for \(1\leq i\leq p-\ell\) and \(p-\ell+1\leq k\leq n\), we obtain
\[\omega(X_{i})=-\frac{1}{n}\sum_{r=1}^{p-\ell}(X_{i}(f_{r})\xi_{r}+f_{r}\nabla^ {\perp}_{X_{i}}\xi_{r})+\langle\delta,X_{i}\rangle^{*}f_{i}\xi_{i},\]
hence
\[\begin{cases}\langle\omega(X_{i}),\xi_{i}\rangle=-\frac{1}{n}(X_{i}(f_{i})+\sum_{ \begin{subarray}{c}r=1\\ r\neq i\end{subarray}}^{p-\ell}f_{r}\nu_{ri}(X_{i}))+f_{i}\langle\delta,X_{i} \rangle^{*},\\ \langle\omega(X_{i}),\xi_{j}\rangle=-\frac{1}{n}(X_{i}(f_{j})+\sum_{ \begin{subarray}{c}r=1\\ r\neq j\end{subarray}}^{p-\ell}f_{r}\nu_{rj}(X_{i})),\quad i\neq j.\end{cases} \tag{33}\]
On the other hand, substituting (30) in the conformal Codazzi equation
\[\omega(X_{i}) =(\nabla^{\perp}_{X_{i}}\beta)(X_{j},X_{j})-(\nabla^{\perp}_{X_{j }}\beta)(X_{i},X_{j})\] \[=\nabla^{\perp}_{X_{i}}\beta(X_{j},X_{j})+\langle\nabla^{*}_{X_{ j}}X_{j},X_{i}\rangle^{*}(\beta(X_{i},X_{i})-\beta(X_{j},X_{j}))\]
for \(1\leq i\neq j\leq p-\ell\), yields
\[\omega(X_{i})=-\frac{1}{n}(\sum_{\begin{subarray}{c}r=1\\ r\neq j\end{subarray}}^{p-\ell}X_{i}(f_{r})\xi_{r}+\sum_{r=1}^{p-\ell}f_{r} \nabla^{\perp}_{X_{i}}\xi_{r})+\frac{n-1}{n}X_{i}(f_{j})\xi_{j}+f_{j}\nabla^{ \perp}_{X_{i}}\xi_{j}+\langle\nabla^{*}_{X_{j}}X_{j},X_{i}\rangle^{*}(f_{i} \xi_{i}-f_{j}\xi_{j}),\]
that is,
\[\begin{cases}\langle\omega(X_{i}),\xi_{i}\rangle=-\frac{1}{n}(X_{i}(f_{i})-nf _{j}\nu_{ji}(X_{i})+\sum_{\begin{subarray}{c}r=1\\ r\neq i\end{subarray}}^{p-\ell}f_{r}\nu_{ri}(X_{i}))+f_{i}\langle\nabla^{*}_{X_ {j}}X_{j},X_{i}\rangle^{*},\\ \langle\omega(X_{i}),\xi_{j}\rangle=\frac{1}{n}((n-1)X_{i}(f_{j})-\sum_{ \begin{subarray}{c}r=1\\ r\neq j\end{subarray}}^{p-\ell}f_{r}\nu_{rj}(X_{i}))-f_{j}\langle\nabla^{*}_{X _{j}}X_{j},X_{i}\rangle^{*},\\ \langle\omega(X_{i}),\xi_{k}\rangle=-\frac{1}{n}(X_{i}(f_{k})+\sum_{ \begin{subarray}{c}r=1\\ r\neq k\end{subarray}}^{p-\ell}f_{r}\nu_{rk}(X_{i}))+f_{j}\nu_{jk}(X_{i}),\end{cases} \tag{34}\]
for all \(1\leq k\neq i\neq j\neq k\leq p-\ell\). Comparing the expressions in (33) and (34), we obtain
\[\langle\nabla^{*}_{X_{j}}X_{j},X_{i}\rangle^{*}= \langle\mathrm{grad}^{*}\log f_{j},X_{i}\rangle^{*}, \tag{35}\] \[\nu_{ij}(X_{i})= \frac{f_{i}}{f_{j}}(\langle\nabla^{*}_{X_{j}}X_{j},X_{i}\rangle^{* }-\langle\delta,X_{i}\rangle^{*}),\] (36) \[\nu_{jk}(X_{i})= 0,\ \ 1\leq k\neq i\neq j\neq k\leq p-\ell. \tag{37}\]
The conformal Codazzi equation
\[(\nabla^{\perp}_{X_{k}}\beta)(X_{r},X_{k})-(\nabla^{\perp}_{X_{r}}\beta)(X_{ k},X_{k})=\omega((X_{k}\wedge X_{r})X_{k}),\]
\(p-\ell+1\leq k\neq r\leq n\), implies
\[\omega(X_{r})=\nabla^{\perp}_{X_{r}}\bar{\eta}, \tag{38}\]
whereas
\[(\nabla^{\perp}_{X_{k}}\beta)(X_{i},X_{i})-(\nabla^{\perp}_{X_{i}}\beta)(X_{ k},X_{i})=\omega((X_{k}\wedge X_{i})X_{i})=\nabla^{\perp}_{X_{k}}\bar{\eta},\]
for \(p-\ell+1\leq k\leq n\) and \(1\leq i\leq p-\ell\), yields
\[\nabla^{\perp}_{X_{k}}(\beta(X_{i},X_{i})-\bar{\eta})=\langle\nabla^{*}_{X_{i} }X_{i},X_{k}\rangle^{*}(\beta(X_{i},X_{i})-\bar{\eta})\Longleftrightarrow\nabla ^{\perp}_{X_{k}}(f_{i}\xi_{i})=\langle\nabla^{*}_{X_{i}}X_{i},X_{k}\rangle^{*}f _{i}\xi_{i}.\]
Therefore
\[\langle\nabla^{*}_{X_{i}}X_{i},X_{k}\rangle^{*}=\langle\mathrm{grad}^{*}\log f _{i},X_{k}\rangle^{*}\quad\text{and}\quad\nabla^{\perp}_{X_{k}}\xi_{i}=0, \tag{39}\]
for all \(p-\ell+1\leq k\leq n\) and \(1\leq i\leq p-\ell\). It follows from (36), (37) and (39) that (31) holds, and substituting it in (33) and (38) gives (32).
**Proposition 4.5**.: _The distribution \(\Delta^{\perp}\) is totally geodesic with respect to \(\langle\,,\,\rangle^{*}\) if and only if \((\mathrm{grad}^{*}f_{i})_{\Delta}=0\) for all \(i=1,\ldots,p-\ell\)._
Proof.: The conformal Codazzi equation \((\nabla^{\perp}_{X_{i}}\beta)(X_{j},X_{k})=(\nabla^{\perp}_{X_{j}}\beta)(X_{i},X_{ k})\), for \(1\leq i\neq j\leq p-\ell\) and \(p-\ell+1\leq k\leq n\), implies that
\[\langle\nabla^{*}_{X_{i}}X_{j},X_{k}\rangle^{*}f_{j}\xi_{j}=\langle\nabla^{*}_{ X_{j}}X_{i},X_{k}\rangle^{*}f_{i}\xi_{i}.\]
Given that \(\xi_{i},\xi_{j}\) are linearly independent and that \(f_{i},f_{j}\) do not vanish at any point, then
\[(\nabla^{*}_{X_{i}}X_{j})_{\Delta}=0=(\nabla^{*}_{X_{j}}X_{i})_{\Delta}. \tag{40}\]
The statement then follows from (39) and (40).
**Corollary 4.1**.: _The net \(\mathcal{E}=\{\text{span}\{X_{1}\},\ldots,\text{span}\{X_{p-\ell}\},\Delta\}\) in \((M^{n},\langle\,,\,\rangle^{*})\) is a TP-net._
Proof.: By (35) and (39) we have
\[\nabla^{*}_{X_{i}}X_{i}=\text{grad}^{*}\log f_{i}-\langle\text{grad}^{*}\log f _{i},X_{i}\rangle^{*}X_{i},\quad 1\leq i\leq p-\ell,\]
hence
\[\langle\nabla^{*}_{X_{i}}X_{i},X_{k}\rangle^{*}=\langle\text{grad}^{*}\log f_ {i},X_{k}\rangle^{*}=-\langle\text{grad}^{*}\log f_{i}^{-1},X_{k}\rangle^{*},\]
for all \(1\leq k\leq n\) with \(k\neq i.\) Therefore, \(\text{span}\{X_{i}\}\) is an umbilical distribution in \((M^{n},\langle\,,\,\rangle^{*})\) with mean curvature vector field \(-(\text{grad}^{*}\log f_{i}^{-1})(\text{span}\{X_{i}\})^{\perp}\) for all \(1\leq i\leq p-\ell\). That \(\Delta^{\perp}\) is integrable has been shown in Lemma 4.3.
We claim \((\text{span}\{X_{i}\})^{\perp}\) is integrable for any \(1\leq i\leq p-\ell\). Indeed, as we know that \(\Delta\) is umbilical, then \([X_{j},X_{k}]\in\Delta\subset(\text{span}\{X_{i}\})^{\perp}\) for \(p-\ell+1\leq j,k\leq n\). On the other hand, for all \(1\leq j\leq p-\ell\) and \(p-\ell+1\leq k\leq n\), with \(j\neq i\), it follows from (40) and the conformal Codazzi equation \((\nabla^{\perp}_{X_{i}}\beta)(X_{k},X_{j})=(\nabla^{\perp}_{X_{k}}\beta)(X_{i },X_{j})\) that
\[\langle\nabla^{*}_{X_{k}}X_{j},X_{i}\rangle^{*}(\beta(X_{i},X_{i})-\beta(X_{j },X_{j}))=0,\]
hence \([X_{j},X_{k}]\in(\text{span}\{X_{i}\})^{\perp}\).
Finally, assume \(p-\ell\geq 3\) and let \(1\leq k\neq j\neq i\neq k\leq p-\ell\). From the conformal Codazzi equation
\[(\nabla^{\perp}_{X_{k}}\beta)(X_{j},X_{i})=(\nabla^{\perp}_{X_{j}}\beta)(X_{k },X_{i}),\]
we obtain
\[\langle\nabla^{*}_{X_{k}}X_{i},X_{j}\rangle^{*}(\beta(X_{j},X_{j})-\beta(X_{i },X_{i}))=\langle\nabla^{*}_{X_{j}}X_{i},X_{k}\rangle^{*}(\beta(X_{k},X_{k})- \beta(X_{i},X_{i})),\]
which is equivalent to
\[\langle\nabla^{*}_{X_{k}}X_{i},X_{j}\rangle^{*}(f_{j}\xi_{j}-f_{i}\xi_{i})= \langle\nabla^{*}_{X_{j}}X_{i},X_{k}\rangle^{*}(f_{k}\xi_{k}-f_{i}\xi_{i}).\]
This easily implies that \(\langle\nabla^{*}_{X_{j}}X_{k},X_{i}\rangle^{*}=0=\langle\nabla^{*}_{X_{k}}X_{j },X_{i}\rangle^{*}\), thus proving our claim.
## 5 The main result
We are now in a position to state and prove the main result of this article.
**Theorem 5.1**.: _Let \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 5\) and \(2p\leq n\), be a proper isometric immersion with flat normal bundle and constant Moebius curvature \(c\) with at least three distinct principal normal vector fields. Then \(f(M^{n})\) is the image by a conformal transformation of \(\mathbb{R}^{n+p}\) of an open subset of a submanifold as in one of Examples 3.1, which is determined by a submanifold \(g:U\subset\mathbb{Q}^{p-\ell}_{c}\to\mathbb{Q}^{2p-\ell}_{c}\) with the property that the norm of its mean curvature vector field is given as in Corollary 3.1, with \(0\leq\ell\leq p-2\) and \(\tilde{c}=0,1\) or \(-1,\) respectively._
Proof.: First notice that \((M^{n},\langle\,,\,\rangle_{f})\) is conformally flat, for \((M^{n},\langle\,,\,\rangle^{*})\) has constant curvature and \(\langle\,,\,\rangle_{f}\) and \(\langle\,,\,\rangle^{*}\) are conformal. Since the assumptions on \(n\) and \(p\) imply that \(p\leq n-3\), by Theorem 2.2 there exists a principal normal vector field \(\eta\) with multiplicity \(n-p+\ell\) for some \(0\leq\ell\leq p-2\). Moreover, the remaining \(p-\ell\) principal normal vector fields of \(f\) are all simple by Theorem 2.3. Since the Moebius form of an isometric immersion with constant Moebius curvature and flat normal bundle is closed by Lemma (4.1), the Moebius invariants of \(f\) are given as in Proposition (4.4). Let \(X_{1},\ldots X_{n}\) be an orthonormal frame with respect to the metric \(\langle\,,\,\rangle^{*}\) that diagonalizes \(\beta\) and \(\psi\) simultaneously, and such that
\[\beta(X_{i},X_{i})=\bar{\eta}, \tag{41}\]
for all \(p-\ell+1\leq i\leq n\), where \(\bar{\eta}\) is the Moebius principal normal vector field associated with \(\eta\). Consider the smooth distributions
\[\Delta=\mathrm{span}\{X_{i}:\,p-\ell+1\leq i\leq n\}\quad\text{and}\quad \Delta^{\perp}=\mathrm{span}\{X_{i}:\,1\leq i\leq p-\ell\}. \tag{42}\]
Corollary 4.1 shows that \(\mathcal{E}\) is a TP-net. By Theorem 2.4, at each \(x\in M^{n}\) there exists an open subset \(U\) of \(M^{n}\) containing \(x\) and a product representation \(\Phi:\prod_{k=1}^{p-\ell+1}M_{k}\to U\) of \(\mathcal{E}\) that is an isometry with respect to a twisted product metric
\[\langle\,,\,\rangle=\sum_{k=1}^{p-\ell+1}\rho_{k}^{2}\pi_{k}^{*}(\,,\,)_{k} \tag{43}\]
on \(\prod_{k=1}^{p-\ell+1}M_{k}\), for some _twisting functions_\(\rho_{k}\in C^{\infty}(\prod_{k=1}^{p-\ell+1}M_{k})\), \(1\leq k\leq p-\ell+1\).
Let \((E_{i})_{i=1,\ldots,p-\ell+1}\) be the product net of \(\prod_{k=1}^{p-\ell+1}M_{k}\), that is, \(E_{i}(x):=\tau_{i\,\,*}^{x}T_{x_{i}}M_{i}\) for all \(x=(x_{1},\ldots,x_{p-\ell+1})\in\prod_{k=1}^{p-\ell+1}M_{k}\), and let \(\tau_{i}^{x}:M_{i}\to\prod_{k=1}^{p-\ell+1}M_{k}\) be the standard inclusion. As observed in (2), \(E_{i}\) is an umbilical distribution with mean curvature vector field \(-(\mathrm{grad}(\log\rho_{i}))_{E_{i}^{\perp}}\), where \(\mathrm{grad}\) is the gradient with respect to \(\langle\,,\,\rangle\).
We claim that there exist a coordinate system \((\tilde{x}_{1},\ldots,\tilde{x}_{n})\) on \(\prod_{k=1}^{p-\ell+1}M_{k}\) and functions \(r_{i}=r_{i}(\tilde{x}_{i})\), \(1\leq i\leq p-\ell\), such that \(\rho_{i}=r_{i}(\tilde{x}_{i})f_{i}^{-1}\circ\Phi\). Indeed, as shown in the proof of Corollary 4.1, \(\mathrm{span}\{X_{i}\}\) is an umbilical distribution on \((M^{n},\langle\,,\,\rangle^{*})\) with mean curvature vector field \(-(\mathrm{grad}^{*}\log f_{i}^{-1})(\mathrm{span}\{X_{i}\})^{\perp}\) for any \(1\leq i\leq p-\ell\). Hence
\[-\langle\mathrm{grad}^{*}\log f_{i}^{-1},X_{k}\rangle^{*}=\langle \nabla^{*}_{X_{i}}X_{i},X_{k}\rangle^{*} =\langle\nabla^{*}_{\tau_{i\,*}^{x_{i}}X_{i}}\Phi_{*}\tau_{i\,*}^ {x}\bar{X}_{i},\Phi_{*}\tau_{k\,*}^{x}\bar{X}_{k}\rangle^{*}\] \[=\langle\Phi_{*}\nabla_{\tau_{i\,*}^{x}x_{i}\bar{X}_{i}}\tau_{i\, *}^{x}\bar{X}_{i},\Phi_{*}\tau_{k\,*}^{x}\bar{X}_{k}\rangle^{*}\] \[=\langle\nabla_{\tau_{i\,*}^{x}x_{i}\bar{X}_{i}}\tau_{i\,*}^{x} \bar{X}_{i},\tau_{k\,*}^{x}\bar{X}_{k}\rangle\] \[=-\langle\mathrm{grad}(\log\circ\rho_{i}),\tau_{k\,*}^{x}\bar{X}_ {k}\rangle,\]
for all \(k\neq i\) with \(1\leq k\leq n\). Thus,
\[(\Phi_{*}\mathrm{grad}(\log\circ\rho_{i}))_{(\Phi_{*}E_{i}(x))^{ \perp}} =(\mathrm{grad}^{*}(\log f_{i}^{-1}))_{(\mathrm{span}\{X_{i}\}( \Phi(x)))^{\perp}}\] \[=(\Phi_{*}\mathrm{grad}(\log\circ f_{i}^{-1}\circ\Phi))_{(\Phi_{*}E _{i}(x))^{\perp}}. \tag{44}\]
Introduce coordinates \((\tilde{x}_{1},\ldots,\tilde{x}_{p-\ell},\ldots,\tilde{x}_{n})\) on \(\prod_{k=1}^{p-\ell+1}M_{k}\). Using (44), we have
\[\rho_{i}=r_{i}(\tilde{x}_{i})f_{i}^{-1}\circ\Phi,\]
for some smooth functions \(r_{i}=r_{i}(\tilde{x}_{i})\) with \(1\leq i\leq p-\ell.\) Defining \(x_{i}=\int r_{i}(\tilde{x}_{i})d\tilde{x}_{i},\) with respect to the coordinates \((x_{1},\ldots,x_{p-\ell},\tilde{x}_{p-\ell+1},\ldots,\tilde{x}_{n})\) the metric (43) can be written as
\[\langle\,,\,\rangle=\sum_{i=1}^{p-\ell}f_{i}^{-2}dx_{i}^{2}+\varphi^{2}\sum_{ i,j=p-\ell+1}^{n}\tilde{g}_{ij}d\tilde{x}_{i}d\tilde{x}_{j}, \tag{45}\]
for some \(\varphi\in C^{\infty}(U),\) where \(\tilde{g}_{ij}\) are the coefficients of the metric \(\langle\,,\,\rangle_{p-\ell+1}\) (here we omit \(\Phi\) for the sake of simplicity).
Now we show that we can choose orthogonal coordinates on \(M_{p-\ell+1}\) with respect to \(\langle\,,\,\rangle_{p-\ell+1}\). Indeed, for a fixed choice \(\bar{x}:=(\bar{x}_{1},\ldots,\bar{x}_{p-\ell})\) of the coordinates \((x_{1},\ldots,x_{p-\ell})\), let \(\tau^{\bar{x}}_{p-\ell+1}:M_{p-\ell+1}\rightarrow\prod_{k=1}^{p-\ell+1}M_{k}\) be defined by \(\tau^{\bar{x}}_{p-\ell+1}(x_{p-\ell+1})=(\bar{x}_{1},\ldots,\bar{x}_{p-\ell},x_{p-\ell+1})\). Then \(\langle\tau^{\bar{x}}_{p-\ell+1\,*}v,\tau^{\bar{x}}_{p-\ell+1\,*}w\rangle=\varphi^{2}_{\bar{x}}(v,w)_{p-\ell+1}\) for all \(x_{p-\ell+1}\in M_{p-\ell+1}\) and all \(v,w\in T_{x_{p-\ell+1}}M_{p-\ell+1}\), where \(\varphi_{\bar{x}}\colon M_{p-\ell+1}\rightarrow\mathbb{R}_{+}\) is given by
\[\varphi_{\bar{x}}(x_{p-\ell+1})=\varphi(\bar{x}_{1},\ldots,\bar{x}_{p-\ell},x _{p-\ell+1}).\]
Thus \(\tau^{\bar{x}}_{p-\ell+1}\) is a conformal diffeomorphism of \(M_{p-\ell+1}\) onto the leaf \(L(\bar{x}):=\tau^{\bar{x}}_{p-\ell+1}(M_{p-\ell+1})\) of \(\Delta\) with conformal factor \(\varphi_{\bar{x}}\).
Since \(L(\bar{x})\) is umbilical in \((M^{n},\langle\,,\,\rangle^{*})\) with mean curvature vector field \(\delta(\bar{x},x_{p-\ell+1})=(\mathrm{grad}^{*}\log\varphi)_{\Delta^{\perp}}\), by the Gauss equation the metric \(g_{\bar{x}}\) in \(L(\bar{x})\) induced by \(\langle\,,\,\rangle^{*}\) has curvature
\[c(\bar{x}) =c+||(\mathrm{grad}^{*}\log\varphi)_{\Delta^{\perp}}||^{2}\] \[=c+\sum_{i=1}^{p-\ell}(X_{i}(\log\varphi))^{2}\] \[=c+\sum_{i=1}^{p-\ell}(f_{i}(\bar{x},x_{p-\ell+1}))^{2}(\frac{ \partial\log\varphi}{\partial x_{i}})^{2}.\]
Since \(\langle\,,\,\rangle^{*}\) has constant curvature, \(\delta\) is parallel in the normal connection of the inclusion of \(L(\bar{x})\) into \((M^{n},\langle\,,\,\rangle^{*})\). In particular, it has constant length, and hence the metric \(\tau^{\bar{x}}_{p-\ell+1}{}^{*}g_{\bar{x}}=\varphi^{2}_{\bar{x}}\langle\,,\,\rangle_{p-\ell+1}\) has constant curvature \(c(\bar{x})\). Hence there exist local orthogonal coordinates \((x_{p-\ell+1},\ldots,x_{n})\) on \((M_{p-\ell+1},\tau^{\bar{x}}_{p-\ell+1}{}^{*}g_{\bar{x}})\). Since \(\tau^{\bar{x}}_{p-\ell+1}{}^{*}g_{\bar{x}}=\varphi^{2}_{\bar{x}}\langle\,,\,\rangle_{p-\ell+1}\), \((x_{p-\ell+1},\ldots,x_{n})\) are also orthogonal coordinates on \((M_{p-\ell+1},\langle\,,\,\rangle_{p-\ell+1})\). Write
\[\langle\,,\,\rangle_{p-\ell+1}=\sum_{i=p-\ell+1}^{n}V_{i}^{2}dx_{i}^{2},\]
with \(V_{i}=\sqrt{\langle\frac{\partial}{\partial x_{i}},\frac{\partial}{\partial x _{i}}\rangle_{p-\ell+1}}\) for \(i\geq p-\ell+1.\) Then the metric (45) takes the form
\[\langle\,,\,\rangle^{*}=\sum_{i=1}^{p-\ell}f_{i}^{-2}dx_{i}^{2}+\varphi^{2} \sum_{i=p-\ell+1}^{n}V_{i}^{2}dx_{i}^{2}. \tag{46}\]
Denote
\[h_{ij}=\frac{1}{v_{i}}\frac{\partial v_{j}}{\partial x_{i}},\quad 1\leq i\neq j \leq n,\]
with \(v_{i}=f_{i}^{-1}\) for \(1\leq i\leq p-\ell\) and \(v_{k}=\varphi V_{k}\) for \(k\geq p-\ell+1.\) Then
\[h_{ij}=\begin{cases}-\dfrac{f_{i}}{f_{j}^{2}}\dfrac{\partial f_{j}}{\partial x_ {i}},&1\leq i\neq j\leq p-\ell,\\ f_{i}V_{j}\dfrac{\partial\varphi}{\partial x_{i}},&1\leq i\leq p-\ell\text{ and }j\geq p-\ell+1,\\ -\dfrac{1}{\varphi V_{i}}f_{j}^{-2}\dfrac{\partial f_{j}}{\partial x_{i}},&i \geq p-\ell+1\text{ and }1\leq j\leq p-\ell,\\ \frac{V_{j}}{V_{i}}\frac{\partial\log\varphi}{\partial x_{i}}+H_{ij},&i\neq j \geq p-\ell+1,\text{ with }\ H_{ij}:=\frac{1}{V_{i}}\frac{\partial V_{j}}{\partial x_{i}}.\end{cases}\]
It is well-known that the conditions for (46) to have constant sectional curvature \(c\) are
\[\begin{cases}\dfrac{\partial h_{ij}}{\partial x_{i}}+\dfrac{ \partial h_{ji}}{\partial x_{j}}+\sum_{k\neq i,j}h_{ki}h_{kj}+cv_{i}v_{j}=0,\\ \dfrac{\partial h_{ik}}{\partial x_{j}}=h_{ij}h_{jk},\quad 1\leq i\neq k \neq j\neq i\leq n.\end{cases}\]
Let us compute \(\frac{\partial h_{ik}}{\partial x_{j}}=h_{ij}h_{jk}\) for \(p-\ell+1\leq i\neq j\leq n\) and \(1\leq k\leq p-\ell.\) We have
\[\frac{\partial h_{ik}}{\partial x_{j}}=\frac{1}{(\varphi V_{i})^{2}}\frac{ \partial(\varphi V_{i})}{\partial x_{j}}f_{k}^{-2}\frac{\partial f_{k}}{ \partial x_{i}}+\frac{2}{\varphi V_{i}}f_{k}^{-3}\frac{\partial f_{k}}{ \partial x_{j}}\frac{\partial f_{k}}{\partial x_{i}}-\frac{1}{\varphi V_{i}}f _{k}^{-2}\frac{\partial^{2}f_{k}}{\partial x_{j}\partial x_{i}}\]
and
\[h_{ij}h_{jk}=(\frac{V_{j}}{V_{i}}\frac{\partial\log\varphi}{ \partial x_{i}}+\frac{1}{V_{i}}\frac{\partial V_{j}}{\partial x_{i}})(-\frac{ 1}{\varphi V_{j}}f_{k}^{-2}\frac{\partial f_{k}}{\partial x_{j}})=-\frac{1}{ \varphi V_{i}}\frac{\partial\log(\varphi V_{j})}{\partial x_{i}}f_{k}^{-2} \frac{\partial f_{k}}{\partial x_{j}}.\]
Multiplying the previous expression by \(\varphi V_{i}f_{k}^{2},\) we obtain
\[\frac{\partial^{2}f_{k}}{\partial x_{j}\partial x_{i}}=\frac{ \partial\log(\varphi V_{i})}{\partial x_{j}}\frac{\partial f_{k}}{\partial x _{i}}+\frac{\partial\log(\varphi V_{j})}{\partial x_{i}}\frac{\partial f_{k}}{ \partial x_{j}}+2f_{k}^{-1}\frac{\partial f_{k}}{\partial x_{j}}\frac{ \partial f_{k}}{\partial x_{i}}. \tag{47}\]
Since \(\sum_{k=1}^{p-\ell}f_{k}^{2}=1,\) then
\[\sum_{k=1}^{p-\ell}f_{k}\frac{\partial f_{k}}{\partial x_{i}}=0 \quad\text{and}\quad\sum_{k=1}^{p-\ell}f_{k}\frac{\partial^{2}f_{k}}{\partial x _{j}\partial x_{i}}=-\sum_{k=1}^{p-\ell}\frac{\partial f_{k}}{\partial x_{j}} \frac{\partial f_{k}}{\partial x_{i}}. \tag{48}\]
It follows from (47) and (48) that
\[\sum_{k=1}^{p-\ell}\frac{\partial f_{k}}{\partial x_{j}}\frac{ \partial f_{k}}{\partial x_{i}}=0,\ \ i\neq j\geq p-\ell+1. \tag{49}\]
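In more detail: multiplying (47) by \(f_{k}\), summing over \(1\leq k\leq p-\ell\) and using the first identity in (48) to discard the two middle terms, the second identity in (48) turns the left hand side into \(-\sum_{k}\frac{\partial f_{k}}{\partial x_{j}}\frac{\partial f_{k}}{\partial x_{i}}\), so that
\[-\sum_{k=1}^{p-\ell}\frac{\partial f_{k}}{\partial x_{j}}\frac{\partial f_{k}}{\partial x_{i}}=2\sum_{k=1}^{p-\ell}\frac{\partial f_{k}}{\partial x_{j}}\frac{\partial f_{k}}{\partial x_{i}},\]
which forces (49).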
Define \(W=(f_{1},\ldots,f_{p-\ell})\quad\text{and}\quad U_{i}=\left(\frac{\partial f _{1}}{\partial x_{i}},\ldots,\frac{\partial f_{p-\ell}}{\partial x_{i}}\right)\), \(p-\ell+1\leq i\leq n.\) By Proposition 4.5, \(\Delta^{\perp}\) is totally geodesic with respect to \(\left\langle\,,\,\right\rangle^{*}\) if and only if \(U_{i}=0\) for all \(p-\ell+1\leq i\leq n.\) Equations (48) and (49) become
\[\left\langle W,U_{i}\right\rangle=0,\quad p-\ell+1\leq i\leq n,\] \[\left\langle U_{i},U_{j}\right\rangle=0,\quad p-\ell+1\leq i\neq j \leq n,\]
that is, \(W,U_{p-\ell+1},\ldots,U_{n}\) are \(n-p+\ell+1\) orthogonal vectors in \(\mathbb{R}^{p-\ell}.\)
Since \(n\geq 2p\) by assumption, we have \(n-p+\ell+1>p-\ell\), that is, \(n>2(p-\ell)-1\). Thus, we can assume that \(U_{j}=0\) for \(n-p+\ell+1\leq j\leq n\). This means that \(f_{1},\ldots,f_{p-\ell}\) do not depend on \(x_{j}\) for \(n-p+\ell+1\leq j\leq n\). Assume by contradiction that \(\Delta^{\perp}\) is not totally geodesic with respect to \(\langle\,,\,\rangle^{*}\). Then there exist \(x\in M^{n}\) and \(j\geq p-\ell+1\), say \(j=p-\ell+1\), such that \(U_{p-\ell+1}(x)\neq 0\). Assume, without loss of generality, that \(\frac{\partial f_{1}}{\partial x_{p-\ell+1}}\neq 0\) on a neighborhood of \(x\). Applying (47) for \(j=p-\ell+1\), \(k=1\) and \(i\geq n-p+\ell+1\), we obtain
\[\frac{\partial\log(\varphi V_{p-\ell+1})}{\partial x_{i}}=0,\]
that is, \(v_{p-\ell+1}:=\varphi V_{p-\ell+1}\) does not depend on \(x_{i}\). We can write
\[\langle\,,\,\rangle^{*}=\sum_{i=1}^{p-\ell}f_{i}^{-2}dx_{i}^{2}+\varphi^{2} \sum_{i=p-\ell+1}^{n-p+\ell}V_{i}^{2}dx_{i}^{2}+(\varphi V_{p-\ell+1})^{2}V_{p -\ell+1}^{-2}\sum_{i=n-p+\ell+1}^{n}V_{i}^{2}dx_{i}^{2}.\]
Denote \(g_{1}=\sum_{i=1}^{p-\ell}f_{i}^{-2}dx_{i}^{2}+\varphi^{2}\sum_{i=p-\ell+1}^{n- p+\ell}V_{i}^{2}dx_{i}^{2}\) and \(g_{2}=V_{p-\ell+1}^{-2}\sum_{i=n-p+\ell+1}^{n}V_{i}^{2}dx_{i}^{2},\) so that
\[\langle\,,\,\rangle^{*}=g_{1}+v_{p-\ell+1}^{2}g_{2}.\]
The fact that \(\langle\,,\,\rangle^{*}\) has constant sectional curvature \(c\) is well-known to be equivalent to the following conditions:
1. \(g_{1}\) has constant curvature \(c.\)
2. \(\operatorname{Hess}v_{p-\ell+1}+cv_{p-\ell+1}g_{1}=0\)
3. \(g_{2}\) has constant curvature \(||\operatorname{grad}v_{p-\ell+1}||^{2}+cv_{p-\ell+1}^{2},\)
where \(\operatorname{Hess}\) and \(\operatorname{grad}\) are computed with respect to \(g_{1}\). In particular,
\[\operatorname{Hess}v_{p-\ell+1}(\partial_{i},\partial_{i})+cv_{p-\ell+1}v_{i} ^{2}=0,\quad 1\leq i\leq p-\ell. \tag{50}\]
We have
\[\nabla_{\partial_{i}}\partial_{i}=\nabla_{\partial_{i}}(v_{i}X_{i}) =\frac{\partial v_{i}}{\partial x_{i}}X_{i}+v_{i}\left(\sum_{ \genfrac{}{}{0.0pt}{}{j=1}{j\neq i}}^{n-p+\ell}\langle\nabla_{\partial_{i}}X _{i},X_{j}\rangle X_{j}\right)\] \[=\frac{\partial v_{i}}{\partial x_{i}}X_{i}-v_{i}\sum_{ \genfrac{}{}{0.0pt}{}{j=1}{j\neq i}}^{n-p+\ell}h_{ji}X_{j}\] \[=\frac{\partial v_{i}}{\partial x_{i}}\frac{1}{v_{i}}\frac{ \partial}{\partial x_{i}}-v_{i}\sum_{\genfrac{}{}{0.0pt}{}{j=1}{j\neq i}}^{n- p+\ell}h_{ji}\frac{1}{v_{j}}\frac{\partial}{\partial x_{j}}.\]
Hence
\[\operatorname{Hess}v_{p-\ell+1}(\partial_{i},\partial_{i}) =\frac{\partial}{\partial x_{i}}(v_{i}h_{i(p-\ell+1)})-(\nabla_{ \partial_{i}}\partial_{i})(v_{p-\ell+1})\] \[=\frac{\partial v_{i}}{\partial x_{i}}h_{i(p-\ell+1)}+v_{i}\frac {\partial h_{i(p-\ell+1)}}{\partial x_{i}}-\frac{1}{v_{i}}\frac{\partial v_{i}} {\partial x_{i}}\frac{\partial v_{p-\ell+1}}{\partial x_{i}}+v_{i}\sum_{ \genfrac{}{}{0.0pt}{}{j=1}{j\neq i}}^{n-p+\ell}h_{ji}\frac{1}{v_{j}}\frac{ \partial v_{p-\ell+1}}{\partial x_{j}}\] \[=v_{i}\left(\frac{\partial h_{i(p-\ell+1)}}{\partial x_{i}}+\sum _{\genfrac{}{}{0.0pt}{}{j=1}{j\neq i,p-\ell+1}}^{n-p+\ell}h_{ji}h_{j(p-\ell+1)} +h_{(p-\ell+1)i}\frac{1}{v_{p-\ell+1}}\frac{\partial v_{p-\ell+1}}{\partial x _{p-\ell+1}}\right).\]
Thus, (50) is equivalent to
\[\frac{\partial h_{i(p-\ell+1)}}{\partial x_{i}}+\sum_{\begin{subarray}{c}j=1\\ j\neq i,p-\ell+1\end{subarray}}^{n-p+\ell}h_{ji}h_{j(p-\ell+1)}+h_{(p-\ell+1)i} \frac{1}{v_{p-\ell+1}}\frac{\partial v_{p-\ell+1}}{\partial x_{p-\ell+1}}+cv_{ p-\ell+1}v_{i}=0. \tag{51}\]
On the other hand, since \(g_{1}=\sum_{i=1}^{n-p+\ell}v_{i}^{2}dx_{i}^{2}\) has constant sectional curvature \(c\), then
\[\frac{\partial h_{i(p-\ell+1)}}{\partial x_{i}}+\frac{\partial h_{(p-\ell+1)i}}{\partial x_{p-\ell+1}}+\sum_{\begin{subarray}{c}j=1\\ j\neq i,p-\ell+1\end{subarray}}^{n-p+\ell}h_{ji}h_{j(p-\ell+1)}+cv_{i}v_{p-\ell+1}=0.\]
Comparing with (51) yields
\[\frac{\partial h_{(p-\ell+1)i}}{\partial x_{p-\ell+1}}=h_{(p-\ell+1)i}\frac{ 1}{v_{p-\ell+1}}\frac{\partial v_{p-\ell+1}}{\partial x_{p-\ell+1}},\]
or equivalently,
\[\frac{\partial}{\partial x_{p-\ell+1}}(h_{(p-\ell+1)i}/v_{p-\ell+1})=0.\]
Using that \(h_{(p-\ell+1)i}=-\frac{1}{v_{p-\ell+1}}f_{i}^{-2}\frac{\partial f_{i}}{ \partial x_{p-\ell+1}}\) for \(1\leq i\leq p-\ell\), we obtain
\[\frac{1}{v_{p-\ell+1}^{2}}f_{i}^{-2}\frac{\partial f_{i}}{\partial x_{p-\ell+ 1}}=\phi_{i}(x_{1},\ldots,x_{p-\ell},x_{p-\ell+2},\ldots,x_{n-p+\ell}),\]
for some function \(\phi_{i}=\phi_{i}(x_{1},\ldots,x_{p-\ell},x_{p-\ell+2},\ldots,x_{n-p+\ell}).\) In particular, for \(i=1\), the preceding equation yields a contradiction, for \(\frac{\partial f_{1}}{\partial x_{p-\ell+1}}\neq 0\) at \(x\) by hypothesis, whereas its right hand side does not depend on \(x_{p-\ell+1}\).
We conclude that \(\Delta^{\perp}\) is totally geodesic with respect to \(\langle\,,\,\rangle^{*}\), hence spherical with respect to the metric induced by \(f\) by Proposition 4.1, with \((\operatorname{grad}_{f}\log\rho)_{\Delta}\) as its mean curvature vector field.
Since \(\Delta=E_{\eta}\), Theorem 2.1 shows that \(f\) is locally, up to a composition with a conformal transformation of \(\mathbb{R}^{n+p}\), an immersion \(f=\Theta\circ(g,\operatorname{Id}):M^{p-\ell}\times\mathbb{Q}_{-\tilde{c}}^{n-p+\ell}\to\mathbb{R}^{n+p}\), where \(g:M^{p-\ell}\to\mathbb{Q}_{\tilde{c}}^{2p-\ell}\), \(0\leq\ell\leq p-2\), is an isometric immersion with flat normal bundle and \(\Theta=\operatorname{Id}:\mathbb{R}^{n+p}\to\mathbb{R}^{n+p}\) if \(\tilde{c}=0\) or \(\Theta\) is as in Examples 3.1-\((ii)\) and \((iii)\) if \(\tilde{c}\neq 0.\) Now, since \(f\) is proper, so is \(g\) (it follows from Proposition 5 of [3] that the property of an isometric immersion with flat normal bundle being proper is invariant under conformal changes of the ambient metric). Hence \(\nu^{g}\) vanishes identically: otherwise, if \(\nu^{g}(y)\geq 1\) at some point \(y\), then \(g\) would have at least one of its \(p-\ell\) principal normal vector fields vanishing, which would imply that \(\eta\) has multiplicity strictly greater than \(n-p+\ell\), a contradiction. Since \(f\) has constant Moebius curvature \(c\), the conclusion of this theorem is a consequence of Lemma 3.1 and Corollary 3.1.
**Remarks 5.1**.: \((i)\) Theorem 5.1 together with Theorem 4.1 provides a complete classification of all isometric immersions \(f:M^{n}\to\mathbb{R}^{n+p}\), \(n\geq 2p\) and \(n\geq 5\), with constant Moebius curvature and flat normal bundle.
\((ii)\) The proof of Theorem 5.1 also works for isometric immersions \(f:M^{4}\to\mathbb{R}^{6}\) with constant Moebius curvature and flat normal bundle that have a principal normal vector field \(\eta\) of multiplicity \(2\) (observe that the case in which \(\eta\) has multiplicity \(3\) is included in Theorem 4.1). Therefore, for the classification of all submanifolds \(f:M^{n}\to\mathbb{R}^{n+2}\), \(n\geq 4\), with constant Moebius curvature and flat normal bundle, it remains to study the case in which there are four distinct principal normal vector fields. |
2301.12898 | A Role of Fractional Dimension in Study Physics | In this study, we explore the field of physics through the lens of fractional
dimensionality. We propose that space is not confined to integer dimensions
alone but can also be understood as a superposition of spaces that exist
between these integer dimensions. The concept of fractional dimensional space
arises from the idea that the space between integer dimensions is filled, which
occurs through the application of a fractional derivative operator (the local
part) that rotates the integer dimension to encompass all spaces between two
integers. We introduce five foundational axioms that provide a mathematical
framework for studying physics from this fractional dimensional perspective. To
illustrate the framework's utility, we present three scenarios: static systems,
linear trajectories, and quadratic trajectories. In this context, motion along
linear or quadratic trajectories within a fractional dimensional space
corresponds to an inertial reference frame or an accelerated system,
respectively, in classical physics. Moreover, we propose that the coupling of
space and time, commonly referred to as space-time, is better understood as
space-dimension-time within this framework, where the dimension serves as an
interconnecting platform. Finally, we derive the wave equation from the
perspective of fractional dimensions, focusing on linear trajectories in
fractional dimensional space. This approach provides new insights into the
behavior of electromagnetic waves in both lossless and lossy media, and it
offers a fresh interpretation of the Doppler effect and gravitational redshift
(or blueshift) phenomena from the standpoint of fractional dimensionality. | Ali Dorostkar | 2023-01-06T17:01:51Z | http://arxiv.org/abs/2301.12898v5 | # A Role of Fractional Dimension in Study Physics
###### Abstract
We study physics from fractional dimensional perspective. It is considered that the whole space is not only integer dimensions but also a superposition of spaces between integer dimensions. Space filling between integer dimensions makes a fractional dimensional space. This happens through a rotation of integer dimension by fractional derivative operator to sweep all spaces between two integer dimensions. Then, we introduce the dimension principle derived from perspective visual effect of a projective space. It expresses that the dimension of moving particle varies from the observer point of view. Subsequently, the mathematical framework is built via fractional derivative operator to represent fractional dimensional basis vector and function. Afterwards, three scenarios such as static, linear and quadratic trajectories are demonstrated. Where moving with linear or quadratic trajectories in fractional dimensional space corresponds to an inertial framework or accelerated system in physics. Also from this opinion, it is realized that the coupling of space and time (space-time) happens through the dimension with an interconnection role where we call it space-dimension-time. Finally, the wave equation is derived from fractional dimensional point of view with a linear trajectory in fractional dimensional space. And the behavior of electromagnetic wave in lossless and lossy mediums, Doppler effect and gravitational red (blue)-shift are also demonstrated from fractional dimensional aspect.
## 1 Introduction
Perception of the nature is one of the main topics of science and particularly in study of physics. Based on the Godel's incompleteness theorems [1], a feasibility of complete understanding of the nature is impossible. It means that there is no and will not have a complete and comprehensive theory for the nature and we can just improve a model in each step and this process will continue for ever. This process has continued during the history of physics. The classical or Newtonian's physics as a fundamental framework of physics is a great tool to describe most of phenomena in the nature. However, the lack of solution for justification of particles dynamic in microscopic scale physics or a reason for gravitational force has excited us to go further step to bring quantum mechanics (QM) and relativity. As we know from general relativity (GR) being created, gravity is due to the curvature of space-time by mass which presents a large scale physics [2]. On the other side, quantum mechanics is a good model for small scale physics [3]. However, there are some open questions such as trajectory of particle and non-locality effect in QM. There are also several attempts to unify a theory for physics through a relation between GR and QM such as M-theory or string theory [4, 5], however; there is still lack of connection between quantum and gravity. This introduction reminds us there are always some open questions in physics leading the best understanding of nature. In this regard, to understand how more information about physics can be derived from fractional dimensional point of view, we get started to make a concept: how does a moving of particle in the integer space correspond to that in the fractional dimensional space? For instance, three axes of \(x_{1},x_{2},x_{3}\) make a space of three integer dimensions. While in our perspective, in addition of integer dimensions the fractional dimensions must be taken into account. Fractional basis function (vector) can be expressed by a rotation of basis function of integer dimensions [6]. It is shown that this rotation can be described by fractional derivative (FDr) operator. Then, this rotation fills the whole space between the integer dimensions as the fractional
dimensional space. We express dimension principle and subsequently makes a mathematical framework through FDr operator. Afterwards, we can investigate moving (propagating) a particle (signal) through particular trajectories in fractional dimensional space.
## 2 Dimension Principle
Dimensions of an object will be smaller (larger) with increasing (decreasing) the spatial distance. It is assumed as a principle for fractional dimensional point of view where it is related to a projective space originated from perspective visual effect [7, 8]. Thus, the equivalent statement can be expressed as follows:
let observer in S coordinate and object in \(S^{\prime}\) coordinate, then there is an acute angle between them which is the angle is zero when the coordinate distance is zero and will be 90 degrees when distance goes to infinity (or where disappears from observer). As will show, this rotation can be described through FDr operator. As shown in Fig. 1, an object in \(S^{\prime}\) coordinate will be smaller and smaller when it is moving away from an observer in \(S\) coordinate. This can be realized through a trajectory of \(S^{\prime}\) coordinates where \(S^{\prime}\) is not an orthogonal toward \(S\) coordinate and makes a particular angle in each space-time position. In other words, the projection of coordinate \(S^{\prime}\) on \(S\) will change depend on the distance. As will shown mathematically, the dimension variation is following through a fractional derivative with certain trajectory. Note that, in Fig. 1 the scalar 1 is derived from an integer derivative of x or t.
## 3 Mathematical Formulation
In this section, we make a mathematical formulation for the fractional dimensional basis vector and function. Before further processing, we demonstrate some important FDr definitions and determine the FDr expression for some special functions in this study.
### Fractional Derivative Operator
As we know, there are several definitions for FDr such as direct, Riemann-Liouville (RL), Caputo, Liouville-Caputo and Fourier methods [9, 10]. The definition for RL FDr is expressed as follows
\[{}^{RL}_{a}D^{\alpha}_{t}f(t)=\frac{1}{\Gamma(n-\alpha)}(\frac{d}{dt})^{n}\int _{a}^{t}(t-x)^{n-\alpha-1}f(x)dx \tag{1}\]
Figure 1: Schematic of S and \(S^{\prime}\) coordinates in a movement framework
where \(D^{\alpha}\), a, \(\Gamma\) are the FDr operator, an initial point for integral and Gamma function, respectively. There is an extra term as well as common one for every definitions presented for FDr based on the generalized hypergeometric function [10]. However, all these approaches will converge to a same result in the infinite limit of the argument of the functions. For instance for RL definition, it happens when \(a\) go to minus infinity (\(a\rightarrow-\infty\)) [9].
It can be seen that FDr operator is a non-local operator unlike local integer derivative. This assumption can help us to explain a non-locality effect in physics. For instance, integer time derivative of position give us velocity or momentum, however; from fractional dimensional point of view this derivative is a non-local operator and impact from a temporal or spatial intervals. It is under further investigations and we will add more results about that in the future.
In this study, we need FDr of trigonometric, exponential and power series functions for further investigations. In this regard, we use direct FDr definition or by assuming \(a\rightarrow-\infty\) for other FDr definitions like RL. Therefore, the following results are derived by a normalization of FDr through a division by a normalization factor.
\[D_{\mathrm{N}}^{\alpha}\Bigg{\{}cos(\omega S)\Bigg{\}} =\frac{1}{D_{0}}D^{\alpha}\Bigg{\{}cos(\omega S)\Bigg{\}}=\frac{1 }{D_{0}}(\omega)^{\alpha}cos(\omega S+\frac{\pi}{2}\alpha)=cos(\omega S+\frac {\pi}{2}\alpha),\] \[D_{\mathrm{N}}^{\alpha}\Bigg{\{}sin(\omega S)\Bigg{\}} =\frac{1}{D_{0}}D^{\alpha}\Bigg{\{}sin(\omega S)\Bigg{\}}=\frac{1 }{D_{0}}(\omega)^{\alpha}sin(\omega S+\frac{\pi}{2}\alpha)=\sin(\omega S+\frac {\pi}{2}\alpha), \tag{2}\] \[D_{\mathrm{N}}^{\alpha}\Bigg{\{}e^{(i\,\omega S)}\Bigg{\}} =\frac{1}{D_{0}}D^{\alpha}\Bigg{\{}e^{(i\,\omega S)}\Bigg{\}}= \frac{1}{D_{0}}(\omega)^{\alpha}e^{(\omega S+\frac{\pi}{2}\alpha)}=e^{i\,( \omega S+\frac{\pi}{2}\alpha)},\]
where \(D_{\mathrm{N}}^{\alpha}\) and \(D_{0}\) denote the normalized FDr operator and the normalization factor, respectively. This normalization ensures that the basis functions remain normalized (\(|\phi^{\alpha}|=1\)). Here \(D_{0}\) is equal to \((\omega)^{\alpha}\).
The fractional derivative of a basic power function, \(x^{n}\), can be derived from the direct, Riemann-Liouville, Caputo or other methods, with the same result [11, 12, 13]
\[D_{\mathrm{N}}^{\alpha}\Bigg{\{}x^{n}\Bigg{\}}=\frac{1}{D_{0}}D^{\alpha} \Bigg{\{}x^{n}\Bigg{\}}=\frac{1}{D_{0}}\frac{\Gamma(n+1)x^{(n-\alpha)}}{ \Gamma(n-\alpha+1)}=x^{(n-\alpha)}, \tag{3}\]
where \(D_{0}\) is equal to \(\frac{\Gamma(n+1)}{\Gamma(n-\alpha+1)}\).
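As a quick numerical sanity check of (2) and (3) (our own illustrative sketch, not part of the derivation; the function names are ours and only Python standard-library calls are assumed), the following snippet compares a Grünwald-Letnikov estimate of \(D^{\alpha}\{x^{n}\}\) with \(\frac{\Gamma(n+1)}{\Gamma(n-\alpha+1)}x^{n-\alpha}\), and evaluates \((i\omega)^{\alpha}\) to confirm the \(\frac{\pi}{2}\alpha\) phase shift used for the exponential basis function.

```python
import cmath
import math

def gl_fractional_derivative(f, x, alpha, h=1e-4):
    """Grunwald-Letnikov estimate of D^alpha f(x) with lower terminal 0."""
    acc, coeff = 0.0, 1.0                # coeff = (-1)^k * binomial(alpha, k)
    for k in range(int(x / h) + 1):
        acc += coeff * f(x - k * h)
        coeff *= (k - alpha) / (k + 1)   # recurrence for the GL weights
    return acc / h ** alpha

alpha, n, x = 0.5, 2.0, 1.0
numeric = gl_fractional_derivative(lambda s: s ** n, x, alpha)
analytic = math.gamma(n + 1) / math.gamma(n - alpha + 1) * x ** (n - alpha)  # Eq. (3), before normalization
print(numeric, analytic)                 # both close to 1.50

# Exponential basis function: D^alpha e^{i w S} = (i w)^alpha e^{i w S}, so the
# normalized derivative only adds the phase (pi/2)*alpha, as stated in Eq. (2).
w = 3.0
factor = (1j * w) ** alpha
print(abs(factor), w ** alpha)                     # equal moduli
print(cmath.phase(factor), math.pi / 2 * alpha)    # equal phases
```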
### Fractional Dimensional Basis Vector
Let \(S\) and \(S_{\alpha}\) be the basis vectors of the linear and fractional spaces, respectively, where \(S\), as indicated in (4), is a vector of the independent variables of space and time and \(\alpha\) is the order of the fractional derivative.
\[S=[x_{1},x_{2},x_{3},t], \tag{4}\]
The variables \(x_{i}\) (\(i=1,2,3\)) correspond to space and t to time. Then, using (3), the relation between the two coordinates is written as follows
\[S_{\alpha}=D_{\mathrm{N}}^{\alpha}\Bigg{\{}S\Bigg{\}}=S^{(1-\alpha)}. \tag{5}\]
In conclusion, the coordinate \(S_{\alpha}\) (red arrow in Fig. 2) for a certain \(\alpha\) is a rotation of the original coordinate S in fractional dimensional space; it can be derived from an FDr operator of a certain order \(\alpha\). Fig. 3 shows how a basis vector x can reach the scalar 1 through a rotation. The scalar 1 is chosen because an integer derivative of x is equal to one, and this rotation happens via a fractional derivative with the angle \(\theta=\frac{\pi}{2}\alpha\). To depict the fractional space of space-time we need three orthogonal axes, \(x\) (representing the \(x_{i}\)), \(t\) and 1, as shown in Fig. 3(a), where \(x\) and \(t\) are vectors and 1 denotes the scalar one. The red arrows depicted in Fig. 3(b) to (d) show that, for \(\alpha\) near zero (Fig. 3(b)), the space-time axes make an angle (\(\theta\)) near zero, while as \(\alpha\) goes to one (Fig. 3(d)) the angle (\(\theta\)) approaches \(\pi/2\). As will be shown in the next section, there are different scenarios depending on the rotation angle \(\theta\).
### Fractional Dimensional Basis Function
Let \(\phi(S)\) and \(\phi_{\alpha}(S)\) be the basis functions of the linear and fractional spaces, respectively. Then, the following relation holds between the basis functions of the two spaces:
\[\phi_{\alpha}(S)=\frac{1}{D_{0}}D^{\alpha}\Bigg{\{}\phi(S)\Bigg{\}}=D_{\rm N}^ {\alpha}\Bigg{\{}\phi(S)\Bigg{\}}, \tag{6}\]
Note that \(\alpha\) is a certain order of fractional dimension for each space-time position.
## 4 Preliminary plan
We limit our study to the three scenarios shown in Table 1, since they cover the more general cases in physics. The first, the static scenario, occurs when \(\alpha\) is a constant value. The second is a linear dynamic, which corresponds to an inertial framework in physics and happens when \(\alpha\) has a linear trajectory. The last is an accelerated system, which occurs when the trajectory function is quadratic. In this section we first discuss our view of the dimension, and then investigate these three scenarios for both the fractional dimensional basis vector and the basis function. The trajectory function coefficients \(\alpha_{i}\), \(\beta_{ij}\) and \(\gamma_{ijk}\) shown in Table 1 can be expressed as a vector or matrix (tensor), depending on the trajectory of the fractional dimension.
Figure 2: Schematic of the fractional dimensional basis vector \(S_{\alpha}\) (red arrow) obtained from a rotation of the integer basis vector S through the FDr operator; the vector rotates from dimension 0 to 1 (\(0\leq\alpha\leq 1\)), i.e., by zero to 90 degrees (\(0\leq\theta\leq\frac{\pi}{2}\)).
There is no space-time coupling in the static scenario since it is a steady state. To show a coupling between space and time, one can consider both of them undergoing a simple linear movement; indeed, the \(x_{i}\) and t axes move rotationally toward the scalar 1 axis. Since space and time are independent, we can write
\[\begin{split}&\alpha(x_{i})=\beta_{i}x_{i}+\alpha_{i}\ \ (i=1,2,3),\\ &\alpha(t)=\beta_{4}t+\alpha_{4},\end{split} \tag{7}\]
taking an integer derivative, we have
\[\begin{split}&\frac{d(\alpha(x_{i}))}{dx_{i}}=\beta_{i},\\ &\frac{d(\alpha(t))}{dt}=\beta_{4},\end{split} \tag{8}\]
and dividing the two derivatives,
\[\frac{\frac{d(\alpha(x_{i}))}{dx_{i}}}{\frac{d\alpha(t)}{dt}}=\frac{d(\alpha(x_ {i}))}{d(\alpha(t))}\frac{dt}{dx_{i}}=\frac{d(\alpha(x_{i}))}{d(\alpha(t))} \frac{1}{v_{i}}=\frac{\beta_{i}}{\beta_{4}}, \tag{9}\]
therefore, we have
\[\frac{d(\alpha(x_{i}))}{d(\alpha(t))}=v_{i}\frac{\beta_{i}}{\beta_{4}}. \tag{10}\]
Relation (10) tells us that, when moving with a constant speed, the rotational movement of the x-axis toward the t-axis is related to the speed. In other words, space is related to time when looked at from the dimensional point of view.
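A minimal symbolic check of (10) (our own sketch; the symbol names are illustrative): for motion at constant speed \(x_{i}=v_{i}t\), substituting the linear trajectories (7) and forming the ratio of derivatives as in (9) reproduces \(d\alpha(x_{i})/d\alpha(t)=v_{i}\beta_{i}/\beta_{4}\).

```python
import sympy as sp

t, v, beta_i, beta_4, a_i, a_4 = sp.symbols('t v beta_i beta_4 a_i a_4', positive=True)

x = v * t                           # constant-speed motion along x_i
alpha_x = beta_i * x + a_i          # Eq. (7): alpha(x_i) = beta_i x_i + alpha_i
alpha_t = beta_4 * t + a_4          # Eq. (7): alpha(t)   = beta_4 t   + alpha_4

# Eqs. (9)-(10): ratio of the derivatives taken through the common parameter t
ratio = sp.diff(alpha_x, t) / sp.diff(alpha_t, t)
print(sp.simplify(ratio - v * beta_i / beta_4))   # prints 0, i.e. Eq. (10) holds
```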
\begin{table}
\begin{tabular}{c|c|c|c} \hline Scenario & Static & Linear Dynamic & Accelerated Dynamic \\ \hline Trajectory (\(\alpha_{i}(S)\)) & \(\alpha_{i}\) & \(\beta_{ij}S_{i}+\alpha_{i}\) & \(\gamma_{ijk}S_{j}S_{k}+\beta_{ij}S_{i}+\alpha_{i}\) \\ \end{tabular}
\end{table}
Table 1: Different Observations
Figure 4: Space-time coupling through the dimension called Space-Dimension-Time (SDT)
### Presentation of Static and Moving Coordinate Systems via Fractional Dimensional Basis Vector
Besides the static case, there are two important types of dynamic systems in physics: movement with constant velocity and movement with constant acceleration. In our study these types of system correspond to an angle rotation of the coordinate \(S_{\alpha}\) with a linear trajectory (constant velocity) or a quadratic trajectory (constant acceleration). It should be noted that if the two coordinates \(S\) and \(S_{\alpha}\) coincide, then \(\alpha\) is equal to zero (\(\theta=0\)), and when they are very far from each other \(\alpha\) goes to 1 (\(\theta\rightarrow\frac{\pi}{2}\)). This property is independent of the type of dynamic system: \(S\) and \(S_{\alpha}\) coincide for both an accelerated and an inertial framework when \(\alpha\) is equal to zero, and when the two coordinates are far apart \(\alpha\) goes to 1 for both systems.
#### 4.2.1 Static scenario
In this case, the two coordinates S and \(S_{\alpha}\) are considered to be fixed. Therefore, there is a constant spatial distance (\(S_{\text{offset}}\)) between a fixed observer and the static framework. It should be noted that the spatial distance offset is equal to \(|S_{\text{offset}}|=\sqrt{x_{1,\text{offset}}^{2}+x_{2,\text{offset}}^{2}+x_{3,\text{offset}}^{2}}\) and the time distance offset can be zero in the static case; this is not valid in the dynamic scenario, however, because there we have a space-time distance, or in this view an SDT distance. To find \(\alpha\), we need to define an infinite space parameter (\(|S_{\infty}|\)). The infinite space (time) parameter (\(|S_{\infty}|\)) is the value of space (time) beyond which the observer (camera) can no longer measure the observed framework and it disappears from the observer's point of view (\(|S_{\infty}|=\sqrt{x_{1,\infty}^{2}+x_{2,\infty}^{2}+x_{3,\infty}^{2}}\)). Therefore, \(\alpha\) for the static scenario can be expressed as follows
\[\alpha=\frac{|S_{\text{offset}}|}{|S_{\infty}|}=\alpha_{0}. \tag{11}\]
So, the fractional dimensional basis vector is
\[S_{\alpha}=D_{\text{N}}^{\alpha_{0}}\big{\{}S\big{\}}. \tag{12}\]
Different situations for the static observation are tabulated in Table 1. When the two coordinates coincide, \(S_{\text{offset}}\) is equal to zero, resulting in \(\alpha=0\) (\(S=S_{\alpha}\)). If \(S_{\text{offset}}\) is larger than zero and smaller than \(S_{\infty}\), then \(\alpha_{0}\) lies between zero and one (\(0<\alpha_{0}<1\)), and when the two coordinates are very far from each other, \(S_{\text{offset}}\to S_{\infty}\), which leads to \(\alpha\to 1\). For instance, imagine an object right in front of the observer; then \(S_{\text{offset}}=0\) and therefore \(\alpha=\alpha_{0}=0\). If we now put the same object at a distance \(S_{\text{offset}}=\frac{1}{2}S_{\infty}\), then for the same observer \(\alpha=\alpha_{0}=0.5\). It must be mentioned that when \(S_{\text{offset}}>S_{\infty}\) the measurement is no longer valid, since the observed framework disappears from the observer and the error tolerance becomes high.
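The following short sketch (ours; the numerical values are only illustrative) evaluates (11) and (12) for the static scenario: it computes \(\alpha_{0}\) from an assumed offset and horizon \(|S_{\infty}|\), and then the rotated basis vector \(S_{\alpha}=S^{1-\alpha_{0}}\) component-wise.

```python
import numpy as np

def alpha_static(s_offset, s_infinity):
    """Eq. (11): fractional order from the spatial offset and the 'infinite space' parameter."""
    return np.linalg.norm(s_offset) / s_infinity

s_offset = np.array([3.0, 0.0, 4.0])    # |S_offset| = 5 (illustrative units)
s_inf = 10.0                             # assumed value beyond which the object disappears
a0 = alpha_static(s_offset, s_inf)       # -> 0.5, as in the example above

S = np.array([1.0, 2.0, 3.0, 4.0])       # S = [x1, x2, x3, t], an arbitrary sample point
S_alpha = S ** (1.0 - a0)                # Eqs. (5) and (12): S_alpha = S^(1 - alpha0)
print(a0, S_alpha)
```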
#### 4.2.2 Linear and Quadratic Dynamic
From the fractional dimensional point of view, a moving system leads to a rotation of the coordinate that depends on the trajectory function in space-time. For the sake of simplicity, linear and quadratic trajectories of the moving coordinate are chosen, and we then find an equivalent physical problem for each. In physics, when we have an inertial framework (constant speed), special relativity and Minkowski space are used, and when we have accelerated frames (constant acceleration or gravity), general relativity and Riemann space are used. With this physical counterpart, we should obtain the same results as special relativity when \(\alpha\) has a linear trajectory and as general relativity when \(\alpha\) has a quadratic trajectory.
Further investigation requires a new metric for fractional dimensional space-time, or SDT. This is outside the scope of this study and we consider it as future work; here, however, we try to show the equivalence of our model with some simplified examples. We can represent an \(S^{\prime}\) coordinate, based on trigonometric properties, as follows
\[\begin{pmatrix}x_{1}^{\prime}\\ x_{2}^{\prime}\\ x_{3}^{\prime}\\ t^{\prime}\end{pmatrix}=\begin{pmatrix}cos\big{(}\theta_{1}(S)\big{)}&0&0&0\\ 0&cos\big{(}\theta_{2}(S)\big{)}&0&0\\ 0&0&cos\big{(}\theta_{3}(S)\big{)}&0\\ 0&0&0&cos\big{(}\theta_{4}(S)\big{)}\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ t\end{pmatrix}+\begin{pmatrix}sin\big{(}\theta_{1}(S)\big{)}\\ sin\big{(}\theta_{2}(S)\big{)}\\ sin\big{(}\theta_{3}(S)\big{)}\\ sin\big{(}\theta_{4}(S)\big{)}\end{pmatrix}, \tag{13}\]
where \(\theta_{i}(S)\) (\(i=1,2,3,4\)) is equal to \(\frac{\pi}{2}\alpha_{i}(S)\). As an example, we consider very slow motion of an inertial framework in the direction of \(x_{1}\), where \(\theta_{1}\) is very small (\(cos\theta_{1}\approx 1\) and \(sin\theta_{1}\approx\theta_{1}\)) and \(\theta_{2}=\theta_{3}=\theta_{4}=0\). Therefore the coordinate transformation can be approximated by the Galilean transformation, with \(sin\theta_{1}\approx\pm vt\).
\[\begin{pmatrix}x_{1}^{\prime}\\ x_{2}^{\prime}\\ x_{3}^{\prime}\\ t^{\prime}\end{pmatrix}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ t\end{pmatrix}+\begin{pmatrix}\pm vt\\ 0\\ 0\\ 0\end{pmatrix}. \tag{14}\]
Similarly, for an accelerated system in the \(x_{1}\) or \(x_{1}^{\prime}\) direction with a small acceleration or gravity, using the approximation \(sin\theta_{1}\approx\theta_{1}=\pm\frac{1}{2}gt^{2}\) and \(\theta_{2}=\theta_{3}=\theta_{4}=0\), the coordinate transformation can be expressed as follows
\[\begin{pmatrix}x_{1}^{\prime}\\ x_{2}^{\prime}\\ x_{3}^{\prime}\\ t^{\prime}\end{pmatrix}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ t\end{pmatrix}+\begin{pmatrix}\pm\frac{1}{2}gt^{2}\\ 0\\ 0\\ 0\end{pmatrix}. \tag{15}\]
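As an illustration (our own numerical sketch with hypothetical values), the code below applies the exact transformation (13) with a small angle \(\theta_{1}\approx vt\) and \(\theta_{2}=\theta_{3}=\theta_{4}=0\), and compares the result with the Galilean form (14).

```python
import numpy as np

def fractional_transform(S, thetas):
    """Eq. (13): S' = diag(cos(theta_i)) S + sin(theta_i)."""
    thetas = np.asarray(thetas, dtype=float)
    return np.cos(thetas) * np.asarray(S, dtype=float) + np.sin(thetas)

x1, x2, x3, t = 5.0, 1.0, -2.0, 2.0       # sample event (illustrative units)
v = 1e-3                                   # slow motion along x1
thetas = [v * t, 0.0, 0.0, 0.0]            # small-angle identification sin(theta1) ~ theta1 ~ vt

exact = fractional_transform([x1, x2, x3, t], thetas)
galilean = np.array([x1 + v * t, x2, x3, t])     # Eq. (14), taking the + sign
print(exact)
print(galilean)
print(np.max(np.abs(exact - galilean)))          # ~1e-5: second-order error of cos(theta1) ~ 1
```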
### A Signal Representation of System via Fractional Dimensional Basis Function
In this section, we demonstrate the signal behavior in the three scenarios. Using assumption (6), we can also write the relation between the basis functions of the transmitter (\(T(S)\)) and receiver (\(R(S)\)) signals as follows
\[R(S)=\frac{1}{2}D_{\mathrm{N}}^{\alpha}\big{\{}T(S)\big{\}}+\frac{1}{2}\bigg{(} D_{\mathrm{N}}^{\alpha}\big{\{}T(S)\big{\}}\bigg{)}^{*}=Re\bigg{\{}D_{\mathrm{N}}^{ \alpha}\big{\{}T(S)\big{\}}\bigg{\}}, \tag{16}\]
where \(\Re\) denotes the real part of the basis function. The FDr operator connects the input (initial) signal to the output signal of the system through a trajectory in fractional dimension. This assumption means that the transmitter itself lies in an integer-dimensional coordinate, whereas from the receiver's point of view it has a fractional-dimensional coordinate.
Note that, for the sake of simplicity, we work with two variables, \(x_{1}\) (space) and t (time), where \(x_{1}\) is denoted by \(z\). Hereafter, \(z\) and t denote the space and time variables, respectively.
#### 4.3.1 Static scenario
From the signal point of view, a constant distance is maintained between transmitter and receiver, as shown in Fig. 5 (a).
This is analogous to an observer viewing a stationary object from a fixed distance. For example, consider a signal of the form \(A_{0}cos(\omega t+kz)\) propagating in free space. How does the received signal \(R(z=L,t)\) change after propagation over a distance L in (a) a lossless and (b) a lossy medium? Here \(A_{0}\), \(\omega\) and \(k\) are the amplitude, angular frequency and wave vector, respectively.
Hint: when an electromagnetic wave propagates in a lossy medium, the wave vector is complex-valued,
\[k=\beta+i\alpha_{att.}, \tag{17}\]
where \(\beta\) and \(\alpha_{att.}\) are the propagation constant and the attenuation coefficient, respectively.
The answer for part (a) is as follows:
from the electrodynamics side, we know that the signal acquires a phase shift of \(kL\), where in a lossless medium \(k\) is equal to the propagation constant \(\beta\).
\[R(z=L,t)=A_{0}cos(\omega t+\beta L)=A_{0}cos(\omega t+\Phi), \tag{18}\]
where \(\Phi\) is equal to \(\beta L\).
From the fractional-dimensional point of view, however, the received signal at \(z=L\) is obtained with a constant fractional dimension \(\alpha_{0}\). Therefore, we have
\[R(z=L,t)=D_{\rm N}^{\alpha_{0}}\big{\{}A_{0}cos(\omega t)\big{\}}=A_{0}cos( \omega t+\frac{\pi\alpha_{0}}{2})=A_{0}cos(\omega t+\Phi), \tag{19}\]
where \(\Phi=\beta L=\frac{\pi\alpha_{0}}{2}\).
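A minimal numerical check of the equivalence between Eqs. (18) and (19) can be written as follows (the frequency, \(\beta\) and \(L\) values are arbitrary illustrative choices); choosing \(\alpha_{0}=2\beta L/\pi\) makes the two expressions coincide:

```python
import numpy as np

A0, omega, beta, L = 1.0, 2 * np.pi * 1e6, 20.0, 0.05   # illustrative values
t = np.linspace(0.0, 5e-6, 1000)

R_em = A0 * np.cos(omega * t + beta * L)                # Eq. (18): phase shift beta*L

alpha0 = 2 * beta * L / np.pi                           # so that pi*alpha0/2 = beta*L
R_fd = A0 * np.cos(omega * t + np.pi * alpha0 / 2)      # Eq. (19)

print(np.allclose(R_em, R_fd))                          # True
```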
The answer for part (b) is as follows:
from the electrodynamics side, the received signal at the distance L, after simplification, is
\[R(z=L,t)=A_{0}e^{-\alpha_{att.}L}cos(\omega t+\beta L), \tag{20}\]
while from the fractional-dimensional perspective we have
\[R(z=L,t)=\frac{1}{2}D_{\rm N}^{\alpha_{0}}\big{\{}e^{i\omega t}\big{\}}+\frac {1}{2}\bigg{(}D_{\rm N}^{\alpha_{0}}\big{\{}e^{i\omega t}\big{\}}\bigg{)}^{*}= \frac{1}{2}D_{\rm N}^{\alpha_{0}}\big{\{}e^{i\omega t}\big{\}}+\frac{1}{2} \bigg{(}D_{\rm N}^{\alpha_{0}^{*}}\big{\{}e^{-i\omega t}\big{\}}\bigg{)}, \tag{21}\]
where \(\alpha_{0}\) has a complex value as follows
\[\alpha_{0}=\alpha_{0}^{\rm R}+i\alpha_{0}^{\rm I}. \tag{22}\]
Then, the complex fractional basis function is
\[D_{\rm N}^{\alpha_{0}}\big{\{}e^{i\omega t}\big{\}}=e^{i(\omega t+\frac{\pi \alpha_{0}}{2})}=e^{i(\omega t+\frac{\pi}{2}\alpha_{0}^{\rm R}+i\frac{\pi}{2} \alpha_{0}^{\rm I})}=e^{(-\frac{\pi}{2}\alpha_{0}^{\rm I})}e^{i(\omega t+\frac {\pi}{2}\alpha_{0}^{\rm R})}, \tag{23}\]
Figure 5: Schematic of three scenarios (a) static (\(v=0\)) (b) Linear dynamic (\(v=v_{0}\)) (c) accelerated system (\(a=a_{0}\))
where \(D_{\rm N}^{\alpha_{0}^{*}}\left\{e^{-i\omega t}\right\}\) is the complex conjugate of \(D_{\rm N}^{\alpha_{0}}\left\{e^{i\omega t}\right\}\). Thus, after a rearrangement we have
\[R(z=L,t)=A_{0}\Re\left\{D_{\rm N}^{\alpha_{0}}\left\{e^{i\omega t}\right\}\right\}=A_{0}e^{-\frac{\pi}{2}\alpha_{0}^{\rm I}}cos(\omega t+\frac{\pi}{2}\alpha_{0}^{\rm R}), \tag{24}\]
It can be observed that the decay of a signal due to loss in the linear vector space corresponds to a trajectory of the signal in the imaginary part of the fractional dimension. Therefore, we have:
\[\frac{\pi}{2}\alpha_{0}^{\rm R}=\beta L \tag{25}\]
and
\[\frac{\pi}{2}\alpha_{0}^{\rm I}=\alpha_{att}.L \tag{26}\]
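The lossy case can be checked in the same hedged way (illustrative parameter values): with \(\alpha_{0}^{\rm R}=2\beta L/\pi\) and \(\alpha_{0}^{\rm I}=2\alpha_{att.}L/\pi\) from Eqs. (25)-(26), the fractional-dimensional expression (24) reproduces the damped signal of Eq. (20):

```python
import numpy as np

A0, omega = 1.0, 2 * np.pi * 1e6
beta, alpha_att, L = 20.0, 4.0, 0.05                    # illustrative values
t = np.linspace(0.0, 5e-6, 1000)

R_em = A0 * np.exp(-alpha_att * L) * np.cos(omega * t + beta * L)   # Eq. (20)

alpha_R = 2 * beta * L / np.pi                          # Eq. (25)
alpha_I = 2 * alpha_att * L / np.pi                     # Eq. (26)
R_fd = A0 * np.exp(-np.pi * alpha_I / 2) * np.cos(omega * t + np.pi * alpha_R / 2)   # Eq. (24)

print(np.allclose(R_em, R_fd))                          # True
```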
#### 4.3.2 Dynamic Scenario
In this case, at least one of the transmitter and receiver is moving in fractional dimension; this is therefore called dynamic observation. Two types of movement, with linear and quadratic trajectories, are demonstrated.
##### 4.3.2.1 Linear dynamic
We show that two important cases in physics, the wave equation and the Doppler effect, are equivalent to a system moving in fractional-dimensional space along a linear trajectory.
##### 4.3.2.1.1 Equivalency for the Wave Equation
We start by reformulating the wave equation in a new representation based on the proposed assumption. As we know, a wave is a phenomenon in time which propagates through space. It can be associated with a temporal (spatial) signal moving in the spatial (time) domain along a fractional-dimensional trajectory \(\alpha({\bf r})\) (\(\alpha(t)\)). The wave function \({\bf\Psi}({\bf r},t)\) can be decomposed into two functions of the independent variables \(t\) and \({\bf r}\) (\({\bf\Psi}({\bf r},t)=\psi_{1}(t)\psi_{2}({\bf r})\)). Therefore, the wave equation can be realized as follows
\[D_{\rm N}^{\alpha_{1}({\bf r})}\left\{\psi_{1}(t)\right\}=D_{\rm N}^{\alpha_{2}(t)}\left\{\psi_{2}({\bf r})\right\}. \tag{27}\]
The left-hand side of (27) means that the temporal signal \(\psi_{1}(t)\) moves in the spatial domain along a trajectory \(\alpha_{1}({\bf r})\), and it must equal the right-hand side, which describes the movement of the spatial signal \(\psi_{2}({\bf r})\) in the time domain along the trajectory \(\alpha_{2}(t)\). For the sake of simplicity, as we know from quantum mechanics or electrodynamics, the wave basis function of the two variables \(z\) and \(t\), \({\bf\Psi}(z,t)\), is equal to \(e^{i(\omega t+kz)}\). Thus, the fractional-dimensional wave equation is
\[D_{\rm N}^{\alpha_{1}(z)}\left\{e^{i\omega t}\right\}=D_{\rm N}^{\alpha_{2}( t)}\left\{e^{ikz}\right\}=e^{i(\omega t+\frac{\pi}{2}\alpha_{1}(z))}=e^{i(kz+ \frac{\pi}{2}\alpha_{2}(t))}=e^{i(\omega t+kz)}, \tag{28}\]
where trajectories of \(\alpha_{i}\) (i=1,2) are
\[\begin{split}&\alpha_{1}(z)=\frac{2}{\pi}kz\\ &\alpha_{2}(t)=\frac{2}{\pi}\omega t,\end{split} \tag{29}\]
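As a quick numerical sanity check of Eqs. (28)-(29) (arbitrary \(\omega\), \(k\) and grid), applying the phase corresponding to \(\alpha_{1}(z)=\frac{2}{\pi}kz\) to \(e^{i\omega t}\), or the phase corresponding to \(\alpha_{2}(t)=\frac{2}{\pi}\omega t\) to \(e^{ikz}\), reproduces the plane-wave basis function \(e^{i(\omega t+kz)}\):

```python
import numpy as np

omega, k = 2 * np.pi * 1e6, 30.0                    # illustrative values
z = np.linspace(0.0, 0.2, 200)
t = np.linspace(0.0, 5e-6, 200)
Z, T = np.meshgrid(z, t)

plane_wave = np.exp(1j * (omega * T + k * Z))       # basis function Psi(z, t)

alpha1 = 2 * k * Z / np.pi                          # Eq. (29), trajectory in z
lhs = np.exp(1j * (omega * T + np.pi * alpha1 / 2)) # temporal signal moved in space

alpha2 = 2 * omega * T / np.pi                      # Eq. (29), trajectory in t
rhs = np.exp(1j * (k * Z + np.pi * alpha2 / 2))     # spatial signal moved in time

print(np.allclose(lhs, plane_wave), np.allclose(rhs, plane_wave))   # True True
```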
##### 4.3.2.1.2 Doppler effect
As shown in Fig. 5 (b), for an inertial system the observer stays at a fixed position (here \(z=L\)) and the transmitter approaches (or recedes from) the receiver with a velocity \(v_{t}\). This phenomenon is well known in physics and is called the Doppler effect. Here it is shown how the movement of a signal source along a linear trajectory in fractional dimension is equivalent to the Doppler effect. Let the signal \(T(z=0,t)\) at the transmitter have the following expression
\[T(z=0,t)=A_{0}cos(\omega t). \tag{30}\]
Therefore, by considering the trajectory as follows
\[\alpha(z,t)=\beta_{41}z+\beta_{44}t, \tag{31}\]
where \(\beta_{41}\) and \(\beta_{44}\) are constant coefficients; keeping \(z\) constant with respect to t, the received signal is
\[R(z=L,t)=D_{\rm N}^{\alpha(S)}\Bigg{\{}A_{0}cos(\omega t)\Bigg{\}}=A_{0}cos( \omega t+\frac{\pi\alpha(S)}{2})=A_{0}cos(\omega t+\frac{\pi(\beta_{41}z+\beta_ {44}t)}{2}). \tag{32}\]
After a rearrangement, it is
\[R(z=L,t)=A_{0}cos(\omega_{D}t+\Phi), \tag{33}\]
where constant phase is
\[\Phi=\frac{\pi}{2}\beta_{41}L \tag{34}\]
and \(\omega_{D}\) is a new (Doppler) angular frequency.
\[\omega_{D}=\omega+\frac{\pi\beta_{44}}{2}=2\pi(f+\Delta f)=2\pi f_{D}, \tag{35}\]
with Doppler frequency shift [14]
\[f_{D}=\Bigg{(}\frac{c\pm v_{r}}{c\pm v_{t}}\Bigg{)}f=f+\Bigg{(}\frac{\pm v_{r }\mp v_{t}}{c\pm v_{t}}\Bigg{)}f=f+\Delta f, \tag{36}\]
where c is the propagation speed of the wave in the medium. When the receiver moves towards (away from) the source, the sign is positive (negative). For this scenario the observer velocity \(v_{r}\) is zero (\(v_{r}=0\)), and \(\Delta f\) is:
\[\Delta f=\Bigg{(}\frac{\mp v_{t}}{c\pm v_{t}}\Bigg{)}f=\frac{\beta_{44}}{4}. \tag{37}\]
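The sketch below (with illustrative numbers, e.g. a sound wave in air) evaluates the Doppler frequency of Eq. (36) for a stationary receiver and the corresponding linear-trajectory coefficient \(\beta_{44}=4\Delta f\) implied by Eq. (37):

```python
f = 1.0e3        # source frequency [Hz], illustrative
c = 343.0        # propagation speed of the wave in the medium [m/s]
v_t = 20.0       # transmitter speed [m/s], moving towards the receiver
v_r = 0.0        # receiver at rest, as in the scenario above

f_D = (c + v_r) / (c - v_t) * f   # Eq. (36), signs for an approaching source
delta_f = f_D - f                 # frequency shift
beta_44 = 4 * delta_f             # Eq. (37): delta_f = beta_44 / 4

print(f_D, delta_f, beta_44)      # ~1061.9 Hz, ~61.9 Hz, ~247.7
```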
#### 4.3.3 Acceleration dynamic scenario
In this case, the trajectory in fractional dimension is a quadratic function. Here, we assume the following trajectory
\[\alpha(S)=\alpha(z,t)=\gamma_{41}z^{2}+\gamma_{44}t^{2}+\beta_{41}z+\beta_{44 }t, \tag{38}\]
where \(\beta_{41}\), \(\beta_{44}\), \(\gamma_{41}\) and \(\gamma_{44}\) are constant coefficients; setting the \(\gamma\) coefficients to zero recovers the Doppler problem. The basis function for the accelerated system can therefore be written as follows
\[\phi^{\alpha(z,t)}(S)=D_{\rm N}^{(\gamma_{41}z^{2}+\gamma_{44}t^{2}+\beta_{41 }z+\beta_{44}t)}\Bigg{\{}\phi(S)\Bigg{\}}, \tag{39}\]
and the received signal, with \(z\) held constant with respect to t, is
\[\begin{split}& R(t)=D_{\text{N}}^{\alpha(z,t)}\left\{A_{0}cos( \omega t)\right\}=A_{0}cos(\omega t+\frac{\pi\alpha(z,t)}{2})\\ &=A_{0}cos(\omega t+\frac{\pi(\gamma_{41}z^{2}+\gamma_{44}t^{2}+ \beta_{41}z+\beta_{44}t)}{2}).\end{split} \tag{40}\]
After a rearrangement, it is
\[R(t)=A_{0}cos(\omega_{Da}(t)t+\Phi), \tag{41}\]
where constant phase at \(z=L\) is
\[\Phi=\frac{\pi}{2}(\gamma_{41}L^{2}+\beta_{41}L) \tag{42}\]
and \(\omega_{Da}(t)\) is a new time-dependent (Doppler) angular frequency, as follows
\[\omega_{Da}(t)=\frac{\pi\gamma_{44}}{2}t+\omega+\frac{\pi\beta_{44}}{2}=\frac {\pi\gamma_{44}}{2}t+\omega_{D}. \tag{43}\]
This corresponds to the gravitational red (blue) shift. As a simple example, the profile of (43) with all coefficients normalized to one is depicted in Fig. 6. It can be observed that this profile shows a gravitational red (blue)-shift behavior [15] and, without consideration of the envelope, it behaves similarly to a gravitational wave [16].
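For completeness, a short numerical sketch of Eqs. (40)-(43) with all coefficients set to one (as in Fig. 6) shows that the received signal is a chirp whose instantaneous angular frequency \(\omega_{Da}(t)\) drifts linearly in time:

```python
import numpy as np

A0, omega, L = 1.0, 2 * np.pi * 5.0, 1.0
beta41 = beta44 = gamma41 = gamma44 = 1.0            # all coefficients normalized to one
t = np.linspace(0.0, 2.0, 1000)

alpha = gamma41 * L**2 + gamma44 * t**2 + beta41 * L + beta44 * t   # Eq. (38) at z = L
R = A0 * np.cos(omega * t + np.pi * alpha / 2)                      # Eq. (40), chirped signal

omega_Da = np.pi * gamma44 / 2 * t + omega + np.pi * beta44 / 2     # Eq. (43)
print(omega_Da[0], omega_Da[-1])   # the angular frequency drifts linearly with time
```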
## 5 Acknowledgement
The author is very grateful to Ahmad Sabihi for fruitful discussions and valuable comments.
Figure 6: The plot of cosine function with quadratic trajectory in fractional dimension. |
2303.04883 | A Comment on "Algebraic approach to the Tavis-Cummings model with three
modes of oscillation" [J. Math. Phys. 59, 073506 (2018)] | Chore\~no et al. [J. Math. Phys. 59, 073506 (2018)] reported analytic
solutions to the resonant case of the Tavis-Cummings model, obtained by mapping
it to a Hamiltonian with three bosons and applying a Bogoliubov transformation.
This comment points out that the Bogoliubov transformation employed is not
unitary, cannot be inverted, and cannot enforce the symmetries of the model. | Viani S. Morales-Guzman, Jorge G. Hirsch | 2023-03-08T20:45:53Z | http://arxiv.org/abs/2303.04883v1 | # A Comment on
###### Abstract
Choreno et al. [J. Math. Phys. 59, 073506 (2018)] reported analytic solutions to the resonant case of the Tavis-Cummings model, obtained by mapping it to a Hamiltonian with three bosons and applying a Bogoliubov transformation. This comment points out that the Bogoliubov transformation employed is not unitary, cannot be inverted, and cannot enforce the symmetries of the model.
## I Introduction
In contrast with the known approximate analytic solutions to the Tavis-Cummings (TC) model [1], in the article "Algebraic approach to the Tavis-Cummings model with three modes of oscillation", Choreno et al. [2] present, for the first time, exact analytical expressions for the eigenenergies and the eigenfunctions. Employing the Schwinger representation of angular momentum operators in terms of boson operators, the TC Hamiltonian is mapped to a Hamiltonian with three bosons. Three exact solutions are obtained employing a Bogoliubov transformation, normal-mode operators and a tilting transformation.
In what follows, it is shown that in all three cases the transformations employed are not unitary, cannot be inverted, and cannot be used to obtain meaningful solutions associated with the symmetries of the TC Hamiltonian.
The Tavis-Cummings Hamiltonian, at resonance, with \(\hbar=1\), reads
\[H_{TC}=\omega\hat{c}^{\dagger}\hat{c}+\omega\hat{f}_{z}+\kappa(\hat{c}\hat{f} _{+}+\hat{c}^{\dagger}\hat{f}_{-}). \tag{1}\]
It can be mapped into the three boson Hamiltonian
\[H=\omega\hat{a}^{\dagger}\hat{a}+\omega\hat{b}^{\dagger}\hat{b}+\omega\hat{c} ^{\dagger}\hat{c}+g(\hat{a}^{\dagger}\hat{b}\hat{c}+\hat{a}\hat{b}^{\dagger} \hat{c}^{\dagger}), \tag{2}\]
employing the Schwinger representation
\[\hat{J}_{+}=\hat{a}^{\dagger}\hat{b},\,\hat{J}_{-}=\hat{a}\hat{b}^{\dagger}, \,\hat{J}_{z}=\frac{1}{2}\left(\hat{a}^{\dagger}\hat{a}-\hat{b}^{\dagger}\hat {b}\right) \tag{3}\]
In the above equation \(g=\kappa\) is the coupling constant, and \(\hat{a}\), \(\hat{a}^{\dagger}\), \(\hat{b}\), \(\hat{b}^{\dagger}\), \(\hat{c}\), \(\hat{c}^{\dagger}\) are bosonic annihilation and creation operators. Additionally, Hamiltonian (1) commutes with the operator \(\hat{\Lambda}\equiv\hat{c}^{\dagger}\hat{c}+\hat{J}_{z}\), with eigenvalues \(\lambda\).
There are two constraints that the boson numbers must satisfy:
\[n_{a}+n_{b}=2j,\,\,\,n_{c}+\frac{1}{2}(n_{a}-n_{b})=\lambda. \tag{4}\]
The values of \(j\) and \(\lambda\) determine independent subspaces of the Hilbert space.
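As a numerical sketch (not part of the original comment; the Fock-space cutoff and the values of \(\omega\) and \(g\) are arbitrary), one can verify the two conserved quantities behind Eq. (4) directly: the three-boson Hamiltonian (2) commutes with \(\hat{n}_{a}+\hat{n}_{b}\) and with \(\hat{\Lambda}=\hat{c}^{\dagger}\hat{c}+\hat{J}_{z}\).

```python
import numpy as np

d = 6                                          # Fock-space cutoff per mode (arbitrary)
a1 = np.diag(np.sqrt(np.arange(1, d)), 1)      # single-mode annihilation operator
I = np.eye(d)

def kron3(x, y, z):
    return np.kron(np.kron(x, y), z)

a, b, c = kron3(a1, I, I), kron3(I, a1, I), kron3(I, I, a1)
na, nb, nc = a.T @ a, b.T @ b, c.T @ c

omega, g = 1.0, 0.3                            # arbitrary parameters
H = omega * (na + nb + nc) + g * (a.T @ b @ c + a @ b.T @ c.T)   # Eq. (2)

Jz = 0.5 * (na - nb)                           # Schwinger representation, Eq. (3)
Lam = nc + Jz                                  # the operator Lambda

print(np.allclose(H @ (na + nb), (na + nb) @ H))   # True: n_a + n_b = 2j is conserved
print(np.allclose(H @ Lam, Lam @ H))               # True: lambda is conserved
```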
In section III. A., Choreno et al. introduce a Bogoliubov transformation
\[\begin{split}\hat{b}&=\hat{f}\cosh r+\hat{d}^{ \dagger}e^{-i\theta}\sinh r\\ \hat{c}&=\hat{d}\cosh r+\hat{f}^{\dagger}e^{-i \theta}\sinh r,\end{split} \tag{5}\]
The transformed Hamiltonian presented in the article is,
\[\begin{split} H^{\prime}=&\left[\omega(1+2\sinh^{2}r)+\frac{g}{2}(\hat{a}e^{i\theta}+\hat{a}^{\dagger}e^{-i\theta})\sinh 2r\right]\\ &(\hat{f}^{\dagger}\hat{f}+\hat{d}^{\dagger}\hat{d}+1)\\ &+\left[e^{i\theta}\sinh 2r+g^{*}\hat{a}^{\dagger}\cosh^{2}r+g\hat{a}e^{2i\theta}\sinh^{2}r\right]\hat{f}\hat{d}\\ &+\left[e^{-i\theta}\sinh 2r+g\hat{a}\cosh^{2}r+g^{*}\hat{a}^{\dagger}e^{-2i\theta}\sinh^{2}r\right]\hat{f}^{\dagger}\hat{d}^{\dagger}\\ &+\omega_{1}\hat{a}^{\dagger}\hat{a}-\omega\end{split} \tag{6}\]
The parameters \(\hat{r}\) and \(\hat{\theta}\) are selected to cancel the terms multiplying \(\hat{f}\hat{d}\) and \(\hat{f}^{\dagger}\hat{d}^{\dagger}\) in (6). They can be written in terms of \(\hat{u}\) and \(\hat{v}\) as
\[\begin{split}\hat{u}&\equiv\cosh\hat{r}=\frac{\omega}{\sqrt{\omega^{2}-|g|^{2}\hat{a}^{\dagger}\hat{a}}}\\ \hat{v}&\equiv e^{-i\hat{\theta}}\sinh\hat{r}=\sqrt{\frac{\hat{a}}{\hat{a}^{\dagger}}}\frac{|g|^{2}\sqrt{\hat{a}^{\dagger}\hat{a}}}{\sqrt{\omega^{2}-|g|^{2}\hat{a}^{\dagger}\hat{a}}}.\end{split} \tag{7}\]
Using (7) in (6), the Hamiltonian takes the diagonal form
\[H^{\prime}=\omega_{1}\hat{a}^{\dagger}\hat{a}+\sqrt{\omega^{2}-g^{2}\hat{a}^{ \dagger}\hat{a}}(\hat{f}^{\dagger}\hat{f}+\hat{d}^{\dagger}\hat{d}+1)-\omega. \tag{8}\]
with analytical eigenvalues and eigenstates
\[E^{\prime}= \sqrt{\omega^{2}-g^{2}n_{a}}(n_{f}+n_{d}+1)+\omega_{1}n_{a}-\omega \tag{9}\] \[\Psi^{\prime}= \psi_{n_{a}}(x)\otimes\psi_{n_{l},m_{n}}(\rho,\phi); \tag{10}\]
where \(\psi_{n_{a}}(x)\) are the eigenfunctions of the one-dimensional harmonic oscillator and \(\psi_{n_{l},m_{n}}(\rho,\phi)\) are the eigenfunctions of the 2D harmonic oscillator.
The problem with the above deduction is that the transformation (5) is not unitary. It implies that the new operators \(\hat{d},\hat{f}\) do not satisfy bosonic commutation relations.
To prove this point, observe that the transformation coefficients \(\hat{u},\hat{v}\) in (5) are operators, whereas in Ref. [2] they were treated as scalars. Given that \(\hat{a}\) and \(\hat{a}^{\dagger}\) do not commute with \(\hat{n}_{a}\), the determinant of the proposed transformation (5) is
\[\hat{u}\hat{u}^{\dagger}-\hat{v}\hat{v}^{\dagger} =1 \tag{11}\] \[\quad-|g|^{2}\left[\sqrt{\frac{\hat{a}}{\hat{a}^{\dagger}}}, \sqrt{\frac{\hat{n}_{a}}{\omega^{2}-|g|^{2}\hat{n}_{a}}}\right]\sqrt{\frac{ \hat{a}^{\dagger}}{\hat{a}}}\sqrt{\frac{\hat{n}_{a}}{\omega^{2}-|g|^{2}\hat{n}_ {a}}}\] \[\neq 1. \tag{12}\]
therefore, **the transformation** (5) **is non-unitary**.
As a consequence, following (5), \(\hat{b}=\hat{u}\hat{f}+\hat{v}\hat{d}^{\dagger}\), the commutator for \(\hat{b}\) and \(\hat{b}^{\dagger}\) is
\[\begin{split}[\hat{b},\hat{b}^{\dagger}]&=\hat{u}\hat{u}^{\dagger}[\hat{f},\hat{f}^{\dagger}]+\hat{u}\hat{v}^{\dagger}[\hat{f},\hat{d}]+\hat{v}\hat{u}^{\dagger}[\hat{d}^{\dagger},\hat{f}^{\dagger}]+\hat{v}\hat{v}^{\dagger}[\hat{d}^{\dagger},\hat{d}]\\ &=\hat{u}\hat{u}^{\dagger}[\hat{f},\hat{f}^{\dagger}]-\hat{v}\hat{v}^{\dagger}[\hat{d},\hat{d}^{\dagger}].\end{split} \tag{13}\]
From (13) it is noted that **if**\(\hat{f}\), \(\hat{f}^{\dagger}\), \(\hat{d}\) **and**\(\hat{d}^{\dagger}\) **are bosonic operators, the operators**\(\hat{b}\) **and**\(\hat{b}^{\dagger}\) **written in terms of the transformation do not satisfy the bosonic algebra and viceversa**; the same happens for \(\hat{c}\) and \(\hat{c}^{\dagger}\).
It could still be interesting to analyze if the eigenenergies, given in Eq. (9) as simple functions of the number of bosons \(n_{a},n_{d}\) and \(n_{f}\), can be compared with the exact energies obtained by numerical diagonalization. But this task is impossible, because, given that transformation (5) is not invertible, there is no way to map \(n_{d},n_{f}\) to the number of original bosons \(n_{b},n_{c}\). It follows that it is not possible to select the subspaces of the Hilbert space, with fixed values of \(j\) and \(\lambda\), associated with the numbers \(n_{d},n_{f}\).
It is relevant to mention that the same problem occurs for the other two transformations presented in Ref. [2]. In the tilting transformation (21) the coherent state parameters \(\hat{\theta}\) and \(\hat{\phi}\) do not commute, and the normal mode operators defined in (34) do not commute, because according to (39) and (41), the "constant" \(X\) is an operator \(\hat{X}\) which does not commute with \(\hat{X}^{\dagger}\).
It is worth to point out that, in another article by the same authors, entitled "Matrix diagonalization and exact solution of the k-photon Jaynes-Cummings model" [3], a similar procedure is used to solve the k-photon Jaynes-Cummings Hamiltonian. But in this case the correct eigensystem is obtained.
Choreno et al. start by using the interaction Hamiltonian for the model[4],
\[H_{I}=\hbar(\frac{\omega_{0}}{2}-\omega)\sigma_{z}+g(\sigma_{+}(\hat{a})^{2}+ \sigma_{-}(\hat{a}^{\dagger})^{2}), \tag{14}\]
where \(\sigma_{z,\pm}\) are the Pauli matrices, \(\omega_{0}\) and \(\omega\) are the transition and field frequency respectively. A transformation is applied,
\[H_{I}^{\prime}=D^{\dagger}(\xi)H_{I}D(\xi), \tag{15}\]
with
\[D(\xi)=exp(\xi\sigma_{+}-\xi^{*}\sigma_{-}), \tag{16}\]
where \(\xi=-\frac{1}{2}\hat{r}e^{-i\hat{\theta}}\).
After expressing the Hamiltonian in matrix form, the non-diagonal terms are set to zero and the equations are solved for \(\hat{r}\) and \(\hat{\theta}\), which turn out to depend on \(\hat{a}\) and \(\hat{a}^{\dagger}\). By substituting these parameters in the transformed Schrodinger equation,
\[D^{\dagger}(\xi)H_{I}D(\xi)D^{\dagger}(\xi)\Psi=E_{I}D^{\dagger}(\xi)\Psi \tag{17}\]
one can obtain the eigensystem accurately.
Why is it that the same procedure as in Tavis-Cummings led them to the correct eigensystem? Because in this case, **the transformation used to solve the Jaynes-Cummings model is unitary**. In its matrix form,
\[D(\xi)=\begin{pmatrix}\frac{1}{\sqrt{2}}\sqrt{1+\frac{\Delta}{E_{I}}}&-\frac {1}{\sqrt{2}}\sqrt{1-\frac{\Delta}{E_{I}}\sqrt{(n+1)(n+2)}}\\ \frac{1}{\sqrt{2}}\sqrt{1-\frac{\Delta}{E_{I}}\frac{(\hat{a}^{\dagger})^{2}} {\sqrt{(n+1)(n+2)}}}&\frac{1}{\sqrt{2}}\sqrt{1+\frac{\Delta}{E_{I}}}\end{pmatrix}. \tag{18}\]
Expressing it in the appropriate basis,
\[\{|\psi_{1n}\rangle\equiv|n,g\rangle\},\,\{|\psi_{2m}\rangle\equiv|m,e\rangle\}, \tag{19}\]
\(D(\xi)\) takes the form,
\[D(\xi)=\begin{pmatrix}\frac{1}{\sqrt{2}}\sqrt{1+\frac{\Delta}{E_{I}}}&-\frac {1}{\sqrt{2}}\sqrt{1-\frac{\Delta}{E_{I}}}\\ \frac{1}{\sqrt{2}}\sqrt{1-\frac{\Delta}{E_{I}}}&\frac{1}{\sqrt{2}}\sqrt{1+ \frac{\Delta}{E_{I}}}\end{pmatrix}, \tag{20}\]
which is clearly unitary.
Although the procedure presented above gives a valid solution, it is useful to remind the reader that the two-photon (k-photon) Jaynes-Cummings Hamiltonian is always a \(2\times 2\) matrix which can be easily diagonalized, expressing the Hamiltonian in the basis (19),
\[H_{I}=\begin{pmatrix}\hbar(\frac{\omega_{0}}{2}-\omega)&g\sqrt{(m)(m-1)} \delta_{n,m-2}\\ g\sqrt{(n+1)(n+2)}\delta_{n+2,m}&-\hbar(\frac{\omega_{0}}{2}-\omega)\end{pmatrix} \tag{21}\]
to obtain the exact eigenvalues
\[E_{\pm}(n)=\hbar\omega(n+1)\pm\sqrt{\Omega^{2}+g^{2}(n+1)(n+2)}, \tag{22}\]
with \(n=0,1,2,...\).
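As a numerical cross-check (not part of the original comment; parameter values are illustrative, \(\Omega\) is taken here to denote the detuning \(\hbar(\omega_{0}/2-\omega)\), and the constant shift \(\hbar\omega(n+1)\) appearing in Eq. (22), which is not generated by the interaction matrix (21) itself, is left aside), the \(2\times 2\) block of Eq. (21) with \(m=n+2\) can be diagonalized directly:

```python
import numpy as np

hbar, omega0, omega, g = 1.0, 2.2, 1.0, 0.1     # illustrative values
Omega = hbar * (omega0 / 2 - omega)             # detuning (assumed meaning of Omega)

for n in range(5):
    coupling = g * np.sqrt((n + 1) * (n + 2))   # off-diagonal element of Eq. (21), m = n + 2
    H_block = np.array([[Omega, coupling],
                        [coupling, -Omega]])
    numeric = np.linalg.eigvalsh(H_block)
    analytic = np.array([-1.0, 1.0]) * np.sqrt(Omega**2 + g**2 * (n + 1) * (n + 2))
    print(n, np.allclose(numeric, analytic))    # True for every n
```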
In conclusion, the Bogoliubov transformations employed to solve the Jaynes-Cummings model provide correct results, and can be seen as a cumbersome procedure for diagonalizing a \(2\times 2\) matrix. In contrast, when the same formalism is generalized to the Tavis-Cummings model, the transformations are not unitary and the "analytic" results are not valid.
## Conflict of interest
The authors have no conflicts of interest to disclose.
## Author's contributions
VSMG made the formal analysis and the original draft writing, JGH contributed to the conceptualization, writing and editing.
###### Acknowledgements.
We acknowledge partial financial support from DGAPA-UNAM project PAPIIT IN109523. |
2304.10851 | What Do GNNs Actually Learn? Towards Understanding their Representations | In recent years, graph neural networks (GNNs) have achieved great success in
the field of graph representation learning. Although prior work has shed light
into the expressiveness of those models (\ie whether they can distinguish pairs
of non-isomorphic graphs), it is still not clear what structural information is
encoded into the node representations that are learned by those models. In this
paper, we investigate which properties of graphs are captured purely by these
models, when no node attributes are available. Specifically, we study four
popular GNN models, and we show that two of them embed all nodes into the same
feature vector, while the other two models generate representations that are
related to the number of walks over the input graph. Strikingly, structurally
dissimilar nodes can have similar representations at some layer $k>1$, if they
have the same number of walks of length $k$. We empirically verify our
theoretical findings on real datasets. | Giannis Nikolentzos, Michail Chatzianastasis, Michalis Vazirgiannis | 2023-04-21T09:52:19Z | http://arxiv.org/abs/2304.10851v1 | # What Do GNNs Actually Learn? Towards
###### Abstract
In recent years, graph neural networks (GNNs) have achieved great success in the field of graph representation learning. Although prior work has shed light into the expressiveness of those models (i. e., whether they can distinguish pairs of non-isomorphic graphs), it is still not clear what structural information is encoded into the node representations that are learned by those models. In this paper, we investigate which properties of graphs are captured purely by these models, when no node attributes are available. Specifically, we study four popular GNN models, and we show that two of them embed all nodes into the same feature vector, while the other two models generate representations that are related to the number of walks over the input graph. Strikingly, structurally dissimilar nodes can have similar representations at some layer \(k>1\), if they have the same number of walks of length \(k\). We empirically verify our theoretical findings on real datasets.
## 1 Introduction
Graphs arise naturally in a wide variety of domains such as in bio- and chemo-informatics [48], in social network analysis [20] and in information sciences [25]. There is thus a need for machine learning algorithms that can operate on graph-structured data, i. e., algorithms that can exploit both the information encoded in the graph structure but also the information contained in the node and edge features. Recently, graph neural networks (GNNs) emerged as a very promising method for learning on graphs, and have driven the rapid progress in the field of graph representation learning [52].
Even though different types of GNNs were proposed in the past years, message passing models undoubtedly seem like a natural approach to the problem. These models, known as message passing neural networks (MPNNs) [24] employ a message passing (or neighborhood aggregation) procedure where each node aggregates the representations of its neighbors along with its own representation to produce new updated representations. For graph-related tasks, MPNNs usually apply some permutation invariant readout function to the node representations to produce a representation for the entire graph. The family of MPNNs has been studied a lot in the past few years, and there are now available dozens of instances of this family of models. A lot of work has focused on investigating the expressive power of those models. It was recently shown that standard MPNNs are at most as powerful as the Weisfeiler-Leman algorithm in terms of distinguishing non-isomorphic graphs [53, 35].
The recent success of GNNs put graph kernels, another approach for graph-based machine learning, into the shade. Unlike GNNs, graph kernels generate representations (implicit or explicit) that consist of substructures of graphs. Such substructures include random walks [27, 23], shortest paths [9] and subgraphs [46, 30]. Therefore, the properties and the graph representations produced by graph kernels are fully-understood. This is not however the case for MPNNs since, despite the great activity in the field, still little is known about the properties of graphs that are captured in the representations learned by those models.
In this paper, we fill this gap by studying the node representations learned by MPNNs, when all nodes are initialized with the same features. We focus on standard models and show that GAT [49] and DGCNN [56] embed all nodes into the same vector, thus they capture no structural properties of the neighborhoods of nodes. Furthermore, we show that the representations that emerge at the \(k\)-th layer of GCN [29] and GIN [53] are related to some notion of walks of length \(k\) over the input graph. We bound the Lipschitz constant of those models with respect to the sum of (normalized) walks. This suggests that MPNNs suffer from the following limitation: structurally dissimilar nodes can have similar representations at some layer \(k\) where \(k>1\). We verify our theoretical analysis in experiments conducted on real-world datasets. Our main contributions are summarized as follows:
* We show that some MPNNs capture no structural properties of graphs, while other MPNNs learn node representations that are related to some notion of walks over the input graph.
* We empirically verify our theoretical findings with experiments on real-world datasets.
* We empirically show that structurally dissimilar nodes can have similar representations at the \(k\)-th layer of a GNN if they have a similar sum of (normalized) walks of length \(k\) emanating from them.
## 2 Related Work
While GNNs have been around for decades [47, 44, 33], it is only in recent years that the scientific community became aware of their power and potential. The increased scientific activity in the field led to the development of a large number of models [11, 31, 18, 3, 17]. Those models were categorized into spectral and spatial approaches depending on the domain in which the convolutions (neighborhood aggregations) were performed. Later, it was shown that all these models follow the same design principle and can be seen as instances of a single common framework [24]. These models, known as message passing neural networks (MPNNs), use a message passing scheme where nodes iteratively aggregate feature information from their neighbors. Then, to compute a representation for the entire graph, MPNNs typically employ some permutation invariant readout function which aggregates the representations of all the nodes of the graph. The family of MPNNs has been studied a lot in the past few years, and several extensions and improvements to the MPNN framework have been proposed. Most studies have focused on the message passing procedure and have proposed more expressive or permutation-sensitive aggregation functions [36, 45, 14, 12], schemes that incorporate different local structures or high-order neighborhoods [26, 2], and non-Euclidean geometry approaches [13], while others have focused on efficiency [21]. Fewer works have focused on the pooling phase and have proposed more advanced strategies for learning hierarchical graph representations [54, 22]. Note also that not all GNNs belong to the family of MPNNs [37, 41, 40].
A considerable amount of recent work has focused on characterizing the expressive power of GNNs. Most of these studies compare GNNs against the WL algorithm and its variants [28] to investigate what classes of non-isomorphic graphs they can distinguish. For instance, it has been shown that standard GNNs are not more powerful than the 1-WL algorithm [53, 35]. Other studies capitalized on high-order variants of the WL algorithm to derive new models that are more powerful than standard MPNNs [35, 34]. Recent research has investigated the expressive power of \(k\)-order GNNs in terms of their ability to distinguish non-isomorphic graphs. In particular, it has been shown that \(k\)-order GNNs are at least as powerful as the \(k\)-WL test in this regard [32]. Recently, various approaches have been proposed to enhance the expressive power of GNNs beyond that of the WL test. These include encoding vertex identifiers [51], incorporating all possible node permutations [36, 16], using random features [43, 1], utilizing node features [55], incorporating spectral information [4], utilizing simplicial and cellular complexes [8, 7] and directional information [5]. It has also been shown that extracting and processing subgraphs can further enhance the expressive power of GNNs [39, 57, 6]. For instance, it has been suggested that the expressive power of GNNs can be increased by aggregating the representations of subgraphs produced by standard GNNs, which arise from removing one or more vertices from a given graph [15, 42]. The above studies mainly focus on whether GNNs can distinguish pairs of non-isomorphic graphs. However, it still remains unclear what kind of structural information is encoded into the node representations learned by GNNs. Some recent works have proposed models that aim to learn representations that preserve some notion of distance between nodes [38]; however, they do not shed light on the representations generated by standard models.
## 3 Preliminaries
### Notation
Let \(\mathbb{N}\) denote the set of natural numbers, i. e., \(\{1,2,\ldots\}\). Then, \([n]=\{1,\ldots,n\}\subset\mathbb{N}\) for \(n\geq 1\). Let also \(\{\!\!\{\}\!\}\) denote a multiset, i. e., a generalized concept of a set that allows multiple instances for its elements. Let \(G=(V,E)\) be an undirected graph, where \(V\) is the vertex set and \(E\) is the edge set. We will denote by \(n\) the number of vertices and by \(m\) the number of edges, i. e., \(n=|V|\) and \(m=|E|\). The adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is a symmetric matrix used to encode edge information in a graph. The element in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column is equal to 1 if there is an edge between \(v_{i}\) and \(v_{j}\), and 0 otherwise. Let \(\mathcal{N}(v)\) denote the neighbourhood of vertex \(v\), i. e., the set \(\{u\mid\{v,u\}\in E\}\). The degree of a vertex \(v\) is \(d(v)=|\mathcal{N}(v)|\). We denote by \(w_{v}^{(k)}\) the number of walks of length \(k\) starting from node \(v\). Finally, let \(\tilde{w}_{v}^{(k)}\) denote the sum of normalized walks of length \(k\), where each walk \((v_{1},v_{2},\ldots,v_{k})\) is normalized as \(1/\left((1+d(v_{2}))\cdots(1+d(v_{k-1}))\sqrt{(1+d(v_{1}))(1+d(v_{k}))}\right)\).
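As a small illustration of this notation (assuming, as is standard, that the number of walks of length \(k\) starting from node \(v\) is the \(v\)-th entry of \(\mathbf{A}^{k}\mathbf{1}\)), walk counts can be computed directly from powers of the adjacency matrix; the toy graph below is an arbitrary example.

```python
import numpy as np

# Toy undirected graph: star with center node 1 and leaves 0, 2, 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

def walk_counts(A, k):
    """Number of walks of length k starting from each node: (A^k) 1."""
    return np.linalg.matrix_power(A, k) @ np.ones(A.shape[0])

for k in (1, 2, 3):
    print(k, walk_counts(A, k))   # k = 1 gives the node degrees
```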
### Message Passing Neural Networks
As already discussed, most GNNs can be unified under the framework MPNN framework [24]. These models follow a neighborhood aggregation scheme, where each node representation is updated based on the aggregation
of its neighbors representations. Let \(\mathbf{h}_{v}^{(0)}\) denote node \(v\)'s initial feature vector. Then, for a number \(K\) of iterations, MPNNs update node representations as follows:
\[\mathbf{m}_{v}^{(k)} =\text{AGGREGATE}^{(k)}\Big{(}\{\!\!\{\mathbf{h}_{u}^{(k-1)}\mid u\in\mathcal{N}(v)\}\!\!\}\Big{)}\] \[\mathbf{h}_{v}^{(k)} =\text{COMBINE}^{(k)}\Big{(}\mathbf{h}_{v}^{(k-1)},\mathbf{m}_{v}^{(k)}\Big{)}\]
where \(\text{AGGREGATE}^{(k)}\) is a permutation invariant function. By defining different \(\text{AGGREGATE}^{(k)}\) and \(\text{COMBINE}^{(k)}\) functions, we obtain different MPNN instances. In this study, we consider the neighborhood aggregation schemes of four models, namely (1) Graph Convolution Network (GCN) [29]; (2) Deep Graph Convolutional Neural Network (DGCNN) [56]; (3) Graph Attention Network (GAT) [50]; and (4) Graph Isomorphism Network (GIN) [53]. The aggregation schemes of the four models are illustrated in Table 1.
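To make the two aggregation schemes that matter for the analysis below concrete, a minimal NumPy sketch of one GCN layer (symmetric normalization with self-loops) and one GIN-0 layer (sum aggregation followed by an MLP, \(\epsilon=0\)) is given here; the ReLU nonlinearity and the two-layer MLP are illustrative choices, not the exact architectures used in the experiments.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])
    deg = A_tilde.sum(axis=1)
    A_hat = A_tilde / np.sqrt(np.outer(deg, deg))   # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)

def gin0_layer(A, H, W1, W2):
    """One GIN-0 layer: H' = MLP((A + I) H), i.e. each node plus the sum of its neighbors."""
    S = (A + np.eye(A.shape[0])) @ H
    return np.maximum(np.maximum(S @ W1, 0.0) @ W2, 0.0)
```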
For node-level tasks, the final node representations \(\mathbf{h}_{v}^{(K)}\) can be directly passed to a fully-connected layer for prediction. For graph-level tasks, a graph representation is obtained by aggregating the final representations of its nodes:
\[\mathbf{h}_{G}=\text{READOUT}\Big{(}\{\!\!\{\mathbf{h}_{v}^{(K)}\mid v\in V\}\!\!\}\Big{)}\]
The above result highlights the limitations of the considered models. Specifically, our results imply that the DGCNN and GAT models encode no structural information of the graph into the learned node representations. Furthermore, combined with a sum readout function, these representations give rise to a graph representation that can only count the number of nodes of the graph. If the readout function is the mean operator, then all graphs are embedded into the same vector. With regards to the other two models, we have bounded the Lipschitz constant of GIN-0 and GCN with respect to the number of walks and sum of normalized walks starting from the different nodes, respectively.
To experimentally verify the above theoretical results, we trained the GIN-0 and GCN models on the IMDB-BINARY and the ENZYMES graph classification datasets. For all pairs of nodes, we computed the Euclidean distance of the number of walks (resp. sum of normalized walks) of length 3 starting from them. We also computed the Euclidean distance of the representations of the nodes that emerge at the corresponding (i. e., third) layer of GIN-0 (resp. GCN). We finally computed the correlation of the two collections of Euclidean distances and the results are given in Figure 1. Clearly, the results verify our theoretical results. The distance of the number of walks is perfectly correlated with the distance of the representations generated by GIN-0 with no biases, while the distance of the sum of normalized walks is perfectly correlated with the distance of the representations produced by GCN. We also computed the Euclidean distance of the representations of the nodes that emerge at the third layer of the standard GIN-0 model (with biases), and we compared them against the distances of the number of walks. We can see that on both datasets, the emerging correlations are very high (equal to 0.99). We observed similar values of correlation on other datasets as well, which indicates that the magnitude of the bias terms of the MLPs might be very small and that our assumption of ignoring biases is by no means unrealistic.
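A hedged sketch of this correlation computation is given below (the embedding matrix `H` stands for the node representations extracted at the third layer of the trained model, which are assumed to be given):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def walk_embedding_correlation(A, H, k=3):
    """Correlate pairwise distances of length-k walk counts with distances of embeddings H."""
    walks = np.linalg.matrix_power(A, k) @ np.ones(A.shape[0])
    d_walks = pdist(walks[:, None])   # |w_u^{(k)} - w_v^{(k)}| for all node pairs
    d_embed = pdist(H)                # Euclidean distances of the node representations
    return pearsonr(d_walks, d_embed)[0]
```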
Based on the above theoretical and empirical findings, it is clear that two nodes can have dissimilar representations at the \(k\)-th layer, but obtain similar representations at the \((k+1)\)-th layer of some MPNN model. For instance, for GIN-0, this can be the case if the two nodes have different numbers of walks of length \(k\), but similar numbers of walks of length \(k+1\). We give in Figure 2 an example of three nodes (the three red nodes) that have structurally dissimilar neighborhoods, but their representations produced by GIN-0 after two
Figure 1: Euclidean distances of the representations generated at the third layer of the different models vs. Euclidean distances of the number of walks (or sum of normalized walks) of length 3 starting from the different nodes.
Figure 2: The number of walks of length 2 starting from the red nodes of the three graphs is equal to 10. These three nodes could be embedded closely to each other even though they are structurally dissimilar.
neighborhood aggregation layers are very similar to each other (or identical in the case where biases are omitted). In all three cases, the number of walks of length 2 starting from the red nodes is equal to 10. It is also clear that the three nodes have different degrees from each other.
## 5 Conclusion
In this paper, we focused on four well-established GNN models and we investigated what properties of graphs these models can capture. We found that two of them capture no structural properties of graphs since they embed all nodes into the same feature vector. The rest of the models learn node representations that capture the number of (normalized) walks emanating from the different nodes. We showed that the GIN-0 model can embed two structurally dissimilar nodes into similar vectors if the numbers of walks of length \(k\) (for \(k>1\)) starting from the two nodes are similar to each other. In the future, we plan to explore the impact of node features on the learned representations and how the different neighborhood aggregation schemes handle those features.
|
2304.01784 | The Role of Free Electrons in Ball Lightning Creation | After more than 180 years of research, ball lightning is still an unsolved
problem in atmospheric physics. Since no progress can be expected without a
controlled production of such objects in a laboratory, this report analyses a
carefully selected subset of the observations, focusing on cases where the
creation of ball lightning has been witnessed in order to identify the
circumstances of their creation. Surprisingly, it was possible to establish
that in many cases negative corona was involved as a precursor. Free electrons
produced in a negative corona appear to be required for a number of processes,
especially the creation of the visible plasma of the ball lightning and in
forming a receiving antenna for the electromagnet pulse of the return stroke of
the initiating cloud-ground lightning. In a different line of arguments,
localized electromagnetic structures, being special solutions of Maxwells
equations, were identified as the most likely model for the physical nature of
ball lightning. The antenna required to produce such a structure can also be
due to the free electrons. The free electron hypothesis allows outlining
further actions in terms of data collection, computer simulation and
experiments. | Herbert Boerner | 2023-03-28T10:40:45Z | http://arxiv.org/abs/2304.01784v1 | The Role of Free Electrons in Ball Lightning Creation
###### Abstract
After more than 180 years of research, ball lightning is still an unsolved problem in atmospheric physics. Since no progress can be expected without a controlled production of such objects in a laboratory, this report analyses a carefully selected subset of the observations, focusing on cases where the creation of ball lightning has been witnessed in order to identify the circumstances of their creation. Surprisingly, it was possible to establish that in many cases negative corona was involved as a precursor. Free electrons produced in a negative corona appear to be required for several processes, especially the creation of the visible plasma of the ball lightning and in forming a receiving antenna for the electromagnet pulse of the return stroke of the initiating cloud-ground lightning. In a different line of arguments, localized electromagnetic structures, being special solutions of Maxwell's equations, were identified as the most likely model for the physical nature of ball lightning. The antenna required to produce such a structure can also be due to the free electrons. The free electron hypothesis allows outlining further actions in terms of data collection, computer simulation and experiments.
## Introduction
Ball lightning has been observed since antiquity. Compilations of well-described observations contain accounts which go back to the middle of the 17th century [1], and new observations are routinely recorded [2].
Yet after more than 180 years of research [3] ball lightning is still an unsolved problem in atmospheric physics [4, 5, 6]. There is no consensus on the physical nature of these objects, and so far, there are no experiments which have created objects that match the observed characteristics of ball lightning.
Recently, significant progress has been achieved from correlation of ball lightning observations with data from lightning detection networks [6, 7], establishing a cause - effect relation between these two phenomena.
Since no progress can be expected without the controlled production of such objects in the laboratory, it is of importance to define the environmental circumstances that lead to the creation of ball lightning.
The few cases where the start of ball lightning objects was witnessed are the main sources of information, where one can learn something about the physics of the ball lightning creation process.
The aim of this report is to collect all information that might be relevant, even if only remotely, to the creation of such objects in the laboratory, and to outline possible experiments.
## Observations
The creation of ball lightning has been observed in different situations:
* Ball lightning objects can branch off the channel of linear lightning. This is the only case, where photographic evidence is available [6] (case 2).
* The ball lightning object can be produced from a conductor, sometimes a metallic fence, which had been hit by lightning [6] (case 23).
* It can appear "out of thin air", far away from lightning channels and conductors. This behavior is well documented [1], [6] (case 1).
* Near or in aircraft during flight.
Since Brand's analysis of observations [1] it has been established that the creation of ball lightning is usually associated with linear lightning, or at least with thunderstorm conditions. An exception are the observations in aircraft, where the correlation with external causes is less clear.
For the further discussion, it is assumed, that there is only one type of ball lightning and that the observations do not describe several similar phenomena. Considering the current state of understanding of this phenomenon, this appears to be a reasonable starting point, observing the principle of simplicity. This assumption is also supported by the analysis of a European database [8], which indicates the existence of a core phenomenon.
Since the reports by casual observers are the only source for information on this subject, especial care is required to establish an information base that is as dependable as possible. Therefore, only the following types of information have been used in this analysis:
* Characteristics observed repeatedly and independently over a long period of time, which are therefore likely to be correct.
* Single events which are very well documented and provide detailed information.
* Accounts by people with professional training.
Not all situations where ball lightning is created provide useful information on the production process and for the design of experiments. The creation of a lightning channel is one example, and another one is the appearance of ball lightning in aircraft. The lightning channel is of course an ample source of energy, but the actual situation can neither be analyzed properly, nor can it be recreated faithfully in a laboratory. The situation in an aircraft poses different problems: how such an object can be created in the Faraday cage of a modern aircraft is hard to understand.
The situation which imposes the most stringent constraints on the physical causes is the creation of ball lightning in air at a distance from the lightning channel.
A lightning stroke can act at a distance only via three mechanisms:
* By the quasi-static electric field, either of the approaching leader or the charge in the thundercloud,
* by the electromagnetic pulse (EMP) of the return stroke,
* or by the induction due to the current change of the return stroke.
Induction can be excluded in the cases where the distance was of the order of kilometers.
An example of production of ball lightning at a large distance from the lightning is the Neuruppin case, where at least 11 ball lightning objects were created by a positive CG flash with an exceptional strength of 370 kA peak current [6], [9]. The lightning was located by the BULDS system more than five kilometers east
of the region, where the ball lightning objects were observed. Especially interesting are two observations, where the appearance of ball lightning objects was seen inside houses. The two objects appeared in a living room and in a work shed. Both objects were very bright, but short-lived. Similar observations have been reported several times:
* A ball lightning object appeared during a thunderstorm inside a room with a large window [10].
* Report by Turner [6] (case 12). One object appeared indoors under a Plexiglas skylight, above a round brass table, its diameter roughly matching that of the table. The object was very long-lived. The observer reported a high electric field.
* Various cases are mentioned in Brand's book: cases number 3 and 4 in a room, number 89 over the plate of the stove, 99 from the stove, 117 above a table, 137 along the lamp, 189 in a petroleum lamp.
The electric field of a thundercloud can produce corona discharges at elevated objects like masts, antennas, or spires, which can become visible in low-light conditions if the discharge is strong enough. Such a visible corona is called St. Elmo's fire. In Neuruppin, visible corona was observed on a metallic sieve resting on a wheelbarrow [9] (witness 7).
Leaders approaching ground also create a rising electric field, which first leads to corona and then to the creation of streamers, developing into an upward connecting leader that finally establishes the contact to ground.
The resulting return stroke, which discharges the leader's charge, produces the electromagnetic pulse (EMP).
For ball lightning objects created far away from a lightning channel, only the corona and the EMP can be the factors initiating and driving the production process of ball lightning. It is therefore essential to take a closer look at the properties of the corona discharge and its interaction with the EMP.
### Positive lightning and ball lightning
In Neuruppin, the multiple ball lightning objects were created by a very strong positive lightning, whose maximum current of 370 kA was at the upper range of observed lightning currents. Obviously, the conditions created by this super bolt were very favorable to produce ball lightning.
There is now considerable evidence that positive cloud-ground lightning has a much higher probability of creating ball lightning than negative CG lightning [6, 7, 11]. The analysis of Keul and Diendorfer points at roughly a factor of 10 between the probability of positive CG versus negative CG lightning, although negative lightning of rather moderate strength clearly also generates ball lightning objects. The main difference between positive and negative CG lightning is that they create different types of coronas at ground: positive lightning will produce negative corona, whereas negative lightning will produce positive corona. It appears likely, that the different properties of negative and positive corona are the reason behind the difference in ball lightning production probability.
### Negative corona
Corona discharges develop at sharp points, where the electric field is enhanced, and an ionization avalanche can develop. If the point is positively charged, i.e., it is the anode, the electrons created in the ionization region ahead of the point are collected by the anode and the positive ions are moving away in the electric field. If the point is negatively charged, it is acting as the cathode and the free electrons created in the ionization region are moving away from the point. Since oxygen molecules have an electron affinity of about 0.45 eV (Table 1), free electrons become quickly attached within less than 100 nsec [12].
Therefore, their velocity is reduced to the drift velocity of the negative ion, which is about three orders of magnitude lower.
Since the negative ions move slowly, a considerable space charge is built up and, in the case of a negatively charged tip, it reduces the electric field stopping avalanche creation. When this negative space charge diffuses away, the field again becomes strong enough to create a new avalanche, and the whole process repeats itself. This leads to very regular pulses in the case of a negative tip, called Trichel pulses. Two competing processes are involved: ionization by electrons and electron capture, and the balance between them determines if the avalanche grows or not. The important factor is the strength of the electric field. If it is strong enough in the regions further away from the emissive tip, the discharge can propagate and form a tiny filament of conductive plasma which grows into the region of lower electric field, a streamer.
Only negative corona creates Trichel pulses. These pulses produce quite regular bunches of negative charge, which drift away under the influence of the electric field. In the case of the field of a thunderstorm cloud or an approaching leader, this gives the negative space charge a periodic structure in the vertical direction.
Streamers are conductive filaments with field enhancement at their tips, which enables them to grow into regions of lower electric field [13]. Negative and positive streamers are quite different [14]. The crucial difference with respect to ball lightning generation is that negative streamers require a more than two times higher field (about 9 kV/cm versus about 4 kV/cm) than positive ones for propagation. This means that positive CG lightning produces fewer and smaller streamers starting from the ground, a factor which also adversely influences the capability of lightning rods to attract positive leaders [6]. It is likely that streamers are normally created by an approaching positive leader and then compete with ball lightning formation for the available energy. Only in rare cases, where no streamers could form, are ball lightning objects initiated. Case number 184 in Brand's book reports on such an event, where in a village in France a strong lightning created large streamers everywhere, except above a body of water where a ball lightning object was created. The water provided a flat, conducting surface where corona could develop, but which was less suitable for the initiation of streamers.
The negative oxygen ions will react further with other oxygen and nitrogen molecules, forming ozone and nitrogen oxides.
At lower field strengths, free electrons are almost absent in the negative space charge, but they can be created again if the electric field is high enough. Figure 1 shows the fraction of the negative charge that is due to free electrons. At 400 kV/m half of the negative charge is electrons, which were detached from the negative ions. The detachment is much easier than the ionization of neutral molecules, since the energies required are much lower (Table 1,Table 2).
\begin{table}
\begin{tabular}{l|l|l} \hline Anion & Electron & Mobility \\ & affinity [eV] & [cm\({}^{2}\)/Vsec] \\ \hline O\({}^{-}\) & 1.462 & \\ \hline O\({}_{2}^{-}\) & 0.448 & \\ \hline O\({}_{3}^{-}\) & 1.899 & \\ \hline N\({}_{2}\)O\({}_{2}^{-}\) & 3.351 & 2.52 \\ \hline NO\({}_{3}^{-}\) & 3.937 & 2.14 \\ \hline \end{tabular}
\end{table}
Table 1: Electron affinity and mobility of negative ions ([https://webbook.nist.gov/chemistry/](https://webbook.nist.gov/chemistry/))
The sequence of events for a positive CG stroke is:
* The positive leader is approaching ground from the cloud above.
* When the electric field at ground is sufficient, a negative corona with Trichel pulses is produced.
* The negative oxygen ions form a periodic space charge, drifting vertically in the leader's field.
* When the electric field becomes larger, electrons are detached from negative oxygen, now creating a periodic structure of free electrons accelerating under the electric field of the leader.
* When some of the electrons have gained enough energy, they are able to ionize neutral gas molecules. The positive ions are left behind, and the multiplied electron bunch moves on.
* When no streamer discharge could be started, the process stops when the leader from above is connected to the ground, and the return stroke discharges the lightning channel.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Gas & First excitation energy [eV] & First ionization energy [eV] \\ \hline H & 10.2 & 13.6 \\ \hline H\({}_{2}\) & 10.8 & 15.9 \\ \hline N\({}_{2}\) & 6.3 & 15.6 \\ \hline O\({}_{2}\) & 7.9 & 12.1 \\ \hline H\({}_{2}\)O & 7.6 & 12.7 \\ \hline CO\({}_{2}\) & 10.0 & 14.4 \\ \hline SF\({}_{6}\) & 6.8 & 15.6 \\ \hline He & 19 & 24 \\ \hline \end{tabular}
\end{table}
Table 2: Excitation and ionization energy of gases ([https://webbook.nist.gov/chemistry/](https://webbook.nist.gov/chemistry/))
Figure 1: Fraction of free electrons in space charge, data from [12].
* The electromagnetic pulse of the return stroke accelerates the free electrons in the space charge created by the corona.
* When the EMP stops, the electrons are attracted back to the positive ions created in their wake during acceleration upwards.
* The accelerated electrons emit electromagnetic radiation.
This scenario is textbook knowledge, containing no speculation. The only open question is how this sequence of events sometimes leads in the end to the creation of a ball lightning object.
The two sources of energy to produce ball lightning in these situations are the electric field and the EMP. Since the electric field is a weak source of energy, providing only about 40 J/m\({}^{3}\) at breakdown strength, the EMP is most likely the major source of energy for the ball lightning object. It is known that the EMP of strong, especially positive, lightning interacts with the free electrons in the ionosphere, creating the ring-shaped luminous events called ELVES.
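The quoted value of about 40 J/m\({}^{3}\) follows directly from the energy density of the electric field, \(\frac{1}{2}\epsilon_{0}E^{2}\), evaluated at the breakdown strength of air of roughly 3 MV/m; a one-line check:

```python
eps0 = 8.854e-12            # vacuum permittivity [F/m]
E_breakdown = 3.0e6         # approximate breakdown field of air at sea level [V/m]
u = 0.5 * eps0 * E_breakdown**2
print(u)                    # ~39.8 J/m^3, consistent with the ~40 J/m^3 quoted above
```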
The free electrons of the negative corona are much closer to the source of the EMP than the electrons in the ionosphere and will therefore gain much more energy from it.
### Physical nature of ball lightning: the theory that fits the observations best
To get an idea how the structured electron cloud generated by the negative corona could possibly generate a ball lightning object, one needs to know what type of physical structure ball lightning probably is.
Since there is no agreement on the theory, one must pick the most likely one, which is the one that agrees best with the observed properties. Stenhoff remarks that until 1999 there was only one attempt to correlate observations with the theoretical models1, but in [6] (chapter 11) the conclusion was reached that the theory that fits best is based on special solutions of Maxwell's theory [15]. This conclusion was mainly reached by the observation that ball lightning can pass through dielectric objects, like glass windows. Often the windowpanes are undamaged, but sometimes holes are punched into the panes by "cutting" out a piece of glass, which can be more or less round.
Footnote 1: The actual report could not be obtained by the author: Hubert, P. (1996) Novelle enquête sur la foudre en boule—analyse et discussion de resultats, Rapport PH/SC/96001, Centre d’Etudes Nucleaire, Saclay.
The passage through windows is well documented. The earliest one is case 192 by Brand in 1914, and the most recent one was in 2017 in Devon [6] (case 15), where a negative CG lightning hit the building adjacent to the observer's. Rakov [6] (case 17) lists one account, and [16] provides 43 accounts from Russia. Another report is from the Cavendish laboratory in Cambridge [6] (case 14).
This capability of ball lightning objects demonstrates that these objects cannot be entirely made of matter; they must primarily be composed of electromagnetic radiation, which produces the visible envelope of plasma. These exact solutions of Maxwell's equations are not electromagnetic waves propagating through space; instead they have looped electric field lines of finite extent and a localized appearance in all three spatial dimensions [15]. These objects behave in fact more like particles than waves. The energy is stored in the electromagnetic field, and the stability is also provided by the field configuration, so no external reflector is needed. The configuration of the fields is not fixed; there are a huge number of possible structures [15], including ones that are tangled and look like a "ball of yarn".
The visible plasma envelope is generated by the high electric fields of the EM structure, which accelerate free electrons such that air molecules become ionized and excited. Thus, the energy stored in the EM
structure is slowly depleted, until the structure cannot be maintained anymore and the ball lightning object either just disappears or explodes or implodes with a noise. When such a structure moves through air, new plasma is created ahead of the object, whereas behind it the plasma recombines.
The conclusion that ball lightning objects are not composed of matter but of radiation is also supported by an observation of the German physicist Walther Gerlach, who estimated the speed of the object he saw at 1200 m/s [17], a speed that rules out any material object.
Cameron calls these objects "electromagnetic disturbances" [15], and proposes antenna configurations to produce them in the laboratory [18].
In the case of ball lightning objects created in open air, there is no metal antenna available, but the structure of the space charge of free electrons can function as such. How exactly the accelerated electrons can produce such an electromagnetic structure is of course an open question, but it is evident that free electrons are a necessary part of the process.
Earlier papers discussing this type of model are [19] and [20].
### Producing a discharge in air without electrodes
Another argument also leads to the requirement of free electrons in air. Ball lightning objects are luminous, and this shows that at least their outer, visible parts are composed of a thin plasma.
In air at normal pressure, away from any electrodes, it is difficult to start a plasma. The initiation of a plasma requires free electrons, which can be accelerated by electric fields to energies that allow the ionization of further air molecules, starting an electron avalanche. At sea level, free electrons are created by radioactive decay of substances like radon or by secondary particles of cosmic radiation like muons. The production rate is about 10 ion pairs per cubic centimeter per second [21], leading to about 1000 ions per cubic centimeter.
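The step from the production rate to the equilibrium ion density follows from balancing production against recombination; a rough order-of-magnitude sketch, where the recombination coefficient is an assumed textbook value rather than a figure taken from [21]:

```python
import math

q = 10.0          # ion-pair production rate, pairs per cm^3 per second (from the text)
alpha = 1.6e-6    # assumed small-ion recombination coefficient, cm^3/s (typical order of magnitude)

# Steady state: production balances recombination, q = alpha * n^2  =>  n = sqrt(q / alpha)
n_eq = math.sqrt(q / alpha)
print(f"Equilibrium ion density: ~{n_eq:.0f} per cm^3")   # ~2500 /cm^3, the same order as the ~1000 /cm^3 quoted above
```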
The ionization of nitrogen and oxygen requires more than 10 eV (Table 1 and Table 2), so the electric field required for breakdown at normal pressure is 3 MV/m. It is nevertheless possible to start a plasma at much lower field strengths, which is demonstrated by the creation of plasmoids in a microwave oven. The electric fields in a microwave oven are about 2-3 kV/m,2 which is three orders of magnitude below the breakdown field.
Footnote 2: Estimated for a power of 1 kW radiating into the oven cavity by the formula for Poynting’s vector.
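Footnote 2's estimate can be reproduced by treating the magnetron power as a plane wave crossing the oven cavity; a minimal sketch, where the cavity cross-section is an assumed value of roughly 0.05 m\({}^{2}\):

```python
import math

Z0 = 376.7     # impedance of free space, ohm
P = 1000.0     # power radiated into the cavity, W (footnote 2)
A = 0.05       # assumed cavity cross-section, m^2 (roughly 25 cm x 20 cm)

S = P / A                          # mean power flux density (Poynting vector magnitude), W/m^2
E_peak = math.sqrt(2 * Z0 * S)     # plane-wave relation: <S> = E_peak^2 / (2 * Z0)
E_rms = E_peak / math.sqrt(2)
print(f"E_peak ~ {E_peak/1e3:.1f} kV/m, E_rms ~ {E_rms/1e3:.1f} kV/m")   # a few kV/m, as stated in the text
```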
In such a low electric field, breakdown can only be achieved if an additional source of electrons is available, which can be supplied by a burning candle, a burning match, by fine carbon fibers, by a sharp metallic object, or a combination of those3. Flames of burning hydrocarbons have long been known to produce ions. The main process of the chemo-ionization is [22]
Footnote 3: For example: [https://www.angelfire.com/electronic/cwills/microwave.html](https://www.angelfire.com/electronic/cwills/microwave.html)
CH + O \(\rightarrow\) CHO\({}^{+}\) + e\({}^{-}\)
The visible flame is positively charged, as can be seen when, for example, a candle flame is burning in a strong electric field: the flame is drawn towards the negative electrode. Most of the free electrons from this reaction will be rapidly attached to oxygen, but not all. If the number of electrons is sufficiently high, there will also be enough electrons in the high-energy tail of the distribution to ionize air molecules, so that ionization dominates recombination and a plasma is started. Once initiated, the plasma is self-sustaining, since the free electrons absorb the microwaves and their full energy is coupled into the plasma. The hot plasmoid will rise to the top of the microwave oven and is therefore usually contained in a glass vessel by the experimenters.
The increase in ionization due to burning hydrocarbons leads to a high concentration of free electrons, which then lowers the threshold for breakdown of air by three orders of magnitude.
A similar effect is demonstrated by an interesting observation of Brand, the significance of which has been overlooked by scientists until now. In his summary of ball lightning properties [1, 23], he states:
"They [ball lightning objects] are 'attracted' by the air in closed spaces; these balls enter the latter via an open window or door, and even through narrow cracks; however, they display a marked preference for entering via the flue gases of chimneys which have a better electrical conductance without self-induction, so that very often they emerge from the hearth and thus gain entrance into the kitchen."
Since chimneys are non-transparent, entry by this route cannot actually have been observed. More likely, the ball lightning objects were created in places where hydrocarbons are burning, that is, above the hearth or in an open fireplace.
A possible mechanism which may explain this surprising observation is the amplification of electron bunches. If free electrons move in a gas which contains many negative ions, the electrons can be stripped easily from the molecules, since the binding energies are much lower than the ionization energies. The flue gases work as an amplifying medium, which increases the number of electrons in the bunches of the Trichel pulses.
### Trichel pulses
To create an electromagnetic structure of the type described above, a specially tailored antenna is needed [18]. In open air, where no metallic antenna exists, a spatial distribution of the free electrons must function as an antenna. An unstructured blob of electrons will certainly be inadequate.
As described above, negative corona will run in a mode where Trichel pulses are created. In a homogeneous electric field, they will produce a periodic pattern of bunches of drifting negative ions. The mobility of negative ions in air at normal pressure is between 200 and 250 mm\({}^{2}\)/Vs [12]. The frequency of the Trichel pulses depends on the current the emissive point of the cathode can supply. For small corona currents, it is in the range of 100 Hz to about 10 kHz. The maximum frequency that could be obtained for Trichel pulses is about 3 MHz [24]. In Figure 2, the spacing of these ion bunches is shown, under the assumption of a constant electric field and for different frequencies. Typically, the spacing is in the submillimeter range, between 0.1 mm and 1 mm. The ion bunches thus form a fine spatial grating. If the electric field jumps to a value where free electrons are created, they will start from the ion bunches, retaining the periodic structure to a certain degree. After the EMP, which will deposit energy in the free electrons, the electron bunches will be attracted by the freshly created positive ions, and the electromagnetic radiation from the accelerated, individual electron bunches will interfere. It is therefore possible that this unusual periodic arrangement of negative ions is the source of electromagnetic radiation with wavelengths in the range of 0.1 to 1 mm, corresponding to frequencies between 300 GHz and 3000 GHz. The hypothesis that terahertz radiation is the source of ball lightning objects is certainly quite speculative, but it is supported by some observational facts. Ball lightning sometimes moves into keyholes, or it passes metallic mosquito screens [5] (20.2.4), which have a typical mesh size of one millimeter. The wavelengths of the electromagnetic structures must therefore be smaller than the openings in these metallic objects.
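Under the constant-field assumption behind Figure 2, the bunch spacing is simply the ion drift distance covered in one pulse period, d = \(\mu\)E/f; a small sketch of that relation, where the field value is an illustrative assumption rather than the one used for the figure:

```python
# Spacing of drifting ion bunches from Trichel pulses, assuming a constant electric field:
# drift distance travelled between two pulses, d = mu * E / f
mu = 200e-6                    # negative-ion mobility, m^2/(V s) (200 mm^2/(V s), as in the text)
E = 1e6                        # illustrative electric field, V/m (an assumption, not taken from Figure 2)

for f in (3e5, 1e6, 3e6):      # Trichel pulse frequencies, Hz
    d = mu * E / f             # bunch spacing, m
    print(f"f = {f:.0e} Hz  ->  spacing = {d*1e3:.2f} mm")   # sub-millimeter values, consistent with the range quoted above
```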
It should also be noted that devices like magnetrons, klystrons, or gyrotrons also work with bunches of electrons, but of course in a high vacuum. These devices deliver radiation continuously, whereas the radiation which may form the ball lightning object needs only to be produced in a very short burst, but then continues in a localized form.
In a realistic setting, not only one point will produce a corona, but there will be several emissive points, each running at a different current and frequency. In a case mentioned by Turner [6] (case 10) an unusually large ball lightning object occurred indoors above a metallic table, which was comparable in size to the ball. In this case, the whole surface of the table was acting as a cathode.
### Ball lightning creation in situations not associated with positive CG lightning
The hypothesis that electrons from negative corona are the central mechanism for ball lightning creation explains the preference for positive CG lightning, but ball lightning is often observed to originate in other situations. In particular, negative CG lightning of rather normal strength has often produced ball lightning [7]. In such cases, the corona produced at ground level by the field of the stepped leader will be a positive one, but there are secondary effects which can lead to a negative corona. Lightning can hit metallic fences, open-air telephone lines, or power lines, which then become negatively charged. Negative corona can therefore be produced far away from the point where the lightning hits the conductor, and ball lightning has often been observed to exit from such conductors, even from power sockets [6] (cases 21, 23, 24).
Another possibility to create a negative corona by a negative CG stroke is by electrostatic induction. In the case of the unlucky Professor Richmann, who died during his attempts to duplicate Franklin's experiment, his equipment was an ungrounded lightning rod. The engraver named Sokolov described what happened during a thunderstorm: "Richmann was a foot away from the iron rod, when a pale ball of fire, the size of a fist, came out of the rod without any contact whatsoever. It went straight to the forehead of the professor, who in that instant fell back without uttering a sound." The upper end of the lightning rod will have sent out a positive upward connecting streamer, and therefore the lower end of the rod will have been negatively biased.
The occurrence of ball lightning along lightning channels is well documented, but it is obviously a rare phenomenon. Around the core of the lightning channel, which is highly conductive and usually negatively biased, a corona
Figure 2: Spacing of Trichel pulses (for a mobility of 200 mm\({}^{2}\)/Vsec)
sheath develops, which stores the charge delivered by the lightning channel. In rare cases, ball lightning objects are created in this corona sheath. In these cases, the ball lightning objects often store a large amount of energy [6] (case 2).
Some curious observations of ball lightning also become understandable in the light of the negative charge hypothesis. In Italy, while a waterfall was being filmed, a ball-lightning-like object was accidentally recorded on video [25]. Waterfalls are copious sources of negative ions [26], so it is possible that they were the cause of this rare observation.
Ball lightning is also observed under conditions where the electric field is high, but where no thunderstorm is yet near, and no lightning is observed. Also, in some cases, ball lightning has been produced artificially and accidentally, such as by drawing an arc from a radio transmitter [6] (case 11, 18). In these situations, corona and consequently a space charge was obviously existing, but to what extent free electrons were present is not clear.
The observation of ball lightning inside modern, all-metallic airplanes presents the hardest problems. Sometimes the ball lightning appears behind the cockpit window [27], or from the pilot's cabin, as in Jennings's report [6] (case 19). Obviously, the metallic body of the airplane acts as a Faraday cage, so no high electric fields can exist inside the plane, but the cockpit windows may be charged by the passage through charged regions in the clouds. Sometimes the plane was hit by lightning, as in the case reported by Jennings, but sometimes this is not the case ([6], page 3).
### Summary
Free electrons appear to be fundamental for the creation of ball lightning objects. Because of their small mass, only electrons can easily be accelerated enough by electric fields so that they can start a plasma in air at normal pressure. They are also essential for absorbing the energy of the EMP of the return stroke, and only electrons can function as a transmitting antenna, which is needed to produce the localized electromagnetic structures. This hypothesis also allows understanding observations that hitherto were unexplained, for example the tendency of ball lightning objects to appear from fires or close to hearths. The negative ions in the flue gases provide an amplification medium for electron bunches originating from Trichel pulses of the negative corona. All situations where ball lightning was observed to originate are either associated with negative corona or at least with corona of unknown polarity.
The hypothesis that ball lightning is basically a localized electromagnetic structure based on a special solution of Maxwell's equations, is also compatible with the free electron hypothesis. Both hypotheses - free electrons as the fundamental means to create ball lightning and the localized electromagnetic structures - rest on independent lines of arguments and are well-supported by reliable observations.
The hypothesis that free electrons drive the creation of ball lightning objects offers a clear path for experimental verification. The situation, where ball lightning is created in air, can be recreated in the laboratory by producing a negative corona with Trichel pulses and then irradiating it with an electromagnetic pulse. Also, the situation where ball lightning is emanating from electric conductors, can be recreated in a laboratory. Nevertheless, it may be important to simulate the behavior of the negative space charge in Trichel pulses first in an appropriate model, to gain some insight into the settings of relevant parameters.
## Conclusion
The free electron hypothesis allows drawing several conclusions. For the collection of observational reports, it should become standard to check for circumstances that could provide negative space charge,
like burning hydrocarbons, air ionizers, copying machines, laser printers and the like. So far, this type of information was only rarely reported. Also, the correlation with the type of the initial lightning is essential.
Computer simulations of negative corona, especially of Trichel pulses under pulsed excitation, offer the best chances to understand the processes prior to the self-organization which creates ball lightning objects. Such simulations will also be essential to define appropriate parameter settings for experiments.
For experiments, the most promising approach is probably one that recreates the situation that is produced by positive lightning. Alternatively, one could try to bias a conductor by a negative pulse of high voltage, imitating the cases where ball lightning emanates from conductors. Both approaches are well within the capabilities of normal physics laboratories.
Generally, both the process of self-organization that leads to the formation of ball lightning and the localized electromagnetic structures are of considerable scientific interest. The localized electromagnetic structures, which basically produce an electrodeless discharge in a gas, will probably have several interesting applications, which may also include nuclear fusion.
|
2310.12101 | Design and testing of an affordable desktop wind tunnel | Wind tunnels are a key source of data collection, but their cost and size can
be a significant obstacle to their acquisition and usage, especially for
applications such as instrument calibration, instruction, or in-class
demonstrations. Here we propose a design for a cost-effective, desktop wind
tunnel. This design takes advantage of readily available, inexpensive
materials. Special consideration was taken to allow the wind tunnel to be
serviceable, as well as giving the operator the ability to change key features
without a complete redesign. There are three main sections, the first being a
fan enclosure, which holds seven ducted fans in a hexagonal array. The second
section holds honeycomb flow straighteners, and provides an enclosed volume
suitable for larger, lower-speed experiments. The third section is a
contraction, terminating in a 2in x 2in, higher-speed square section. The wind
tunnel has a footprint of approximately 13.5in x 5.5in, making it small enough
to be portable and to fit on a desk. An off-the-shelf masked stereolithography
apparatus (MSLA) 3D printer was used to prepare the parts. This allows the wind
tunnel to be built for under \$500; even including the cost of a 3D printer,
the overall cost remains under \$1,000. This design is able to produce flow at
up to 44.1 m/s, enabling a variety of aerodynamic demonstrations. | Miguel De La Cruz, Paolo Luzzatto-Fegiz | 2023-10-18T16:43:19Z | http://arxiv.org/abs/2310.12101v1 | # Design and testing of an affordable desktop wind tunnel
###### Abstract
Wind tunnels are a key source of data collection, but their cost and size can be a significant obstacle to their acquisition and usage, especially for applications such as instrument calibration, instruction, or in-class demonstrations. Here we propose a design for a cost-effective, desktop wind tunnel. This design takes advantage of readily available, inexpensive materials. Special consideration was taken to allow the wind tunnel to be serviceable, as well as giving the operator the ability to change key features without a complete redesign. There are three main sections, the first being a fan enclosure, which holds seven ducted fans in a hexagonal array. The second section holds honeycomb flow straighteners, and provides an enclosed volume suitable for larger, lower-speed experiments. The third section is a contraction, terminating in a 2" x 2", higher-speed square section. The wind tunnel has a footprint of approximately 13.5" x 5.5", making it small enough to be portable and to fit on a desk. An off-the-shelf masked stereolithography apparatus (MSLA) 3D printer was used to prepare the parts. This allows the wind tunnel to be built for under $500; even including the cost of a 3D printer, the overall cost remains under $1,000. This design is able to produce flow at up to 44.1 m/s, enabling a variety of aerodynamic demonstrations.
## 1 Introduction
Wind tunnels remain an essential tool for aerodynamics investigations; however, constraints associated with cost and overall size can pose a significant practical obstacle to their access. Here we focus on proposing improvements to small wind tunnels, while laying the groundwork for scaling the design up to reduce the overall footprint of larger setups.
A large body of knowledge has been accumulated on wind tunnel design [1][2][3], including aspects such as contraction shape [4], grids and screens for flow uniformity [4][5], and use of numerical simulation to predict performance [6]. There has been recent interest in improving small wind tunnels, with varied applications ranging from animal experiments [7][8] to testing micro-energy harvesters [9]. Notably, [7] introduced a compact design for a closed-loop tunnel, making innovative use of a large number of turning vanes and honeycomb structures to support smooth flow across turns. This design used a single fan. In [9], a blower fan was combined with a straight-walled diffuser and contraction, separated by a screen, to produce more uniform flow in a blowing, open-section layout. The use of a single large fan or blower can simplify mechanical design, but can also result in significant nonuniformities in the flow leaving the fan, requiring large distances between the fan and the test section.
The main objective of this paper is to present a method for building a cost effective, portable and reliable wind tunnel that can produce optimal conditions for data collection for different scenarios such as in class demonstrations and sensor calibration. An open-return, blowing-type layout was selected to achieve highest possible portability and flexibility of use. To produce more uniform flow from the outset, and therefore reduce the separation distance needed between fan and test section, flow was generated by replacing the conventional single fan with an array of smaller fans. Resin 3D printing allows inexpensively producing complex parts with smooth surface finish. The tunnel design is described in section 2, together with the flow measurement setup. Flow data are reported in section 3. A brief discussion and next steps are outlined in section 4.
## 2 Materials and Methods
An open return wind tunnel was chosen because not needing a recirculation system results in a reduced overall size; an overall diagram is provided in Figure 1. The material for the wind tunnel body was a curable resin, processed by an Elegoo Saturn 2 MSLA 3D printer. This produces a layer height of 28.5 micron, making a smooth internal surface and producing gradual transitions in the wind tunnel for optimal flow conditions. This printer has a build volume of 219mm x 123mm x 250mm, which dictates the allowable size of each component. The ducted fans (JFtech QF1611(1311) - 14000KV) are 30 mm in diameter and have 6-bladed propellers (see Figure 2). Each fan has a maximum continuous current of 12A with a maximum thrust of 220g. Each fan was paired with a 30A electronic speed controller (ESC, from RC Electronic Parts) for brushless motors. To vary the speed of the ducted fans, an ESC consistency tester was used. This ESC consistency tester has an output signal width ranging from 800-2200\(\mu\)s, which controls the speed of the wind tunnel. The electronics are powered by a 1000W AC/DC converter (SE-1000-12, Mean Well USA Inc.) that has a maximum current output of 83.8A.
The fans were arranged in a hexagonal array, which is the arrangement with the closest possible packing (covering 90% of available space), as opposed to a more conventional rectangular array (which would cover 70% of available space). The hexagonal packing was therefore expected to provide a more uniform flow, reducing the need for a longer settling chamber. Butyl rubber was used to manage wiring inside of the fan housing to minimize interference with the flow. The fan housing is designed as a standalone structure that can be removed and altered independently of the rest of the wind tunnel, as shown in figure 2. The diagonal of the hexagon measures 4.4 in (such that one of the sides is 2.2 in long), implying a cross-sectional area of 12.57 in\({}^{2}\). (i.e. 81.13 cm\({}^{2}\)).
Figure 1: Overall wind tunnel schematic.
The next part of the wind tunnel is the flow conditioning section, which holds two one-inch-thick hexagonal honeycomb flow straighteners, with openings of nominal sizes 1/4" and 1/8" respectively, as shown in figures 3(a) and (b). The honeycomb was cut to size using a table saw to ensure clean cuts to prevent blockages from forming at the wind tunnel perimeter (although a hack saw could also be used). Prior work has shown that honeycomb cells provide the best performance for flow straightening if they have a length-to-diameter ratio between 7 and 10 [Nagib?]. This criterion was met by the second honeycomb cell (which had a ratio of approximately 8). The flow conditioning section was also 3D printed as an independent structure, to allow different flow straighteners to be used without a complete redesign of the system. This leads into the settling chamber where low speed tests can also be performed, shown in figure 4(a). Again this was 3D printed independently to allow for later modifications.
The contraction was printed to match the hexagonal pattern with a smooth transition to a 2" x 2" square high-speed test section (implying a cross-sectional area of 4 in\({}^{2}\), i.e. 25.81 cm\({}^{2}\)), shown in figure 4(b). This again was printed independently to allow for further modifications without affecting the wind tunnel's overall design. The contraction area ratio of 3.14 is relatively low, but provides sufficiently
Figure 3: Flow Conditioning Section. **(a)** initial honeycomb, **(b)** final honeycomb.
Figure 2: Fan Enclosure. **(a)** Fan inlet side. **(b)** Fan outlet side. The enclosure is 3D printed; the ducted fans are held in place with butyl rubber.
uniform flow (as shown below) while keeping an exceptionally small footprint for a given test section size, as discussed in Section 4.
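The quoted areas and contraction ratio, and the ideal speed-up they imply between the settling chamber and the test section, can be checked with a few lines; a minimal sketch assuming incompressible flow and uniform velocity profiles (the settling-chamber speed is an inferred value, not a measurement from the paper):

```python
import math

side_in = 2.2                                    # hexagon side length, in (half of the 4.4 in long diagonal)
A_hex = 3 * math.sqrt(3) / 2 * side_in**2        # regular hexagon area, ~12.57 in^2
A_test = 2.0 * 2.0                               # 2" x 2" test section, 4 in^2
ratio = A_hex / A_test                           # contraction area ratio, ~3.14
print(f"Settling area = {A_hex:.2f} in^2 ({A_hex*6.4516:.2f} cm^2), area ratio = {ratio:.2f}")

# By continuity (incompressible, uniform profiles): U_test = ratio * U_settling
U_test_max = 44.1                                # maximum measured test-section speed, m/s
print(f"Implied settling-chamber speed at the top setting ~ {U_test_max/ratio:.1f} m/s")
```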
Several experiments were performed in this model wind tunnel. First, we tested the uniformity of the flow in the high speed section. This was done using a translating platform that moves incrementally in the \(x\) and \(z\) directions. A streamlined arm was 3D printed to attach to the platform and to mount a pitot-static tube for velocity measurements. Two differential manometers were used. At lower speeds, a Dwyer 2002 Magnehelic Differential Pressure Gauge provided higher resolution, whereas at higher speeds we switched to a JH Gauge DPGJH20005X. The 2" x 2" high speed test section was split up into nine equal parts and the pitot tube was placed in the center of each part using the translating platform; the measurements were repeated at four speed settings, spanning the range between approximately 15 m/s and 45 m/s. In the next test, we measured flow uniformity in the low speed section of the wind tunnel. For this purpose, the contraction can be removed in seconds by loosening three mounting screws, as shown in figure 4(a). The low speed section was split up into 11 sampling locations because of its hexagonal shape.
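Velocities are obtained from the pitot-static readings via the incompressible Bernoulli relation \(U=\sqrt{2\Delta p/\rho}\); a minimal conversion sketch, assuming standard sea-level air density and illustrative manometer readings (the paper does not list the raw pressure values):

```python
import math

RHO_AIR = 1.225   # assumed air density, kg/m^3 (standard sea level)

def velocity_from_dp(dp_pa, rho=RHO_AIR):
    """Incompressible pitot-static relation: U = sqrt(2 * dp / rho)."""
    return math.sqrt(2.0 * dp_pa / rho)

# Illustrative differential pressures spanning the tested speed range (~15 to ~45 m/s)
for dp in (140, 560, 1250):          # Pa
    print(f"dp = {dp:4d} Pa  ->  U = {velocity_from_dp(dp):.1f} m/s")
```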
## 3 Results
### High speed section
As described above, air velocity data were obtained using a pitot-static probe. Once these data were collected, a percent difference calculation was performed to determine the deviation from the average flow velocity at each point. Results can be seen in figure 6. The greatest difference was observed in the lowest-speed data, in the center of the section, where readings were up to 3.4% below the average. One possible solution for this would be to add independent controllers to each motor. This would allow the user to vary the output until perfectly uniform flow is recorded. However, this would add complexity to the system and introduce an additional element requiring calibration, and was therefore not pursued here.
Figure 4: **(a)** Low Speed Test Section. **(b)** High Speed Test Section.
### Low speed section
For completeness, mean flow measurements were also taken in the low speed section. The resulting data can be seen in figure 7. The velocity here is much less uniform when compared to the high speed section. When analyzing the percent difference from the average velocity, the maximum observed difference is about 45 percent.
Figure 7: Flow uniformity in the low-speed section, determined by measuring velocity **(a–d)** and calculating percent difference from the average value **(e–f)**.
## 4 Discussion
A portable wind tunnel, with a footprint of 13.5" x 5.5", was built and tested. The wind tunnel used an array of seven hexagonally-packed ducted fans (as opposed to a single fan, as for conventional designs) to generate a relatively uniform flow. A moderate contraction area ratio of 3.14 was used. Flow in the test section exhibited a maximum spatial variation of 3.4%, measured over speeds ranging between 15 m/s and 45 m/s. The tunnel was fabricated using an SLA 3D printer, and ducted fans and electronic speed controllers originally intended for RC aircraft were used. This design and construction approach also allowed keeping the overall cost under $1,000. The resulting tunnel can be used for small-scale experiments, instrument calibration, as well as instruction and even rapid in-class demonstrations.
Next steps include developing a procedure to differentially adjust fans to induce more uniform flow, as well as to generate prescribed velocity profiles - such as thick boundary layers. This wind tunnel design can also be scaled up to construct wind and water tunnels with large test sections, which nevertheless have a much more compact overall footprint than conventional designs.
|
2302.04572 | Location and Type of Crimes in The Philippines: Insights for Crime
Prevention and Management | The purpose of this study was to determine the association of location and
types of crimes in the Philippines and understand the impact of COVID-19
lockdowns by comparing the crime incidence and associations before and during
the pandemic. A document review was used as the main method of data collection
using the datasets from the Philippine Statistics Authority- Annual Statistical
Yearbook (PSA-ASY). The dataset contained the volume of index crimes in the
Philippines from 2016 to 2020. The index crimes were broken down into two major
categories: crimes against persons and crimes against property. Incidence of
crime-by-crime type was available for different administrative regions in the
Philippines. Chi-square test and correlation plot of chi-square residual were
used to determine the associations between the locations and types of index
crimes. A correlation plot of the chi-square residual was used to investigate
the patterns of associations. Results suggest that the continuing effort of the
Philippine government to fight against criminality has resulted in a steady
decline in the incidence of index crimes in the Philippines. The pandemic too
contributed to the decline of crime incidence in the country. These results
imply that police surveillance activities in highly populated areas and
specific interventions to address sexual violence must be in place during
community lockdowns. The Philippine National Police should heighten its
campaign in violence against women and increase its workforce visibility
especially in remote and densely populated areas. The results of this study can
be used as input to local government units for developing programs and plans on
crime prevention. For future research, it is recommended to conduct a
precinct level analysis for a closer look at crime surveillance. | Liene Leikuma-Rimicane, Roel F. Ceballos, Milton Norman Medina | 2023-02-09T11:23:56Z | http://arxiv.org/abs/2302.04572v1 | ## Location and Type of Crimes in The Philippines: Insights for Crime Prevention and Management
###### Abstract
_The purpose of this study was to determine the association of location and types of crimes in the Philippines and understand the impact of COVID-19 lockdowns by comparing the crime incidence and associations before and during the pandemic. A document review was used as the main method of data collection using the datasets from the Philippine Statistics Authority- Annual Statistical Yearbook (PSA-ASY). The dataset contained the volume of index crimes in the Philippines from 2016 to 2020. The index crimes were broken down into two major categories: crimes against persons and crimes against property. Incidence of crime-by-crime type was available for different administrative regions in the Philippines. Chi-square test and correlation plot of chi-square residual were used to determine the associations between the locations and types of index crimes. A correlation plot of the chi-square residual was used to investigate the patterns of associations. Results suggest that the continuing effort of the Philippine government to fight against criminality has resulted in a steady decline in the incidence of index crimes in the Philippines. The pandemic too contributed to the decline of crime incidence in the country. These results imply that police surveillance activities in highly populated areas and specific interventions to address sexual violence must be in place during community lockdowns. The Philippine National Police should heighten its campaign in violence against women and increase its workforce visibility especially in remote and densely populated areas. The results of this study can be used as input to local government units for developing programs and plans on crime prevention. For future researches, it is recommended to conduct a precinct level analysis for a closer look at crime surveillance._
crime type, COVID-19, index crimes, locations, Philippines +
Footnote †: journal: Journal of Criminal Justice Sciences, Under a Creative Commons Attribution-NonCommercial-ShareAnke 4.0 International (CC-BY-NC-SA 4.0)
## Introduction
Increasing crime solution efficiency in any nation requires understanding of crime trends and their associations with specific locations, which will enhance the crime prevention and management of the police and other enforcement agencies. Crime prevention and management are at the forefront of the agenda of the Philippine government under the Duterte administration. One of the administration's goals is to improve the lives of Filipinos by aggressively reducing corruption and crime (Timberman, 2019). To cite a few of its strategies, the government has implemented the anti-narcotics campaign, better known as the 'War on Drugs,' and the continuous fight against criminality, resulting in operations against thousands of drug peddlers all over the country (Gita-Carlos, 2019).
One step taken by the Philippine government towards crime prevention and increased police visibility was the passing of a law modifying the base salary of military and uniformed personnel. This step was also taken to motivate existing personnel and encourage Filipinos to pursue careers in the Armed Forces of the Philippines. The salary adjustments resulted in a 72.18% increase for all ranks of uniformed personnel (Department of Budget and Management, 2018). As a result, police visibility has increased in different regions as the Philippine National Police (PNP) has also increased the number of police and police stations all over the country. The Philippine National Police have also reported an increase in their overall crime solution efficiency (Gita-Carlos, 2019).
Despite these improvements, the incidence of index crimes remains observable and has not been eliminated in any part of the country. As of February 2022, there were roughly 10 thousand cases of violation of special laws reported in the Philippines. On the other hand, reckless imprudence resulting in damage to property amounted to over seven thousand cases. Physical injury ranks 3rd with 3,753 cases, and reckless imprudence resulting in homicide is the least frequent with 372 cases (Statista, 2022). Patterns and incidence of crimes vary by type and location (Nivette et al., 2021). Hence, there should be a location-specific component in our collective crime prevention and management approach. Another component to investigate is the COVID-19 lockdowns: across the globe, mobility restrictions, collectively called lockdowns, have been implemented to combat the spread of COVID-19. Since crime is a social phenomenon, lockdowns have caused shifts in trends and patterns across different locations (Buil-Gil, Zeng, & Kemp, 2021).
Despite the government's effort in reducing the crime rate in the Philippines, there are still numerous recorded index crimes, that is, crimes against persons and property. Since the Philippines is an archipelagic country, it is very important to analyze the types of crimes that occur in specific islands or areas in the country to provide data for efficient police enforcement. Understanding crime trends and their associations with specific locations will enhance the crime prevention and management of the police force. This will lead to an increase in crime solution efficiency. It will also serve as a guide to regional government executives in identifying which crimes to prioritize in their short-term and long-term plans.
Increasing crime solution efficiency requires understanding crime trends and their associations to specific locations, which will enhance the crime prevention and management of the police force. Hence, this study was designed with the objective to examine the patterns of index crimes per location as input to the development plans
of the different Local Government Units (LGUs) in the Philippines. To achieve this objective, this study aimed to analyze the association of location and types of index crimes in the Philippines and understand the impact of COVID-19 lockdowns by comparing the crime incidence and associations before and during the pandemic. Specifically, this study aimed to determine the predominant index crimes before and during the pandemic across different regions in the Philippines.
#### Literature Review
Crime rates vary greatly from country to country; for example, in 2022, Venezuela had the highest crime rate at 83.58%, followed by Papua New Guinea (81.19%) and South Africa (77.01%) as the top three in the world, while the Philippines ranked 80th with a crime index of 42.33% (Balmori de la Miyar, Hoehn-Velasco, & Silverio-Murillo, 2021). Some of the world's lowest crime rates are seen in Switzerland, Denmark, Norway, Japan, and New Zealand (Numbeo, n.d.). There are several reasons for this, which include but are not limited to poverty (Chiricos, 1987), unemployment (Fowles & Merva, 1996), and income inequality (Blau & Blau, 1982). The effects of poverty and unemployment are not surprising, especially in the Philippines.
The Philippines is one of the 11 Southeast Asian nations and one of the most highly populated countries, with approximately 110 million people, second only to Indonesia in the region and 13th in the world (www.worldometers.info). The poverty rate of the Philippines has declined significantly, from 21.6% in 2015 to 16.6% in 2021, alongside a rapid decline in the crime rate (a 14% drop) in the country since 2017 under the Duterte Administration. Without a doubt, the campaign against poverty has significantly reduced the crime rate in the country. Moreover, to continue this momentum, the Philippine government has implemented countermeasures against various index crimes in the country.
Interestingly, crime prevention policies have been incorporated in the national economic development plans of the Philippines. The Medium-Term Philippine Development Plan embodies, as one of its policy frameworks, the improvement of law and order, law enforcement, and the administration of justice. It emphasizes the government's role in guaranteeing public safety and national security, while ensuring that the rule of law prevails. Thus, ensuring peace and order rests primarily on the ability of the government to curb criminal activities. In this regard, it is vital to strengthen the criminal justice system. Hence, research on the trends of index crimes and their locations provides important data that will support the criminal justice system in the Philippines, especially in the regional and local contexts (Lusthaus et al., 1999).
The occurrence of COVID-19 pandemic shifted the global trend of index crimes (Meyer, Prescott, & Sheng, 2022). It was alarming to find that domestic violence increased during the pandemic (Nivette et al., 2021). In fact, in 2020, various media sources in the United States reported an increase of homicide cases (Asher & Horwitz, 2020; Hilsenrath, 2021; McCarthy, 2020; Struct, 2020). Meanwhile, with the limited opportunities of the criminals due to community lockdowns, other crimes, such as burglary and robbery, were reported to have decreased following the start of the COVID-19 pandemic in the United States (Boman & Gallupe, 2020). Despite there being numerous studies on crime during COVID-19 (Nivette et al., 2021), global research on the association of location and types of index crimes is very limited.
**Methodology**
* _Research Framework_ Crimes in the Philippines are reported based on the location where the crime occurs and the specific crime classification or type. Hence, it is not surprising that crime incidence, as a metric monitored by concerned government agencies, is described based on these characteristics. It is also imperative that crime prevention and management programs of the government will be greatly influenced by the insights produced using this information. Many studies found in literature emphasize that understanding crimes as an input to crime prevention and management requires a thorough investigation between the association of types and locations of crimes (Irvin-Erickson & La Vigne, 2015; Leong & Sung, 2015; Newton & Felson, 2015; Zhou et al., 2021). The framework of the study is provided in Figure 1.
* _Research Design and Data collection_ The study used a retrospective quantitative design by utilizing records of crime incidence in the Philippines. The dataset used in this study was obtained from the publication of the Philippine Statistics Authority (PSA), specifically in Chapter 17, which is the Public Order, Safety and Justice Statistics of their Annual Statistical Yearbook. The data was open and free for public use as stipulated in the PSA publication. The dataset contained the volume of index crimes in the Philippines from 2016 to 2020. The index crimes were broken down into two major categories, crimes against persons and crimes against property. Crimes against persons include murder, homicide, rape, and physical injury. On the other hand, crimes against property include theft, robbery, car-napping, and cattle rustling. Incidence of crime by crime type was available for the different administrative regions in the Philippines, namely, National Capital Region (NCR); Cordillera Administrative Region (CAR); Ilocos Region; Cagayan Valley; Central Luzon, Cavite, Laguna, Batangas, Rizal, and Quezon (CALABARZON), among various others.
* _Statistical Analysis_ All categorical variables were presented as numbers (n) and percentages (%). A Chi-square test and correlation plot of chi-square residual was used to determine the associations between the locations and types of index crimes. A correlation plot or dot plot of the chi-square residual was used to investigate the patterns of associations. It is interpreted based on the size and color of dots. Dots are proportional to the magnitude of association. If there is a strong positive association, the dot will appear large dark blue, while a strong negative association will appear to be large dark red. Furthermore, bar charts were used to display the annual rate of change for rape incidence. A two-tailed p-value of < 0.05 was considered statistically significant for all tests. Statistical analysis was carried out using the R Programming language version 4.1.3.
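The analysis described above was carried out in R; purely as an illustration of the same computation, a minimal Python sketch of the chi-square test of independence and the Pearson residuals that the correlation plot visualizes (the table below is a made-up toy example, not the PSA data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy contingency table: rows = regions, columns = index crime types (illustrative counts only)
observed = np.array([
    [120,  80,  40,  30],
    [ 60, 150,  35,  20],
    [ 45,  50, 110,  25],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.3g}")

# Pearson (chi-square) residuals: (observed - expected) / sqrt(expected).
# Large positive residuals mark region / crime-type pairs occurring more often than independence predicts.
residuals = (observed - expected) / np.sqrt(expected)
print(np.round(residuals, 2))
```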
Figure 1: Framework of the Study
## Results
* _Volume of Index Crimes_ Table 1 shows the incidence of index crimes from 2016 to 2020 broken down into two major categories: crimes against persons and crimes against property. There has been a steady annual decline of 15% to 28% in the volume of crimes against persons during this period. On the other hand, a steady annual decline of at least 11% and as much as 49% has been observed in the volume of crimes against property. The annual decline rate ranges from 16% to 40% in the overall volume of index crimes.
The declining volume of index crimes is evidence that the government's continuing efforts to fight criminality are working. Furthermore, the ability of the police force to solve cases has resulted in a crime solution efficiency rating of 78.62 percent. These marked improvements in the overall crime picture translate to a better security outlook among the people and add to upbeat investor confidence that spurs economic growth despite the ongoing health crisis due to the pandemic (Benter and Cawi, 2021; Galabin, Pallega, and Recapente, 2021; Mark and Sarcena, 2021). In addition, it is notable that in 2020, during which the whole country was placed in lockdowns due to COVID-19, there was around a 49% decline in the incidence of crimes against property while only a 28% decline in crimes against persons (Interpol, 2020; Payne, Morgan, and Piquero, 2021).
The United Nations Office on Drugs and Crime (UNODC) noted in its 2020 research report that crimes against property such as robbery and theft decreased by 50 percent in many countries, particularly those with stricter lockdowns. Furthermore, crimes against the person, such as homicide, declined by as much as 25 percent during lockdown periods; however, this was short-lived, since there is a noticeable increase in homicide rates when lockdowns are lifted (Chainey and Muggah, 2022). The Philippines was placed in a series of lockdowns in 2020, explaining the considerable decline in index crimes.
* _Index Crimes by location_ The distribution of the annual volume of index crimes according to the different administrative regions in the Philippines is presented in Table 2. The bulk of crime incidence is observed in the National Capital Region (16% to 18%), followed by Central Visayas (11% to 15%) and Western Visayas (6% to 10%) from 2016 to 2020. The Bangsamoro Autonomous Region in Muslim Mindanao (BARMM), Cordillera Administrative Region (CAR), Caraga, and MIMAROPA regions reported a low incidence of index crimes.
Several studies have suggested that population density may explain crime incidence. There are higher opportunities for crime in places where the population density is high (Harries, 2006), particularly in contexts where human crowding influences aggression and hostility (Kvalseth, 1977; Regoeczi, 2003). The National Capital Region is the most densely populated in the Philippines, with a population density of 21,765 persons per square kilometer. The average national population density per square kilometer is only 363 persons. The Cordillera Administrative Region, MIMAROPA, and BARMM have the lowest population densities of 91, 109, and 120 persons per square kilometer.
Furthermore, there are notable annual decreases in the volume of index crimes in the different administrative regions of the Philippines from 2016 to 2020. The average rate of decline ranges from 12% to as much as 39% across the administrative regions. The Cordillera Administrative Region, Western Visayas, and Northern Mindanao tallied a decline rate of at least 30%. Central Visayas and Eastern Visayas recorded the lowest decline rates of 12% and 14%, respectively. The government's success in reducing crime incidence in the country is attributed to many factors, mainly effective government policies and programs.
Crime incidence continued to drop in 2020 during the pandemic. Through the recommendation of the Inter-Agency Task Force against COVID-19 (IATF), which oversees the overall management of COVID-19 in the country, the government created a categorization that serves as the general guide for the level of mobility restrictions in different regions and localities. The limitations on human mobility during the pandemic resulted in a significant decline in crime incidence in the different administrative regions of the Philippines. Sixteen (16) administrative regions had at least a 30% rate of annual decline in the crime index during the pandemic, while before the pandemic only three (3) administrative regions had achieved the said rate.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{5}{c}{Annual Volume of Index Crimes (\%)} & \multicolumn{2}{c}{Average Annual Rate of Decline} \\ \cline{2-6} \cline{7-8} Administrative Regions & **2016** & **2017** & **2018** & **2019** & **2020** & 2016-2019 & 2019-2020 \\ \hline
NCR & 21681(16\%) & 17788(17\%) & 14550(18\%) & 12313(18\%) & 7120(17\%) & -17\% & -42\% \\
CAR & 3990(3\%) & 1674(2\%) & 1159(1\%) & 950(1\%) & 579(1\%) & -36\% & -39\% \\
I - Ilocos Region & 4867(3\%) & 3712(3\%) & 2910(4\%) & 2402(4\%) & 1481(4\%) & -21\% & -28\% \\
II - Cagayan Valley & 3663(3\%) & 2716(3\%) & 2389(3\%) & 1967(3\%) & 1258(3\%) & -19\% & -30\% \\
III - Central Luzon & 10713(8\%) & 8640(8\%) & 6688(9\%) & 5836(3\%) & 3659(9\%) & -18\% & -39\% \\
IV-A CALABARZON & 13070(9\%) & 11462(11\%) & 8312(10\%) & 7204(11\%) & 4520(11\%) & -18\% & -37\% \\
IV-B MIMAROPA & 2145(2\%) & 1718(2\%) & 1427(2\%) & 1244(2\%) & 831(2\%) & -17\% & -23\% \\
V - Bicol Region & 9614(7\%) & 7226(7\%) & 4934(9\%) & 4155(9\%) & 2466(6\%) & -24\% & -41\% \\
VI - Western Visayas & 14077(10\%) & 11053(10\%) & 5030(9\%) & 3960(9\%) & 2429(6\%) & -32\% & -39\% \\
VII - Central Visayas & 17266(12\%) & 12295(11\%) & 13637(17\%) & 11213(16\%) & 6349(15\%) & -12\% & -43\% \\
VIII - Eastern Visayas & 3842(3\%) & 3022(3\%) & 2894(4\%) & 2407(4\%) & 1543(4\%) & -14\% & -30\% \\
IX - Zamboanga Peninsula & 6561(5\%) & 4936(5\%) & 4213(5\%) & 3569(5\%) & 2114(5\%) & -16\% & -41\% \\
X - Northern Mindanao & 8838(6\%) & 6855(5\%) & 3266(4\%) & 2703(4\%) & 1681(4\%) & -31\% & -38\% \\
XI - Davao Region & 6465(5\%) & 5620(5\%) & 3791(5\%) & 3240(5\%) & 2052(5\%) & -20\% & -37\% \\
XII - SOCCSKSARGEN & 8117(5\%) & 6290(5\%) & 3331(4\%) & 2764(4\%) & 172(2\%) & -29\% & -38\% \\
XIII - Caraga & 2595(2\%) & 1800(2\%) & 1856(2\%) & 1547(2\%) & 980(2\%) & -15\% & -35\% \\
BARMM & 2070(1\%) & 1228(1\%) & 926(1\%) & 740(1\%) & 528(1\%) & -28\% & -29\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Rate of Decline of Annual Volume of Index Crimes during the pandemic.
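The text does not spell out how the two rate columns are computed; a minimal sketch, assuming the pre-pandemic figure is the mean of the year-over-year changes over 2016–2019 and the pandemic figure is the single 2019-to-2020 change, reproduces the NCR entries of Table 2 (-17% and -42%) from the volumes listed above.

```python
# Minimal sketch (assumed definition): average annual rate of decline from the
# yearly index-crime volumes of one region (NCR row of Table 2).
ncr = {"2016": 21681, "2017": 17788, "2018": 14550, "2019": 12313, "2020": 7120}

volumes = [ncr[y] for y in sorted(ncr)]

# Pre-pandemic column: mean of the year-over-year changes 2016->2017, ..., 2018->2019.
pre_changes = [(volumes[i + 1] - volumes[i]) / volumes[i] for i in range(3)]
pre_pandemic_rate = sum(pre_changes) / len(pre_changes)

# Pandemic column: the 2019 -> 2020 change.
pandemic_rate = (volumes[4] - volumes[3]) / volumes[3]

print(f"pre-pandemic average annual change: {pre_pandemic_rate:+.0%}")  # about -17%
print(f"change during 2020: {pandemic_rate:+.0%}")                      # about -42%
```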
* _Associations of Location and Type of Index Crimes_ There is a significant association between the location and type of index crime in the Philippines before the pandemic, from 2016 to 2019 (Figure 2, all p-values\(<\)0.001), suggesting that crime incidence varies across locations and certain types of crimes do not randomly occur in different locations. In addition, Figure 2 shows the correlation plot between the type of index crimes and the location in which they were committed before the pandemic. Our results revealed that murder is highly associated with BARMM and moderately associated with Caraga, CALABARZON, and Davao Region. Furthermore, homicide is highly associated with SOCCSKSARGEN, while physical injury is found to have strong associations with Western Visayas and Cagayan Valley. Rape is found to have strong associations with four locations, namely, Ilocos, Central Luzon, CALABARZON, and MIMAROPA. Robbery is associated with NCR, while theft is associated with NCR and Central Visayas. Car-napping is associated with Central Luzon, CALABARZON, and SOCCSKSARGEN, while cattle rustling is associated with Ilocos Region (Table 3).
Figure 2: Correlation plot between location and type of index crimes.
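The statistic behind the reported p-values is not named in this excerpt; a common choice for testing the association between two categorical variables (administrative region versus crime type) is Pearson's chi-square test of independence. The sketch below uses a small table of hypothetical counts, not the study's data, and the standardized residuals play the role of the cells highlighted in the correlation plots.

```python
# Minimal sketch of an association test between location and crime type.
# The counts are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: regions; columns: crime types (e.g. murder, rape, theft).
table = np.array([
    [120,  45, 300],   # hypothetical Region A
    [ 60, 150, 220],   # hypothetical Region B
    [ 30,  25, 400],   # hypothetical Region C
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")

# Standardized residuals indicate which region/crime cells drive the association.
residuals = (table - expected) / np.sqrt(expected)
print(residuals.round(2))
```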
A similar analysis was done on the crime incidence in 2020, during which the pandemic hit the whole country and it was placed under lockdowns for several periods (Joaquin & Biana, 2021). There is a significant association between location and type of index crimes, which implies that the incidence of different crime types during lockdowns varies across locations. Figure 3 shows the correlation plot between the type and location of index crimes during the pandemic. Murder is found to have strong associations with BARMM, Caraga, CALABARZON, and Davao Region, while homicide has a strong association with SOCCSKSARGEN. Physical injury is strongly associated with Western Visayas, while robbery is associated with NCR and Central Visayas. Furthermore, rape is associated with ten locations, namely, CAR, Ilocos, Cagayan Valley, Central Luzon, CALABARZON, MIMAROPA, Eastern Visayas, Northern Mindanao, Davao, and CARAGA.
Figure 3 also shows the average rate of change for rape incidence from 2016 to 2020 in different locations in the Philippines. This plot shows that there is indeed an increase in the number of rape incidents in these regions, namely, CAR, Ilocos, Cagayan Valley, Central Luzon, CALABARZON, MIMAROPA, Eastern Visayas, Northern Mindanao, Davao, and CARAGA. On the other hand, rape incidence has declined over the years in the following locations: BARMM, SOCCSKSARGEN, Central Visayas, Western Visayas, Bicol Region, and NCR.
It is notable that the pattern of associations between location and type of index crimes before the pandemic and during the period of the pandemic did not change much for most types of index crimes, namely, murder, homicide, robbery, physical injury, theft, car-napping, and cattle rustling (Table 3). Although a consistent decline is observed, the incidence pattern remains the same for these crime types. On the contrary, a different observation is found in the associations between rape and the locations where such crime is committed. Rape is associated with ten (10) locations during the pandemic, while it is only associated with four (4) locations before the pandemic. Rape, therefore, had become a significant crime during the pandemic in many regions of the Philippines. Reports from other countries also stated an increase in sexually related violence during the pandemic. French police reported a nationwide spike of about 30%, while there was an 18% increase in domestic violence and sexual assaults in Spain. The National Alliance to End Sexual Violence has observed an increase of around 40% in rape cases in most of the rape crisis centers that they have surveyed around the globe.
Figure 3: (a) Correlation plot between location and type of index crimes; (b) annual rate of change for rape from 2016 to 2020.
## Discussion
Crime prevention and management are at the forefront of the agenda of the Philippine government under the Duterte administration. Its war-on-drugs campaign is aimed at reducing criminality and uplifting the lives of the Filipino people. Our study revealed that the continuing efforts of the government to fight criminality have resulted in increased crime solution efficiency of the police force. The government's success in reducing crime incidence in the country is attributed to many factors, chief among them effective government policies and programs. Similar studies have shown that effective government programs significantly reduce crime incidence in different localities (Arvate et al., 2018; Lilley and Boba, 2009; Maguire, Hardy, and Lawrence, 2004).
Crime incidence continued to drop in 2020 during the pandemic. Through the recommendation of the Inter-Agency Task Force (IATF) against COVID-19, which oversees the overall management of COVID-19 in the country, the government created a categorization that serves as the general guide for the level of mobility restrictions in different regions and localities. The restrictions on human mobility during the pandemic resulted in a significant decline in crime incidence across the administrative regions. Furthermore, the COVID-19 lockdowns contributed to the decline of crime incidence in the country. Boman and Mowen (2021) noted that global crime trends have declined during the pandemic. According to them, this is expected since, during mobility restrictions, opportunities for crimes such as robbery, theft, and road violence also decrease.
\begin{table}
\begin{tabular}{l l l} \hline
**Index Crimes** & **Pre-pandemic** & **During pandemic** \\ \hline
Murder & BARMM, Caraga, CALABARZON, Davao Region (4) & BARMM, Caraga, CALABARZON, Davao Region (4) \\
Homicide & SOCCSKSARGEN (1) & SOCCSKSARGEN (1) \\
Physical Injury & Western Visayas and Cagayan Valley (2) & Western Visayas and Cagayan Valley (2) \\
Rape & Ilocos, Central Luzon, CALABARZON, and MIMAROPA (4) & CAR, Ilocos, Cagayan Valley, Central Luzon, CALABARZON, MIMAROPA, Eastern Visayas, Northern Mindanao, Davao, and CARAGA (10) \\
Robbery & NCR (1) & NCR (1) \\
Theft & NCR and Central Visayas (2) & NCR and Central Visayas (2) \\
Carnapping & Central Luzon, CALABARZON, and SOCCSKSARGEN (4) & Central Luzon, CALABARZON, and SOCCSKSARGEN (4) \\
Cattle Rustling & Ilocos (1) & Ilocos (1) \\ \hline
\end{tabular}
\end{table}
Table 3: List of locations associated with specific types of index crimes.
The significant association between type and location of index crimes in the Philippines suggests that crimes vary across locations and that the occurrence of crimes is not random. Rape became predominant during the pandemic in ten (10) regions. The findings are similar to many studies wherein crimes against women such as rape, domestic violence, and sexual abuse became prominent during COVID-19 (Rapee et al., 2022; Rockowitz et al., 2021; Sifat, 2020). Consistent with these studies, our results show that rape had become a significant crime during the pandemic in many regions of the Philippines. Reports from other countries also noted an increase in sexually related violence during the pandemic: French police reported a nationwide spike of about 30%, while there was an 18% increase in domestic violence and sexual assaults in Spain, and the National Alliance to End Sexual Violence observed an increase of around 40% in rape cases in most of the rape crisis centers that were surveyed around the globe. Furthermore, the pattern of associations between location and type of index crimes before and during the pandemic did not change much for most crime types, namely, murder, homicide, robbery, physical injury, theft, car-napping, and cattle rustling (Muldoon et al., 2021; Walker, 2020).
## Conclusion
The continuing effort of the government to fight against criminality has resulted in a steady decline in the incidence of index crimes in the Philippines. The pandemic has also contributed to the decline of crime incidence in the country. In terms of location, the incidence of index crimes is high in densely populated areas. Hence, it is recommended to increase police presence and surveillance activities in highly populated areas. Furthermore, our analysis revealed that crime type is indeed associated with the location. Crime types that should be prioritized by location are identified in this study. Hence, regional government executives can use our results as input to policy and programs aimed at crime prevention and management in their localities. Moreover, rape had become a severe issue in many locations during the pandemic. Thus, there must be a specific intervention to address sexual violence during community lockdowns.
## Acknowledgement
This study was developed with ESF Project No. 8.2.2.0/20/1/003 "Strengthening of Professional Competence of Daugavpils University Academic Personnel of Strategic Specialization Branches 3rd Call". The authors thank Analyn Cabras, director of the Coleoptera Research Center of the University of Mindanao for the support and cooperation.
|
2308.11642 | Gesture Recognition based on Long-Short Term Memory Cells using
Smartphone IMUs | Over the last few decades, Smartphone technology has seen significant
improvements. Enhancements specific to built-in Inertial Measurement Units
(IMUs) and other dedicated sensors of the smartphones(which are often available
as default) such as- Accelerometer, Gyroscope, Magnetometer, Fingerprint
reader, Proximity and Ambient light sensors have made devices smarter and the
interaction seamless. Gesture recognition using these smart phones have been
experimented with many techniques. In this solution, a Recurrent Neural Network
(RNN) approach, LSTM (Long-Short Term Memory Cells) has been used to classify
ten different gestures based on data from Accelerometer and Gyroscope.
Selection of sensor data (Accelerometer and Gyroscope) was based on the ones
that provided maximum information regarding the movement and orientation of the
phone. Various models were experimented in this project, the results of which
are presented in the later sections. Furthermore, the properties and
characteristics of the collected data were studied and a set of improvements
have been suggested in the future work section. | Yuvaraj Govindarajulu, Raja Rajeshwari Raj Kumar | 2023-08-16T15:37:27Z | http://arxiv.org/abs/2308.11642v1 | # Gesture Recognition based on Long-Short Term Memory Cells (LSTM) using Smartphone IMUs
###### Abstract
Over the last few decades, smartphone technology has seen significant improvements. Enhancements specific to built-in Inertial Measurement Units (IMUs) and other dedicated sensors of smartphones (which are often available by default), such as the accelerometer, gyroscope, magnetometer, fingerprint reader, proximity and ambient light sensors, have made devices smarter and the interaction seamless. Gesture recognition using these smartphones has been experimented with using many techniques. In this solution, a Recurrent Neural Network (RNN) approach, LSTM (Long-Short Term Memory Cells), has been used to classify ten different gestures based on data from the accelerometer and gyroscope. The selection of sensor data (accelerometer and gyroscope) was based on the ones that provided maximum information regarding the movement and orientation of the phone. Various models were experimented with in this project, the results of which are presented in the later sections. Furthermore, the properties and characteristics of the collected data were studied and a set of improvements has been suggested in the future work section.
Smartphone Motion Sensors; Gesture Recognition; Recurrent Neural Networks; LSTM; Smart phone IMUs +
Footnote †: 2018: This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of developmental interactive systems (FS’18).
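Only the abstract of this record is available here; as a concrete illustration of the approach it describes (an LSTM classifying ten gestures from accelerometer and gyroscope streams), a minimal PyTorch sketch is given below. The window length, hidden size and other hyperparameters are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch (assumed sizes): LSTM gesture classifier on accelerometer + gyroscope windows.
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden_size=64, n_classes=10):
        super().__init__()
        # Input: (batch, time, channels) windows of 3-axis accelerometer + 3-axis gyroscope samples.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)        # h_n: (num_layers, batch, hidden)
        return self.classifier(h_n[-1])   # logits for the ten gesture classes

model = GestureLSTM()
dummy = torch.randn(8, 128, 6)            # batch of 8 windows, 128 time steps, 6 channels
loss = nn.CrossEntropyLoss()(model(dummy), torch.randint(0, 10, (8,)))
loss.backward()                           # one illustrative backward pass (no optimizer shown)
print(loss.item())
```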
|
2301.03995 | Efficiently unquenching QCD+QED at O($α$) | We outline a strategy to efficiently include the electromagnetic interactions
of the sea quarks in QCD+QED. When computing iso-spin breaking corrections to
hadronic quantities at leading order in the electromagnetic coupling, the
sea-quark charges result in quark-line disconnected diagrams which are
challenging to compute precisely. An analysis of the variance of stochastic
estimators for the relevant traces of quark propagators helps us to improve the
situation for certain flavour combinations and space-time decompositions. We
present preliminary numerical results for the variances of the corresponding
contributions using an ensemble of $N_\mathrm{f}=2+1$ domain-wall fermions
generated by the RBC/UKQCD collaboration. | Tim Harris, Vera Gülpers, Antonin Portelli, James Richings | 2023-01-10T14:40:44Z | http://arxiv.org/abs/2301.03995v1 | # Efficiently unquenching QCD+QED at O(\(\alpha\))
###### Abstract:
We outline a strategy to efficiently include the electromagnetic interactions of the sea quarks in QCD+QED. When computing iso-spin breaking corrections to hadronic quantities at leading order in the electromagnetic coupling, the sea-quark charges result in quark-line disconnected diagrams which are challenging to compute precisely. An analysis of the variance of stochastic estimators for the relevant traces of quark propagators helps us to improve the situation for certain flavour combinations and space-time decompositions. We present preliminary numerical results for the variances of the corresponding contributions using an ensemble of \(N_{\rm f}=2+1\) domain-wall fermions generated by the RBC/UKQCD collaboration.
## 1 Introduction
Several lattice QCD predictions which form important input for precision tests of the Standard Model have uncertainties at or below the 1% level, for example the HVP contribution to \((g-2)_{\mu}\), \(f_{K}/f_{\pi}\), \(g_{\rm A}\) or the Wilson flow scale \(\sqrt{t_{0}}\) to name a few [1, 2]. However, to further improve such predictions, QCD with iso-spin symmetry is not a sufficiently accurate effective description of the low-energy dynamics and QED, which contributes one source of iso-spin breaking due to the different up- and down-quark electric charges, must be included. Recent efforts have been successful at including iso-spin breaking corrections, and some of which fully account for the effects of the sea-quark electric charges [3, 4, 5, 6, 7]. Nevertheless, many computations of iso-spin breaking effects still neglect to incorporate these dynamical effects in an approximation known as electroquenching. As the FLAG report notes in Section 3.1.2[2], computations using the electroquenched approximation might feature an uncontrolled systematic error.
In this work we aim to include the effects of the electric charge of the sea quarks in the perturbative method known as the RM123 approach. This amounts to computing at least two additional Wick contractions. In order to sum the vertices in the resulting diagrams over the lattice volume, some approximations must be used which often introduce additional fluctuations, for example due to the auxiliary fields of a stochastic estimator. Here we investigate some simple decompositions which may avoid large contributions to the variance, so that sufficiently precise results can be obtained to systematically include all sources of iso-spin breaking without incurring a large computational cost.
## 2 Sea-quark effects in the RM123 method
Due to the smallness of the fine-structure constant \(\alpha\sim 1/137\) and the renormalized light-quark mass difference \((m_{\rm u}^{\rm R}-m_{\rm d}^{\rm R})/\Lambda\sim 1\%\), it is natural to expand physical observables (i.e. in QCD+QED) in these parameters to compute iso-spin breaking corrections, as was first outlined in Refs. [8, 9]. In the resulting expansion of an observable \(O\)
\[\langle O\rangle=\langle O\rangle\Big{|}_{e=0}+\tfrac{1}{2}e^{2}\Big{[}\frac{ \partial}{\partial e}\frac{\partial}{\partial e}\langle O\rangle\Big{]}_{e=0}+\ldots \tag{1}\]
the leading corrections in the electric charge \(e=\sqrt{4\pi\alpha}\) are parameterized in terms of the correlation function
\[\frac{\partial}{\partial e}\frac{\partial}{\partial e}\langle O\rangle=(-{ \rm i})^{2}\int\,{\rm d}^{4}x\,\int\,{\rm d}^{4}y\,\langle J_{\mu}(x)A_{\mu}(x )J_{\nu}(y)A_{\nu}(y)O\rangle_{\rm c} \tag{2}\]
where the electromagnetic current for \({\rm u},{\rm d},{\rm s}\) quark flavours is defined
\[J_{\mu}=\sum_{f={\rm u},{\rm d},{\rm s}}Q_{f}\bar{\psi}_{f}\,\gamma_{\mu}\psi_ {f}\,,\qquad Q_{\rm u}=\tfrac{2}{3},\quad Q_{\rm d}=Q_{\rm s}=-\tfrac{1}{3}. \tag{3}\]
By choosing the expansion point to be a theory with \(\alpha=0\) and iso-spin symmetry \(m_{\rm u}=m_{\rm d}\), only correlation functions in the \(N_{\rm f}=2+1\) theory need to be evaluated, which we denote with \(e=0\) in Eq. (1). The precise definition of such a theory using an additional set of renormalization conditions is necessary to fix the meaning of the leading-order term on the right-hand side (and
conversely the iso-spin breaking corrections themselves). Otherwise the predictions of QCD+QED are unambiguously defined, up to its intrinsic accuracy, by fixing \(N_{\rm f}\) quark masses and the QCD coupling as the electric coupling does not renormalize at this order. In the above, the ellipsis stands for the mass counterterms which are needed to make physical predictions due to the contribution to the quark self-energy induced by QED.
After integrating out the fermion and photon fields, the resulting Wick contractions \(W_{i}\) are shown in Fig. 1, which contribute to the derivative with respect to the electric charge through the connected correlation function
\[\frac{\partial}{\partial e}\frac{\partial}{\partial e}\langle O \rangle=\sum_{i=1}^{4}\langle OW_{i}\rangle_{\rm c}. \tag{4}\]
The first two subdiagrams, which arise solely from the electric charges of the sea quarks, can be expressed in terms of a convolution with the photon propagator (in some fixed gauge) \(G_{\mu\nu}(x)=\langle A_{\mu}(x)A_{\nu}(0)\rangle\)
\[W_{1,2}=-a^{8}\sum_{x,y}H_{1,2}^{\mu\nu}(x,y)G_{\mu\nu}(x-y), \tag{5}\]
where \(H_{1,2}\) are the traces of quark propagators \(S_{f}\left(x,y\right)=\langle\psi_{f}\left(x\right)\bar{\psi}_{f}\left(y\right)\rangle\)
\[H_{1}^{\mu\nu}(x,y)=\sum_{f,g}Q_{f}Q_{g}\;{\rm tr}\{\gamma_{\mu}S_{f}\left(x,x\right)\}\;{\rm tr}\{\gamma_{\nu}S_{g}\left(y,y\right)\}, \tag{6}\] \[H_{2}^{\mu\nu}(x,y)=-\sum_{f}Q_{f}^{2}\;{\rm tr}\{\gamma_{\mu}S_{f}\left(x,y\right)\gamma_{\nu}S_{f}\left(y,x\right)\}. \tag{7}\]
These two diagrams are the main subject of these proceedings, and the techniques advocated for the first can be effectively reused for the third diagram, \(W_{3}\). In the following sections we introduce stochastic estimators only for the quark lines and compute the subdiagrams by convoluting with the exact photon propagator which avoids introducing additional stochastic fields for the U(1) gauge potential. The final diagram \(W_{4}\), which only contributes if the observable \(O\) depends explicitly on the (charged) fermion fields, is the only one surviving the electroquenched approximation, and, can in most cases be computed efficiently provided that the leading-order diagram is already under control.
Figure 1: Wick contractions which appear at leading order in the expansion of a hadronic observable \(O\) in the electromagnetic coupling. Each closed fermion line has contributions from all of the quark flavours \({\rm u,d,s,\ldots}\) with the appropriate charge factors.
We note that the variance of the contributions to the connected correlation functions on the r.h.s. of Eq. (4) crudely factorizes
\[\sigma^{2}_{OW_{1,2}} \approx\langle O\rangle_{\rm c}^{2}\langle W_{1,2}\rangle_{\rm c}^{ 2}+\langle OW_{1,2}\rangle_{\rm c} \tag{8}\] \[\approx\sigma^{2}_{O}\sigma^{2}_{W_{1,2}}, \tag{9}\]
where in the first line we have made the Gaussian approximation, and in the second line we have assumed that the fluctuations are much larger than the signal \(\langle OW_{1,2}\rangle_{\rm c}\). Thus, in the following sections we will analyse the variance of individual subdiagrams \(W_{1,2}\) in order to gain a rough insight into the fluctuations of the total correction, in a similar fashion to the analysis of Ref. [10]. In that case, however, the correction to the factorization of the variance is exponentially suppressed in the separation between the vertices of the subdiagrams.
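As a quick numerical sanity check of the last step (for independent, zero-mean Gaussian fluctuations the variance of the product is the product of the variances), a toy Monte Carlo estimate is sketched below; \(O\) and \(W\) are simply independent Gaussian random numbers here, not lattice observables.

```python
# Toy check of the factorization used above: for independent zero-mean Gaussian
# variables O and W, Var(O*W) = Var(O) * Var(W).
import numpy as np

rng = np.random.default_rng(0)
sigma_O, sigma_W = 0.7, 2.5
O = rng.normal(0.0, sigma_O, size=1_000_000)
W = rng.normal(0.0, sigma_W, size=1_000_000)

print(np.var(O * W))            # Monte Carlo estimate
print(sigma_O**2 * sigma_W**2)  # analytic product of the variances
```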
## 3 Quark-line disconnected subdiagram \(W_{1}\)
We begin by noting that the hadronic part of the diagram factorizes into two traces,
\[H_{1}^{\mu\nu}(x,y)=T_{\mu}(x)T_{\nu}(y), \tag{10}\]
each of which, with the current defined in Eq. (3) and in the \(N_{\rm f}=2+1\) theory with iso-spin symmetry, is the difference of the light- and strange-quark propagators
\[T_{\mu}(x)=\tfrac{1}{3}\operatorname{tr}\{\gamma_{\mu}[S_{\rm ud}(x,x)-S_{\rm s }(x,x)]\}. \tag{11}\]
It is convenient to rewrite this difference as a product [10]
\[S_{\rm ud}-S_{\rm s}=(m_{\rm s}-m_{\rm ud})S_{\rm ud}S_{\rm s} \tag{12}\]
which makes the explicit suppression of \(T_{\mu}\) in the \(\mathrm{SU}(3)\)-symmetry breaking parameter \(m_{\rm s}-m_{\rm ud}\) explicit. This additionally results in a suppression of the variance of \(W_{1}\) by \((m_{\rm s}-m_{\rm ud})^{4}\). This suppression results in a cancellation of a quartic short-distance divergence in the variance of the contribution of each individual flavour to \(W_{1}\), explaining this favourable flavour combination.
While the identity in Eq. (12) is easily derived for Wilson-type fermions, here we sketch that it holds exactly for the domain-wall fermion valence propagator \(S_{f}=\tilde{D}_{f}^{-1}\) which (approximately) satisfies the Ginsparg-Wilson relation [11]. Recalling the definition of \(\tilde{D}_{f}\) in terms of the 5D Wilson matrix \(D_{5,f}\) (see Ref. [12] for unexplained notation)
\[\tilde{D}_{f}^{-1}=(\mathcal{P}^{-1}D_{5,f}^{-1}\,R_{5}\mathcal{P})_{11}, \tag{13}\]
where the matrix indices indicate the coordinate in the fifth dimension, the result is obtained immediately from
\[\tilde{D}_{\rm ud}^{-1}-\tilde{D}_{\rm s}^{-1}=(m_{\rm s}-m_{\rm ud})( \mathcal{P}D_{5,\rm ud}^{-1}R_{5}D_{5,s}^{-1}R_{5})_{11} \tag{14}\]
by noting that the following matrix projects on the physical boundary
\[(R_{5})_{ss^{\prime}}=(R_{5}\mathcal{P})_{s1}\,(\mathcal{P}^{-1})_{1s^{\prime}}. \tag{15}\]
The preceding identity is easily demonstrated using the explicit representations
\[R_{5}=\begin{pmatrix}&P_{+}\\ P_{-}&\end{pmatrix},\qquad\mathcal{P}^{-1}=\begin{pmatrix}P_{-}&&&P_{+}\\ P_{+}&\ddots&&\\ &\ddots&\ddots&\\ &&P_{+}&P_{-}\end{pmatrix}, \tag{16}\]
where \(P_{\pm}=1\pm\gamma_{5}\).
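For completeness, a one-line sketch of the Wilson-type case mentioned above, where the two Dirac operators differ only by the mass term, \(D_{\rm s}-D_{\rm ud}=(m_{\rm s}-m_{\rm ud})\mathbb{1}\):
\[S_{\rm ud}-S_{\rm s}=D_{\rm ud}^{-1}-D_{\rm s}^{-1}=D_{\rm ud}^{-1}\left(D_{\rm s}-D_{\rm ud}\right)D_{\rm s}^{-1}=(m_{\rm s}-m_{\rm ud})\,S_{\rm ud}S_{\rm s}.\]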
Using the identity for the difference, there are two independent estimators for the trace
\[\Theta_{\mu}(x) = \tfrac{1}{3}(m_{\rm s}-m_{\rm ud})\,\frac{1}{N_{\rm s}}\sum_{i=1} ^{N_{\rm s}}\eta_{i}^{\dagger}(x)\gamma_{\mu}\{S_{\rm ud}S_{\rm s}\eta_{i}\}(x), \tag{17}\] \[\mathcal{T}_{\mu}(x) = \tfrac{1}{3}(m_{\rm s}-m_{\rm ud})\,\frac{1}{N_{\rm s}}\sum_{i=1} ^{N_{\rm s}}\{\eta_{i}^{\dagger}S_{\rm s}\}(x)\gamma_{\mu}\{S_{\rm ud}\eta_{i} \}(x), \tag{18}\]
where the auxiliary quark fields \(\eta_{i}(x)\) have zero mean and finite variance. The properties of both estimators were investigated in detail in Ref. [10], where it was shown that the contribution to the variance from the auxiliary fields for the second split-even estimator was in the region of a factor O(100) smaller than the first standard estimator, which translates into the same factor reduction in the cost. The split-even estimator has since been used extensively for disconnected current correlators [13, 14, 15], while in the context of the twisted-mass Wilson formulation similar one-end trick estimators have often been employed for differences of twisted-mass propagators [16].
In this work we propose an estimator for the first diagram \(W_{1}\) using
\[\mathcal{W}_{1}\approx\Big{(}a^{4}\sum_{x}\mathcal{T}_{\mu}(x)\Big{)}\Big{(}a ^{4}\sum_{y}\mathcal{T}_{\nu}(y)G_{\mu\nu}(x-y)\Big{)} \tag{19}\]
where independent estimators are used for the two traces to avoid incurring a bias with a finite sample size. The convolution in the second parentheses can be efficiently computed using the Fast Fourier Transform (FFT). With a minor modification, an estimator using all possible unbiased combinations of samples can be written at the cost of performing O(\(N_{\rm s}\)) FFTs. The standard estimator is obtained by replacing both occurrences of \(\mathcal{T}_{\mu}\) with \(\Theta_{\mu}\) in Eq. (19).
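As an illustration of the FFT evaluation just described, the sketch below performs the periodic convolution with the photon propagator and the final volume sum for a single \((\mu,\nu)\) pair; all arrays are random placeholders standing in for the independently noise-estimated traces \(\mathcal{T}_{\mu}\), \(\mathcal{T}_{\nu}\) and the QED\({}_{L}\) photon propagator, and the overall sign, charge factors and powers of the lattice spacing are omitted.

```python
# Sketch of the FFT evaluation of the W1 estimator for one (mu, nu) pair.
# T_mu, T_nu stand for independently noise-estimated traces and G_munu for the
# position-space photon propagator; here they are random placeholders.
import numpy as np

L, T = 8, 16
rng = np.random.default_rng(1)
T_mu   = rng.standard_normal((L, L, L, T))
T_nu   = rng.standard_normal((L, L, L, T))
G_munu = rng.standard_normal((L, L, L, T))

# Periodic convolution  C(x) = sum_y T_nu(y) G_munu(x - y)  via the convolution theorem.
C = np.fft.ifftn(np.fft.fftn(T_nu) * np.fft.fftn(G_munu)).real

# Double volume sum of Eq. (5) with the two independent estimators inserted.
W1_munu = np.sum(T_mu * C)
print(W1_munu)
```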
We performed an analysis of the variance for the standard and split-even estimators for \(\mathcal{W}_{1}\) using the domain-wall ensemble generated by the RBC/UKQCD collaboration whose parameters are listed in Tab. 1. The photon propagator is computed in the QED\({}_{L}\) formulation [18] in the Feynman gauge. The results for the variances, which are dimensionless numbers, are shown in Fig. 2. In addition, we plot the variance for the contribution of a single flavour \(\mathcal{W}_{1}^{\rm u}\) using the standard estimators for the traces. We note that all the variances are dominated by the fluctuations of the auxiliary fields for small \(N_{\rm s}\), and in particular scale like \(1/N_{\rm s}^{2}\) in that region.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(L/a\) & \(T/a\) & \(m_{\pi}\) & \(m_{\pi}L\) & \(a\) & \(N_{\rm cfg}\) \\ \hline
24 & 64 & 340 MeV & 4.9 & 0.12 fm & 50 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The parameters of the C1 ensemble of \(N_{\rm f}=2+1\) Shamir domain-wall fermions used in the numerical experiments in this work, see Ref. [17] for details.
As expected, the standard estimator including the light-quark and strange-quark contributions (blue circles) is suppressed with respect to the contribution of a single flavour (red squares). Furthermore, the variance of the split-even estimator (green triangles) is reduced by a factor of \(10^{4}\) with respect to the standard one (blue circles). This reduction is commensurate with the reduction in the variance observed for the disconnected contribution to the current correlator [10], which suggests the same mechanisms are present here. For \(N_{\rm s}\sim 100\), the variance is independent of the number of auxiliary field samples which indicates that it is dominated by the fluctuations of the gauge field. In this case no further variance reduction is possible for a fixed number of gauge configurations. Finally we note that the convolution of the second parentheses of Eq. (19) can be simply inserted sequentially in any of the diagrams of type \(W_{3}\).
## 4 Quark-line connected subdiagram \(W_{2}\)
In contrast to the quark-line disconnected subdiagram, there is no cancellation in the variance in the connected subdiagram \(W_{2}\) between the light and strange-quark contributions. In this case, power counting suggests that the variance diverges with the lattice spacing like \(a^{-4}\) as \(a\to 0\) and is expected to be dominated by short-distance contributions. Translation averaging should therefore be very effective and one way to implement it is to use an all-to-all estimator [19] for the quark propagator
\[{\cal S}_{f}\left(x,x+r\right)=\frac{1}{N_{\rm s}}\sum_{i=1}^{N_{ \rm s}}\{S_{f}\eta_{i}\}(x)\eta_{i}^{\dagger}(x+r), \tag{20}\]
using independent fields for each propagator in the trace
\[\mathcal{H}_{2}^{\mu\nu}(r)=a^{4}\sum_{x}\sum_{f}Q_{f}^{2}\operatorname{tr}\{\gamma_{\mu}\mathcal{S}_{f}\left(x,x+r\right)\gamma_{\nu}\mathcal{S}_{f}\left(x+r,x\right)\}. \tag{21}\]
Figure 2: Left: Comparison of the variance versus the number of sources for the \(W_{1}\) quark-line disconnected diagram, using a single flavour (red squares), the standard estimator for \({\rm u,d,s}\) flavours (blue circles) and the split-even estimator (green triangles). The dashed line shows \(1/N_{\rm s}^{2}\) scaling. In this figure, the (local) currents are not renormalized and the charge factors are not included.
As written, the estimator is feasible to compute for a small number of separations \(r\) between the vertices and, although it introduces a (mild) signal-to-noise ratio problem at large \(r\), should be efficient at small \(|r|\leq R\) given the leading extra contribution vanishes like \(N_{\mathrm{s}}^{-2}\), c.f. Sec. 3.
For the remainder \(|r|>R\), we propose using \(N_{X}\) randomly selected point sources \(X_{n}\)[20]
\[\bar{H}_{2}^{\mu\nu}(r)=\frac{L^{3}T}{N_{X}}\sum_{n=1}^{N_{X}}H_{2}^{\mu\nu}(X _{n},X_{n}+r) \tag{22}\]
so that the total is split between short- and long-distance contributions
\[\mathcal{W}_{2}=a^{4}\sum_{|r|\leq R}\mathcal{H}_{2}(r)G_{\mu\nu}(r)+a^{4}\sum _{r>R}\bar{H}_{2}^{\mu\nu}(r)G_{\mu\nu}(r), \tag{23}\]
using the efficient stochastic estimator for the noisy short-distance contribution. Ref. [21] introduced an importance sampling based on current separations for higher-point correlation functions, whereas in this case we make the separation based on the expected contributions to the variance. This approach avoids completely factorizing the trace which would require either \(\mathrm{O}(V)\) contractions or \(\mathrm{O}(N_{\mathrm{s}}^{2})\) FFTs to include the photon line which we deemed unfeasible.
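To make the bookkeeping of Eq. (23) concrete, the sketch below splits the lattice separations into \(|r|\leq R\) (to be filled with the stochastic all-to-all estimate) and \(|r|>R\) (point sources); the estimates of \(H_{2}^{\mu\nu}(r)\) and the photon propagator are random placeholders and the normalization factors are omitted.

```python
# Sketch of the short-/long-distance split of Eq. (23) on a periodic L^3 x T lattice.
# H2_short, H2_long and G are placeholders for the two estimates of H_2(r) and
# for the photon propagator; only the bookkeeping of the split is illustrated.
import numpy as np

L, T, R = 8, 16, 4
rng = np.random.default_rng(2)
H2_short = rng.standard_normal((L, L, L, T))   # stochastic (all-to-all) estimate
H2_long  = rng.standard_normal((L, L, L, T))   # point-source estimate
G        = rng.standard_normal((L, L, L, T))   # photon propagator in position space

# Periodic distance |r| from the origin for every lattice separation r.
coords = np.meshgrid(*[np.minimum(np.arange(n), n - np.arange(n)) for n in (L, L, L, T)],
                     indexing="ij")
r_abs = np.sqrt(sum(c.astype(float) ** 2 for c in coords))

short_mask = r_abs <= R
W2 = (np.sum(H2_short[short_mask] * G[short_mask])
      + np.sum(H2_long[~short_mask] * G[~short_mask]))
print(W2)
```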
In Fig. 3 (left) we illustrate the variance of each of the terms in Eq. (23) for the sum over a fixed separation \(|r|\) between the currents, for the case \(N_{\mathrm{s}}=N_{X}=1\). As expected, the variance from the contribution around \(|r|\sim 0\) dominates both the stochastic (red squares) and point source estimator (blue circles), and we observe the mild signal-to-noise ratio problem in the stochastic estimator. The green triangle denotes the gauge variance for the case \(r=0\), which is approximately suppressed by \((L^{3}T)/a^{4}\) compared to \(N_{X}=1\), indicating that translation averaging is very effective for the short-distance contribution. In the right-hand panel, we see the variance of the short- and long-distance contributions with the choice \(R/a=4\) as a function of the number of inversions (where \(N_{X}=1\) corresponds to 12 inversions). The variance is dominated by the short-distance contribution (red squares) which however scales favourably like \(N_{\rm inv}^{-2}\), while the long-distance contribution (blue circles), which scales only like \(N_{\rm inv}^{-1}\), is much suppressed. Deviations from the former scaling indicate that the gauge variance may be reached with just \(N_{\rm inv}\sim 1000\), which, although larger than required for \(W_{1}\), is still achievable with modern computational resources, and is universal for all observables.
Figure 3: Left: the variance for the stochastic estimator (red squares) and point source estimator (blue circles) for the minimum number of inversions required, for the contribution with fixed separation between the currents \(|r|\). The green triangle indicates the gauge variance for the point \(r=0\). Right: the variance for the short-distance (red squares) and long-distance (blue circles) for the choice \(R/a=4\), versus the number of inversions. The green band indicates the gauge variance for the contribution from \(r=0\) only. The dashed lines indicate the expected leading \(N_{\mathrm{inv}}^{-2}\) and \(N_{\mathrm{inv}}^{-1}\) scaling for the short- and long-distance components.
## 5 Conclusions
In this work we have examined the Wick contractions which arise due to the charge of the sea quarks in the RM123 method. Such diagrams contribute, in principle, even to observables constructed from neutral fields and are therefore ubiquitous in the computation of iso-spin breaking corrections. We have proposed stochastic estimators for the quark lines in such diagrams which completely avoids the need to sample the Maxwell action stochastically, thus eliminating one additional source of variance. As for the case of disconnected contributions to current correlators, we have shown it is beneficial to consider certain flavour combinations which have greatly suppressed fluctuations. We have shown that the split-even estimators generalize also to domain-wall fermions and perform well compared with naive estimators. Thus the frequency-splitting strategy of Ref. [10] should generalize appropriately for this fermion formulation. In the second topology, however, there is no cancellation of the short-distance effects in the variance by considering multiple flavours. In this case, we propose decomposing the diagram into a short-distance part to be estimated stochastically and a long-distance part estimated using position-space sampling. The variance is reduced sufficiently so that the gauge variance can be reached with a reasonable computational cost. Given their short-distance nature, these estimators should also succeed with smaller quark masses, and furthermore as the diagrams are universal to all iso-spin breaking corrections we anticipate that these simple decompositions ought to be beneficial in large-scale simulations. In particular we are developing these methods for refinements of our computations of iso-spin breaking corrections within the RBC/UKQCD collaboration, for example to meson (leptonic) decay rates [22, 23].
AcknowledgmentsWe use the open-source and free software Grid as the data parallel C++ library for the lattice computations [24]. The authors warmly thank the members of the RBC/UKQCD collaboration for valuable discussions and the use of ensembles of gauge configurations. T.H., A.P. and V.G. are supported in part by UK STFC 1039 grant ST/P000630/1. A.P. and V.G. received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 757646 and A.P. additionally under grant agreement No 813942. This work used the DiRAC Extreme Scaling service at the University of Edinburgh, operated by the Edinburgh Parallel Computing Centre on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BEIS capital funding via STFC capital grant ST/R00238X/1 and STFC DiRAC Operations grant ST/R001006/1. DiRAC is part of the National e-Infrastructure. |
2310.04763 | Orbital diffusion, polarization and swapping in centrosymmetric metals | We propose a general theory of charge, spin, and orbital diffusion based on
Keldysh formalism. Our findings indicate that the diffusivity of orbital
angular momentum in metals is much lower than that of spin or charge due to the
strong orbital intermixing in crystals. Furthermore, our theory introduces the
concept of spin-orbit polarization by which a pure orbital (spin) current
induces a longitudinal spin (orbital) current, a process as efficient as spin
polarization in ferromagnets. Finally, we find that orbital currents undergo
momentum swapping, even in the absence of spin-orbit coupling. This theory
establishes several key parameters for orbital transport of direct importance
to experiments. | Xiaobai Ning, A. Pezo, Kyoung-Whan Kim, Weisheng Zhao, Kyung-Jin Lee, Aurelien Manchon | 2023-10-07T09:53:47Z | http://arxiv.org/abs/2310.04763v2 | # Orbital diffusion, polarization and swapping in centrosymmetric metals
###### Abstract
We propose a general theory of charge, spin and orbital diffusion based on Keldysh formalism. Our findings indicate that the diffusivity of orbital angular momentum in metals is much lower than that of spin or charge due to the strong orbital intermixing in crystals. Furthermore, our theory introduces the concept of "spin-orbit polarization" by which a pure orbital (spin) current induces a longitudinal spin (orbital) current, a process as efficient as spin polarization in ferromagnets. Finally, we find that orbital currents undergo momentum swapping, even in the absence of spin-orbit coupling. This theory establishes several key parameters for orbital transport of direct importance to experiments.
_Introduction -_ The interconversion between charge and spin currents[1] is one of the central mechanisms of spintronics and possibly its most instrumental. This mechanism is at the source of spin-orbit torque[2] and charge currents induced by spin pumping[3]. At the core of these phenomena lies the spin-orbit interaction that couples the spin to the orbital angular momentum in relatively high-Z materials (\(5d\) metals, topological materials etc.). In recent years, it has been proposed that the interconversion between charge and orbital currents, via orbital Hall[4; 5; 6] and orbital Rashba effects[7; 8; 9] for instance, might in fact be much more efficient than its spin counterpart because it arises from the orbital texture imposed by the crystal field rather than from spin-orbit coupling. Therefore, corresponding phenomena such as orbital torque[10; 11; 12; 13] and orbital magnetoresistance[14] have been proposed and experimentally reported. In these experiments, the scenario is based on a two-step process: orbital Hall or Rashba effect takes place in a light metal and the resulting orbital current is converted into a spin signal once in the adjacent ferromagnet. Consequently, it is expected that the supposedly large charge-to-orbital conversion taking place in the low-Z metal compensates for the relatively low spin-orbit coupling of the ferromagnet, which seems to be confirmed by the experiments[11; 12; 14; 15; 16; 17].
An important rationale behind the promotion of orbitronics is twofold. First, as mentioned above, since orbital transport is governed by the crystal field, orbital Hall and Rashba effects do not necessitate spin-orbit coupling and occur in relatively low-Z metals (e.g., \(3d\) metals). Second, orbital currents are expected to propagate over much longer distances than spin currents because they are immune to spin scattering. On the other hand, several questions remain open. To start with, as observed by Ref. [4], the atomic orbital moment is never a good quantum number and therefore it remains unclear how it diffuses from one metal to another. Phenomenological models of orbital diffusion have recently been proposed[18; 15] but lack quantitative predictability by overlooking microscopic details. In addition, several recent works have pointed out that the orbital moment arises not only from intra-atomic spherical harmonics (\(p\), \(d\)) but also possesses substantial inter-atomic contribution[19; 20; 21]. Understanding the way orbital currents and densities propagate in metals and accumulate at interfaces requires determining transport coefficients such as orbital conductivity or diffusivity, as well as the ability to interconvert spin currents into orbital currents via spin-orbit coupling. Indeed, when injecting an orbital density \(\mathbf{l}=\langle\mathbf{L}\rangle\) in a metal, it diffuses and produces an orbital current \(\mathcal{J}_{l}=-\mathcal{D}_{l}\partial_{\mathbf{r}}\mathbf{l}\), \(\mathcal{D}_{l}\) being the orbital diffusion coefficient (typically a tensor). In the presence of spin-orbit coupling \(\xi_{\rm so}\mathbf{\hat{\sigma}}\cdot\hat{\mathbf{L}}\), this orbital current can convert into a spin current \(\mathcal{J}_{s}\).
In this Letter, we derive a theory of spin and orbital diffusion in metals, and uncover several mechanisms governing orbital torque and magnetoresistance phenomena, illustrated in Fig. 1. First, we find that whereas charge and spin diffusion are of about the same order of magnitude, the orbital diffusion is much lower. This is due to the fact that the orbital moment is never a good quantum number in crystals (rotational invariance is broken). Second, we find that in the presence of spin-orbit coupling, an orbital current is systematically accompanied by a spin current that is collinear to it (and vice versa) [Fig. 1(a,b)]. This "spin-orbit polarization" can be sizable, comparable to spin polarization in \(3d\) ferromagnets. Finally, the third class of effects uncovered by our theory is the "angular momentum swapping", i.e., the interchange between the propagation direction and angular momentum direction upon scattering [Fig. 1(c,d)]. Whereas the spin swapping was predicted by Lifshits and Dyakonov [22] in the presence of spin-orbit coupling, orbital swapping arises naturally even without it. When turning on spin-orbit coupling, not only spin swapping emerges, but also spin-to-orbit and orbit-to-spin swapping.
Figure 1: (Color online) Spin and orbit interconversion mechanisms: (a) spin-to-orbit polarization and (b) orbit-to-spin polarization mediated by spin-orbit coupling. (c) Spin and (d) orbital swapping. The former requires spin-orbit coupling whereas the latter occurs even without it.
_Theory -_ The objective of the present theory is to determine the diffusive current induced by a gradient of particle density, \({\cal J}=-{\cal D}\partial_{\bf r}\rho\). In this expression, \({\cal J}\) can be the charge current \(J_{c}\), or the spin (orbital) current \({\cal J}_{s(\!I)}\), whereas \(\rho\) can be the charge density \(\rho_{c}\), or the spin (orbital) density \(\mathbf{s}\left(\mathbf{l}\right)\). In the language of nonequilibrium Green's function, the particle current density is obtained by computing the quantum statistical expectation value of the trace of the particle current operator \(\hat{j}\) taken over the lesser Green's function \(G^{<}\),
\[{\cal J}=\int\frac{d^{3}{\bf k}}{(2\pi)^{3}}\int\frac{d\varepsilon}{2i\pi}{ \rm Tr}\left[\hat{j}G^{<}\right]. \tag{1}\]
The philosophy of the present theory is to express the lesser Green's function to the first order in the density gradient \(\partial_{\bf r}\rho\). We start from Keldysh-Dyson equation [23]
\[G^{<}=G^{R}\otimes\Sigma^{<}\otimes G^{A}, \tag{2}\]
where \(G^{<}=G^{<}({\bf r},{\bf r}^{\prime};t,t^{\prime})\) (\(\Sigma^{<}\)) is the lesser Green's function (self-energy), \(G^{R(A)}=G^{R(A)}({\bf r},{\bf r}^{\prime};t,t^{\prime})\) is the retarded (advanced) Green's function and \(\otimes\) is the convolution product on both time and space. In Eq. (2), we omitted the explicit time and space dependence for simplicity. In the linear response regime, we first express \(G^{<}\) to the first order in spatial gradients using Wigner transform (see, e.g., Ref. [24]), i.e., we rewrite Eq. (2) in the frame of the center-of-mass, \(({\bf r},{\bf r}^{\prime};t,t^{\prime})\rightarrow({\bf r}-{\bf r}^{\prime},{ \bf r}_{c};t-t^{\prime},t_{c})\), with \(({\bf r}_{c},t_{c})=(({\bf r}+{\bf r}^{\prime})/2,(t+t^{\prime})/2)\), Fourier transform the small space and time coordinates \(({\bf r}-{\bf r}^{\prime},t-t^{\prime})\rightarrow({\bf k},\omega)\), and expand Keldysh-Dyson equation to the first order in space and time gradients \((\partial_{{\bf r}_{c}},\partial_{t_{c}})\). In the following, the subscript \({}_{c}\) is dropped for the sake of readability. Under Wigner transform, the convolution product becomes
\[A\otimes B=AB+\frac{i}{2}(\partial_{\bf r}A\partial_{\bf k}B-\partial_{\bf k }A\partial_{\bf r}B), \tag{3}\]
and finally, the part of the lesser Green's function that is linear in spatial gradient reads
\[\delta G^{<}=\frac{i}{2}\left(G_{0}^{R}\partial_{\bf r}\Sigma^{<}\partial_{ \bf k}G_{0}^{A}-\partial_{\bf k}G_{0}^{R}\partial_{\bf r}\Sigma^{<}G_{0}^{A} \right). \tag{4}\]
Here \(G_{0}^{R(A)}=(\hbar\omega-{\cal H}_{0}\pm i\Gamma)^{-1}\) is the unperturbed retarded (advanced) Green's function and \({\cal H}_{0}\) is the crystal Hamiltonian.
Since we are interested in the diffusion coefficients that connect angular momentum densities (odd under time-reversal \({\cal T}\)) with angular momentum currents (even under \({\cal T}\)), the diffusion coefficients in _nonmagnetic_ materials are themselves odd under \({\cal T}\). The same is true for the charge diffusivity that connects the charge density (even under \({\cal T}\)) with the charge current (odd under \({\cal T}\)). As a result, the charge, spin and orbital diffusion coefficients must be dissipative, proportional to the scattering time. In the language of quantum transport, these phenomena are driven by Fermi surface electrons akin to the charge conductivity. This is in stark contrast with the spin and orbital Hall diffusivities, which connect charge densities (even under \({\cal T}\)) with spin and orbital currents (even under \({\cal T}\)): they are even under \({\cal T}\), independent on scattering in the limit of weak disorder, and associated with the Berry curvature [25; 26]. Since we focus on angular momentum diffusion and spin-orbit interconversion, Eq. (4) is limited to transport at the Fermi level and disregards Fermi sea contributions. The present analysis applies to nonmagnetic materials and must be revised in the case of magnetic systems [27] as new terms are allowed.
Considering point-like impurities, \(H_{\rm imp}=\sum_{i}V_{0}\delta({\bf r}-{\bf R}_{i})\), the lesser self-energy reads
\[\Sigma^{<} = \frac{1}{V}\sum_{i,j}\int\frac{d^{3}{\bf k}}{(2\pi)^{3}}V_{0}G_{ \bf k}^{<}V_{0}e^{i{\bf k}\cdot({\bf R}_{i}-{\bf R}_{j})}=\frac{n_{i}V_{0}^{2 }}{V}\langle G_{\bf k}^{<}\rangle. \tag{5}\]
Here, \(n_{i}\) is the impurity concentration and \(\langle...\rangle=V\int\frac{d^{3}{\bf k}}{(2\pi)^{3}}\) stands for momentum integration over the Brillouin zone. Noting that \(\partial_{\bf k}G_{0}^{R}=\hbar G_{0}^{R}{\bf v}G_{0}^{R}\), we obtain
\[\delta G_{\bf k}^{<}=i\hbar\frac{n_{i}V_{0}^{2}}{V}{\rm Re}\left[G_{0}^{R} \partial_{\bf r}\langle G_{\bf k}^{<}\rangle G_{0}^{A}{\bf v}G_{0}^{A}\right]. \tag{6}\]
Inserting Eq. (6) into Eq. (1), we obtain the general expression of nonequilibrium properties induced by spatial gradients. Now, as argued above, diffusive effects are associated with Fermi surface electrons, which suggests \((1/V)\partial_{\bf r}\langle G_{\bf k}^{<}\rangle=2i\pi\partial_{\bf r}\hat{ \rho}\delta(\varepsilon-\varepsilon_{F})\), \(\hat{\rho}\) being the density matrix at Fermi level. As a result, the particle current density reads
\[{\cal J}=-\hbar n_{i}V_{0}^{2}{\rm Re}{\rm Tr}_{\bf k}\left[\hat{j}{\rm Im}[G_{0 }^{R}\partial_{\bf r}\hat{\rho}G_{0}^{A}{\bf v}G_{0}^{A}]\right]_{\varepsilon_ {F}}. \tag{7}\]
For the sake of compactness, we defined \({\rm Tr}_{\bf k}=\int\frac{d^{3}{\bf k}}{(2\pi)^{3}}{\rm Tr}\). Equation (7) is the central result of this work and can be used to compute the diffusive charge, spin and orbital currents induced by density gradients. For instance, substituting \(\hat{\rho}\) by the charge density \(\rho_{c}=-e{\rm Tr}[\hat{\rho}]\), and the charge current operator \(\hat{j}=-e\hat{\bf v}\), one obtains the charge diffusivity
\[{\cal D}_{ij}=\hbar n_{i}V_{0}^{2}{\rm Re}{\rm Tr}_{\bf k}\left[\hat{v}_{j}{\rm Im }[G_{0}^{R}G_{0}^{A}\hat{v}_{i}G_{0}^{A}]\right]. \tag{8}\]
The validity of Eq. (8) is readily assessed by comparing \({\cal D}_{ij}\) with the conductivity \(\sigma_{ij}\) obtained from Kubo's
formula [26]. In the relaxation time approximation, \(n_{i}V_{0}^{2}=\Gamma/(\pi\mathcal{N}_{F})\), where \(\mathcal{N}_{F}\) is the density of states at Fermi level and \(\Gamma\) is the disorder broadening. Using this relation, we confirm the Einstein relation \(\mathcal{D}_{ij}=\sigma_{ij}/(e^{2}\mathcal{N}_{F})\) (not shown). In the rest of this work, we express the spin and orbital diffusivity in the units of a conductivity \((e^{2}\Gamma/(\pi n_{i}V_{0}^{2}))\mathcal{D}_{ij}\), i.e., in \(\Omega^{-1}\cdot m^{-1}\) rather than in \(m^{2}\cdot s^{-1}\).
To obtain the spin and orbital diffusivities, the respective densities are defined \(\mathbf{s}=(\hbar/2)\mathrm{Tr}[\hat{\mathbf{\sigma}}\hat{\rho}]\) and \(\mathbf{l}=\hbar\mathrm{Tr}[\hat{\mathbf{L}}\hat{\rho}]\), \(\hat{\mathbf{\sigma}}\) and \(\hat{\mathbf{L}}\) being the dimensionless spin and orbital operators. Therefore, substituting the current operator \(\hat{j}\) by either the spin current operator \(\hat{j}_{s,j}^{\alpha}=(\hbar/4)\{\hat{v}_{j},\hat{\sigma}_{\beta}\}\) or the orbital current operator \(\hat{j}_{l,j}^{\beta}=(\hbar/2)\{\hat{v}_{j},\hat{L}_{\beta}\}\) in Eq. (7), and \(\partial_{i}\hat{\rho}\) by \(\hat{\sigma}_{\alpha}\partial_{i}s_{\alpha}\) or \(\hat{L}_{\alpha}\partial_{i}l_{\alpha}\), we obtain the general relation
\[\begin{pmatrix}\mathcal{J}_{s,j}^{\beta}\\ \mathcal{J}_{l,j}^{\beta}\end{pmatrix}=-\begin{pmatrix}\mathcal{D}_{s_{\alpha}i}^{s_{\beta}j}&\mathcal{D}_{l_{\alpha}i}^{s_{\beta}j}\\ \mathcal{D}_{s_{\alpha}i}^{l_{\beta}j}&\mathcal{D}_{l_{\alpha}i}^{l_{\beta}j}\end{pmatrix}\begin{pmatrix}\partial_{i}s_{\alpha}\\ \partial_{i}l_{\alpha}\end{pmatrix} \tag{9}\]
with the diffusion coefficients
\[\mathcal{D}_{s_{\alpha}i}^{s_{\beta}j}=2n_{i}V_{0}^{2}\mathrm{Re }\mathrm{Tr}_{\mathbf{k}}\left[\hat{j}_{s,j}^{\beta}\mathrm{Im}[G_{0}^{R}\hat{ \sigma}_{\alpha}G_{0}^{A}\hat{v}_{i}G_{0}^{A}]\right], \tag{10}\] \[\mathcal{D}_{l_{\alpha}i}^{l_{\beta}j}=n_{i}V_{0}^{2}\mathrm{Re }\mathrm{Tr}_{\mathbf{k}}\left[\hat{j}_{l,j}^{\beta}\mathrm{Im}[G_{0}^{R}\hat{L}_ {\alpha}G_{0}^{A}\hat{v}_{i}G_{0}^{A}]\right],\] (11) \[\mathcal{D}_{s_{\alpha}i}^{l_{\beta}j}=2n_{i}V_{0}^{2}\mathrm{Re }\mathrm{Tr}_{\mathbf{k}}\left[\hat{j}_{l,j}^{\beta}\mathrm{Im}[G_{0}^{R}\hat{ \sigma}_{\alpha}G_{0}^{A}\hat{v}_{i}G_{0}^{A}]\right],\] (12) \[\mathcal{D}_{l_{\alpha}i}^{s_{\beta}j}=n_{i}V_{0}^{2}\mathrm{Re }\mathrm{Tr}_{\mathbf{k}}\left[\hat{j}_{s,j}^{\beta}\mathrm{Im}[G_{0}^{R}\hat{L}_ {\alpha}G_{0}^{A}\hat{v}_{i}G_{0}^{A}]\right]. \tag{13}\]
\(\mathcal{D}_{s_{\alpha}i}^{s_{\beta}j}\) represents a spin current \(\mathcal{J}_{s,j}^{\beta}\) induced by the gradient of a spin density \(\partial_{i}s_{\alpha}\), whereas \(\mathcal{D}_{l_{\alpha}i}^{l_{\beta}j}\) represents an orbital current \(\mathcal{J}_{l,j}^{\beta}\) induced by the gradient of an orbital density \(\partial_{i}l_{\alpha}\). In addition, the diffusivities \(\mathcal{D}_{s_{\alpha}i}^{l_{\beta}j}\) and \(\mathcal{D}_{l_{\alpha}i}^{s_{\beta}j}\) represent the spin-to-orbital interconversion phenomena, i.e., spin-orbit polarization (\(\alpha=\beta\)) and spin-orbit swapping (\(\alpha\neq\beta\)).
_Spin and orbital diffusion -_ To quantitatively estimate the magnitude of these effects, we consider a bcc crystal with \((p_{x},p_{y},p_{z})\) orbitals. The tight-binding Hamiltonian is obtained using Slater-Koster parameterization, with \(V_{\sigma}=0.2\) eV and \(V_{\pi}=0.05\) eV. Since the structure has cubic symmetry, we assume that the (charge, spin or orbital) gradient is along \(x\).
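To make the on-site part of this model concrete, the sketch below constructs the dimensionless orbital angular momentum matrices in the \((p_{x},p_{y},p_{z})\) basis, \((\hat{L}_{a})_{bc}=-i\epsilon_{abc}\), and the atomic spin-orbit term \(\xi_{\rm so}\,\hat{\mathbf{\sigma}}\cdot\hat{\mathbf{L}}\) on the six-dimensional spin\(\otimes\)orbital space; the \(k\)-dependent Slater-Koster hopping block for the bcc lattice (with \(V_{\sigma}=0.2\) eV, \(V_{\pi}=0.05\) eV) is left as a placeholder.

```python
# On-site ingredients of the p-orbital model: L matrices in the (p_x, p_y, p_z)
# basis and the atomic spin-orbit coupling xi_so * sigma . L (6x6 in spin (x) orbital space).
import numpy as np

# (L_a)_{bc} = -i * epsilon_{abc} (dimensionless, i.e. in units of hbar).
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
L = [-1j * eps[a] for a in range(3)]          # L_x, L_y, L_z  (3x3 each)

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

xi_so = 0.05  # eV, as in the text
H_soc = xi_so * sum(np.kron(sigma[a], L[a]) for a in range(3))

# Sanity checks: [L_x, L_y] = i L_z and the spin-orbit term is Hermitian.
assert np.allclose(L[0] @ L[1] - L[1] @ L[0], 1j * L[2])
assert np.allclose(H_soc, H_soc.conj().T)

# H(k) = 1_spin (x) H_hop(k) + H_soc, with H_hop(k) the Slater-Koster bcc hopping
# matrix; a zero placeholder is used here instead of the actual k-dependent block.
H_hop = np.zeros((3, 3), dtype=complex)
H_k = np.kron(np.eye(2), H_hop) + H_soc
print(np.linalg.eigvalsh(H_k).round(3))
```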
We first compute the charge, spin and orbital diffusivities in Fig. 2. Since the current diffuses in the same direction as the density gradient, \(i=j\), its angular momentum is necessarily aligned with that of the density, \(\alpha=\beta\). In the absence of spin-orbit coupling, the charge diffusivity \(\mathcal{D}_{xx}\) and the spin diffusivities \(\mathcal{D}_{s_{x}x}^{s_{x}x}\), \(\mathcal{D}_{s_{y}x}^{s_{y}x}\) (\(=\mathcal{D}_{s_{z}x}^{s_{z}x}\)) are all equal [black line in Fig. 2(a)]. Turning on the spin-orbit coupling (\(\xi_{\rm so}=0.05\) eV) slightly reduces the charge diffusivity and breaks the symmetry between the spin diffusion coefficients, \(\mathcal{D}_{s_{x}x}^{s_{x}x}\neq\mathcal{D}_{s_{y}x}^{s_{y}x}\) (\(=\mathcal{D}_{s_{z}x}^{s_{z}x}\)). Interestingly, as reported in Fig. 2(b), the orbital diffusivities \(\mathcal{D}_{l_{x}x}^{l_{x}x}\) and \(\mathcal{D}_{l_{y}x}^{l_{y}x}\) (\(=\mathcal{D}_{l_{z}x}^{l_{z}x}\)) are in fact very small (dashed blue lines), which we attribute to the strong orbital mixing that naturally governs the band structure of our bcc crystal. Turning on the spin-orbit coupling (solid lines) again breaks the symmetry between the diffusion coefficients, \(\mathcal{D}_{l_{x}x}^{l_{x}x}\neq\mathcal{D}_{l_{y}x}^{l_{y}x}\) (\(=\mathcal{D}_{l_{z}x}^{l_{z}x}\)) and, remarkably, enhances the overall orbital diffusivity. This can be understood qualitatively by the fact that spin-orbit interaction couples the highly conductive spin channel with the weakly conductive orbital channel, thereby reducing the spin diffusivity while enhancing the orbital one, as shown in Fig. 2(c).
The low orbital diffusivities reported here do not necessarily contradict the idea that orbital momentum could be transported over much longer distance than spin momentum [12; 16; 17]. Indeed, a comprehensive theory of orbital transport requires a microscopic modeling of orbital relaxation mechanisms, which remains out of the scope of the present work. Notice that in ferromagnets, spin dephasing severely limits the spin propagation, such that orbital diffusion naturally dominates [12; 13]. This effect is absent in nonmagnetic metals.
_Spin-orbit polarization -_ The next question we wish to address is how much orbital current can one obtain upon injecting a spin current in a heavy metal. This mechanism underlies the phenomena of orbital torque and orbital magnetoresistance [10; 11; 12; 14; 15; 16; 17] where a primary orbital current generated in a light metal is injected in a spin-orbit coupled material and converted into a spin current. To answer this question, we compute the so-called "spin-orbit polarization". Let us assume that a gradient of, say, spin density \(\partial_{i}s_{\alpha}\) diffuses in the system. This gradient induces _both_ spin and orbital currents, \(\mathcal{J}^{\alpha}_{s,i}\) and \(\mathcal{J}^{\alpha}_{l,i}\), producing a current of total angular momentum \(\mathcal{J}^{\alpha}_{t,i}=\mathcal{J}^{\alpha}_{l,i}+\mathcal{J}^{\alpha}_{s,i}\). To quantify the relative proportion of spin and orbital currents, we define the spin-to-orbit polarization \(\mathcal{P}^{\alpha}_{l,i}=\mathcal{D}^{l_{\alpha}i}_{s_{\alpha}i}/(\mathcal{D}^{s_{\alpha}i}_{s_{\alpha}i}+\mathcal{D}^{l_{\alpha}i}_{s_{\alpha}i})\), and similarly, the orbit-to-spin polarization, \(\mathcal{P}^{\alpha}_{s,i}=\mathcal{D}^{s_{\alpha}i}_{l_{\alpha}i}/(\mathcal{D}^{l_{\alpha}i}_{l_{\alpha}i}+\mathcal{D}^{s_{\alpha}i}_{l_{\alpha}i})\).
Figure 2: (Color online) (a) Charge (black and gray) and spin (red) diffusivities as a function of the energy. For \(\xi_{\rm so}=0\), the spin and charge diffusivities fall into one single curve (black), whereas for \(\xi_{\rm so}=0.05\), the charge diffusivity is reduced (gray) and the spin diffusivity becomes anisotropic (light and dark red lines). (b) Orbital diffusivities as a function of the energy for \(\xi_{\rm so}=0\) (dashed) and \(\xi_{\rm so}=0.05\) eV (solid). The orbital diffusivities are intrinsically anisotropic. (c) Dependence of the spin (red) and orbital (blue) diffusivities as a function of the spin-orbit coupling \(\xi_{\rm so}\) at transport energy \(\varepsilon=0.5\) eV.
The spin-to-orbit (\(s_{\alpha}\to l_{\alpha}\)) and orbit-to-spin (\(l_{\alpha}\to s_{\alpha}\)) longitudinal diffusivities as well as the corresponding polarization are given in Fig. 3(a,b). Again, the diffusivities are anisotropic due to the presence of spin-orbit coupling. The orbital diffusivity being much smaller than the spin diffusivity, the orbit-to-spin polarization is generally smaller than the spin-to-orbit polarization. The polarization increases steadily with spin-orbit coupling, as expected, and saturates at large spin-orbit coupling strength. It is worth noting that the spin-orbit polarization is comparable to the spin polarization found in conventional \(3d\) metal compounds (typically 50-70%). This observation is consistent with Ref. [5] that suggests an orbit-to-spin polarization of about 50% in Pt and Pd for Hall currents. The sizable spin-orbit polarization given in Fig. 3(b) is a crucial ingredient for the orbital torque and magnetoresistance.
_Spin, orbital and spin-orbit swapping_ - We finally consider the last class of effects, the spin and orbital swapping. For these effects, the directions of injection and collection are perpendicular to each other, as well as the direction of the incoming and outgoing (spin/orbit) polarization [see Fig. 1(c,d)]. The orbital diffusivity tensor has the following form
\[\begin{pmatrix}\mathcal{J}^{x}_{l,x}\\ \mathcal{J}^{y}_{l,x}\\ \mathcal{J}^{x}_{l,y}\\ \mathcal{J}^{y}_{l,y}\end{pmatrix}=-\begin{pmatrix}\mathcal{D}^{l_{x}x}_{l_{x}x}&0&0&\mathcal{D}^{l_{x}x}_{l_{y}y}\\ 0&\mathcal{D}^{l_{y}x}_{l_{y}x}&\mathcal{D}^{l_{y}x}_{l_{x}y}&0\\ 0&\mathcal{D}^{l_{x}y}_{l_{y}x}&\mathcal{D}^{l_{x}y}_{l_{x}y}&0\\ \mathcal{D}^{l_{y}y}_{l_{x}x}&0&0&\mathcal{D}^{l_{y}y}_{l_{y}y}\end{pmatrix}\begin{pmatrix}\partial_{x}l_{x}\\ \partial_{x}l_{y}\\ \partial_{y}l_{x}\\ \partial_{y}l_{y}\end{pmatrix}, \tag{14}\]
and Onsager reciprocity imposes that \(\mathcal{D}^{l_{y}x}_{l_{x}y}=\mathcal{D}^{l_{x}y}_{l_{y}x}\) and \(\mathcal{D}^{l_{x}x}_{l_{y}y}=\mathcal{D}^{l_{y}y}_{l_{x}x}\). Importantly, the orbital swapping does not necessitate spin-orbit coupling as it is solely governed by the orbital overlap (and hence the crystal field symmetry) of the crystal. These coefficients are reported in Fig. 4(a) in the absence of spin-orbit coupling. Turning on the spin-orbit coupling triggers spin swapping [22], whose diffusivity tensor has the same form as in Eq. (14). Figure 4(b) displays the spin (red) and orbital (blue) swapping efficiencies defined as \(\mathcal{D}^{s_{y}y}_{s_{x}x}/\mathcal{D}^{s_{x}x}_{s_{x}x}\) and \(\mathcal{D}^{l_{y}y}_{l_{x}x}/\mathcal{D}^{l_{x}x}_{l_{x}x}\), as a function of spin-orbit coupling, showing that orbital swapping is generally larger than spin swapping, which seems reasonable given the minor role of spin-orbit coupling in the former.
In addition, spin-orbit coupling also enables the transfer between spin and orbital angular momenta that results in spin-to-orbit (red) and orbit-to-spin (blue) swapping, displayed in Fig. 4(c). The diffusivity tensor has the form
\[\begin{pmatrix}\mathcal{J}^{x}_{s,x}\\ \mathcal{J}^{y}_{s,x}\\ \mathcal{J}^{x}_{s,y}\\ \mathcal{J}^{y}_{s,y}\end{pmatrix}=-\begin{pmatrix}\mathcal{D}^{s_{x}x}_{l_{x}x}&0&0&\mathcal{D}^{s_{x}x}_{l_{y}y}\\ 0&\mathcal{D}^{s_{y}x}_{l_{y}x}&\mathcal{D}^{s_{y}x}_{l_{x}y}&0\\ 0&\mathcal{D}^{s_{x}y}_{l_{y}x}&\mathcal{D}^{s_{x}y}_{l_{x}y}&0\\ \mathcal{D}^{s_{y}y}_{l_{x}x}&0&0&\mathcal{D}^{s_{y}y}_{l_{y}y}\end{pmatrix}\begin{pmatrix}\partial_{x}l_{x}\\ \partial_{x}l_{y}\\ \partial_{y}l_{x}\\ \partial_{y}l_{y}\end{pmatrix}, \tag{15}\]
and Onsager reciprocity imposes \(\mathcal{D}^{s_{y}x}_{l_{x}y}=\mathcal{D}^{s_{x}y}_{l_{y}x}\) and \(\mathcal{D}^{s_{x}x}_{l_{y}y}=\mathcal{D}^{s_{y}y}_{l_{x}x}\). From Fig. 4(c), it appears that spin-to-orbit swapping is larger than orbit-to-spin swapping, a feature already observed in Fig. 3 for the spin-orbit polarization. In the context of spin-orbit torque [2], spin swapping, whether of bulk [22; 28] or interfacial origin [29], is responsible for additional torque components in magnetic multilayers. The large orbital swapping efficiencies reported here suggest that in systems displaying orbital torque, large deviations from the conventional field-like and damping-like torques can be expected [28].
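To make the tensor structure of Eqs. (14)-(15) concrete, the short numerical sketch below (not part of the original analysis) assembles the orbital swapping tensor from placeholder diffusivity values and checks the Onsager constraints stated above; all numerical values and names are illustrative only.

```python
import numpy as np

# Placeholder diffusivity components (arbitrary illustrative values).
D_xx = 1.0    # D^{l_x x}_{l_x x} and D^{l_y y}_{l_y y} (taken equal here for simplicity)
D_xy = 0.8    # D^{l_y x}_{l_y x} and D^{l_x y}_{l_x y} (taken equal here for simplicity)
D_sw1 = 0.3   # D^{l_x x}_{l_y y} = D^{l_y y}_{l_x x}  (Onsager)
D_sw2 = 0.2   # D^{l_y x}_{l_x y} = D^{l_x y}_{l_y x}  (Onsager)

# Rows: (J^x_{l,x}, J^y_{l,x}, J^x_{l,y}, J^y_{l,y}); columns: (d_x l_x, d_x l_y, d_y l_x, d_y l_y).
D = np.array([[D_xx,  0.0,   0.0,   D_sw1],
              [0.0,   D_xy,  D_sw2, 0.0  ],
              [0.0,   D_sw2, D_xy,  0.0  ],
              [D_sw1, 0.0,   0.0,   D_xx ]])

assert np.allclose(D, D.T)                 # Onsager reciprocity in this basis
J = -D @ np.array([0.0, 0.0, 0.0, 1.0])    # response to a pure d_y l_y gradient
print("swapped longitudinal current J^x_{l,x}:", J[0])
print("orbital swapping efficiency:", D_sw1 / D_xx)
```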
_Conclusion_ - Advancing research in orbitronics re
Figure 3: (Color online) (a) Spin-to-orbit (red) and orbit-to-spin (blue) diffusivities as a function of the energy for \(\xi_{\rm so}=0.05\) eV. (b) Corresponding spin-orbit polarizations as a function of the spin-orbit coupling for \(\varepsilon=0.5\) eV.
Figure 4: (Color online) (a) Orbital swapping as a function of energy for \(\xi_{\rm so}=0\). (b) Spin (red) and orbital (blue) swapping as a function of the spin-orbit coupling strength. (c) Spin-to-orbit (red) and orbit-to-spin (blue) swapping as a function of the spin-orbit coupling. In (b) and (c), we set \(\varepsilon=0.5\) eV.
quires a proper description of spin and orbital diffusion in metals. As stated previously, whereas the vast majority of theoretical studies to date focus on orbital and spin currents generated by electric currents, our theory allows us to compute the orbital and spin currents induced by diffusive gradients of angular momenta. It reveals that although orbital currents do not experience "orbital-flip" _per se_, their diffusivity in metals is much weaker than that of spin currents. This result seems at odds with recent experiments suggesting a long orbital diffusion length in transition metals [13; 15; 16; 30]. In diffusive transport though, the (spin or orbital) diffusion length is related to the product of the (spin or orbital) diffusivity \(\mathcal{D}\) and the (spin or orbital) relaxation time \(\tau_{r}\), \(\lambda\propto\sqrt{\mathcal{D}\tau_{r}}\). Therefore, to model the orbital diffusion length in transition metals, the present theory must be completed by a theory of orbital relaxation, which remains an open question. Our theory also quantifies the spin-to-orbit and orbit-to-spin polarization, i.e., the ability of a spin (orbital) current to generate a longitudinal orbital (spin) current, and finds that this effect is very efficient, potentially as efficient as conventional spin polarization in \(3d\) magnets. Finally, we show that orbital currents are subject to angular momentum swapping even in the absence of spin-orbit coupling, and that this swapping can be as large as spin swapping.
We point out that the Green function theory proposed in this Letter is well adapted to multiband systems and in particular to realistic heterostructures computed from first principles. Indeed, systematic investigation of orbital Hall conductivity and orbital Edelstein effects in transition metals have been recently performed [31; 32; 33] and extending the present work to realistic materials of interest to experiments could open appealing perspectives for the design of orbital devices.
###### Acknowledgements.
A.P. was supported by the ANR ORION project, grant ANR-20-CE30-0022-01 of the French Agence Nationale de la Recherche; A. M. was supported by the Excellence Initiative of Aix-Marseille Universite - A*Midex, a French "Investissements d'Avenir" program; K.-W. K. was supported by the KIST Institutional Program; and K.-J. L. was supported by the NRF (NRF-2022M3I7A2079267).
|
2301.12640 | Reweighted Interacting Langevin Diffusions: an Accelerated Sampling
Method for Optimization | We proposed a new technique to accelerate sampling methods for solving
difficult optimization problems. Our method investigates the intrinsic
connection between posterior distribution sampling and optimization with
Langevin dynamics, and then we propose an interacting particle scheme that
approximates a Reweighted Interacting Langevin Diffusion system (RILD). The
underlying system is designed by adding a multiplicative source term into the
classical Langevin operator, leading to a higher convergence rate and a more
concentrated invariant measure. We analyze the convergence rate of our
algorithm and the improvement compared to existing results in the asymptotic
situation. We also design various tests to verify our theoretical results,
showing the advantages of accelerating convergence and breaking through
barriers of suspicious local minimums, especially in high-dimensional
non-convex settings. Our algorithms and analysis shed some light on combining
gradient and genetic algorithms using Partial Differential Equations (PDEs)
with provable guarantees. | Junlong Lyu, Zhitang Chen, Wenlong Lyu, Jianye Hao | 2023-01-30T03:48:20Z | http://arxiv.org/abs/2301.12640v1 | # Reweighted Interacting Langevin Diffusions: an Accelerated Sampling Method for Optimization
###### Abstract
We proposed a new technique to accelerate sampling methods for solving difficult optimization problems. Our method investigates the intrinsic connection between posterior distribution sampling and optimization with Langevin dynamics, and then we propose an interacting particle scheme that approximates a Reweighted Interacting Langevin Diffusion system (RILD). The underlying system is designed by adding a multiplicative source term into the classical Langevin operator, leading to a higher convergence rate and a more concentrated invariant measure. We analyze the convergence rate of our algorithm and the improvement compared to existing results in the asymptotic situation. We also design various tests to verify our theoretical results, showing the advantages of accelerating convergence and breaking through barriers of suspicious local minimums, especially in high-dimensional non-convex settings. Our algorithms and analysis shed some light on combining gradient and genetic algorithms using Partial Differential Equations (PDEs) with provable guarantees.
Machine Learning, Langevin Dynamics
information in current steps, see Eq. (16).
However, the EKS method brings little improvement when solving highly non-convex optimization problems. Besides, as our purpose is to find the global minimum of \(V\), restricting the stationary distribution of the underlying dynamic to be precisely \(\nu_{\sigma}(\mathbf{x})\) is not necessary. Suppose we have another \(\sigma\)-dependent dynamic whose limit distribution approximates \(\delta_{\mathbf{x}^{*}}(\mathbf{x})\), the Dirac delta distribution at the minimum point \(\mathbf{x}^{*}\); then this dynamic can also be used to approximate the global minimum. This inspires us to further modify the Langevin dynamics for faster convergence. Specifically, we modify the Fokker-Planck equation related to the Langevin dynamics by adding a linear source term, which can be proven, by the spectral approach (Pankov, 2001), to improve the convergence rate and the concentration of the invariant measure near the global minimum. The new process belongs to a class of nonlinear operators called Feynman-Kac semigroups, which were developed in Large Deviation Theory for calculating generating functions (Varadhan, 2010) and are also used in important practical applications such as the Diffusion Monte Carlo (DMC) method (Foulkes et al., 2001). To design a practical algorithm, we use interacting particle methods (Moral & Miclo, 2000; Moral, 2013), introducing a reweighting and resampling technique to simulate the effect of this source term, and obtain the so-called Reweighted Interacting Langevin Diffusion (RILD) algorithm. This is, as far as we know, the first time interacting particle methods have been used to solve optimization tasks. Various numerical experiments are conducted to show the effect of accelerating convergence and flattening potential barriers.
The main contributions are summarized as follows:
* We provide a simple but effective way to modify the Langevin Dynamics in Section 3.1 for faster convergence and flatter potential barriers.
* We provide a feasible discretization scheme for designing algorithms in Section 3.2, and compare the new algorithm, Alg. 1, with several existing methods, showing its advantages.
* We provide theoretical results in Section 4 based on spectral analysis to guarantee the advantages.
## 2 Preliminary
### Overdamped Langevin Dynamics
In this section, we introduce the classical Langevin dynamics, which underlies the GLD and SGLD algorithms.
Suppose \(V:\mathbb{R}^{d}\to\mathbb{R}\) is twice differentiable and, for simplicity, has a simple global minimum \(\mathbf{x}^{*}=\arg\min_{\mathbf{x}\in\mathbb{R}^{d}}V(\mathbf{x})\).
The overdamped Langevin dynamics is defined as in Eq. (1). Such a dynamic is closely connected to the Fokker-Planck equation. Denote by
\[\mathcal{L}\cdot:=-\langle\nabla V,\nabla\cdot\rangle+\frac{\sigma^{2}}{2}\Delta. \tag{2}\]
the infinitesimal generator (Oksendal, 1987) of the Markov process \((\mathbf{x}_{t})_{t\geq 0}\), and its \(L^{2}\) adjoint operator:
\[\mathcal{L}^{\dagger}\cdot:=\text{div}(\nabla V\cdot+\frac{\sigma^{2}}{2} \nabla\cdot) \tag{3}\]
Let us assume that the law of \(\mathbf{x}_{t}\) at time \(t\) has a density \(p_{t}(x)\) with respect to the Lebesgue measure. Then \(p_{t}(\mathbf{x})\) satisfies the Fokker-Planck equation:
\[\frac{\partial}{\partial t}p_{t}(\mathbf{x})=\mathcal{L}^{\dagger}p_{t}(\mathbf{x})= \text{div}\big{(}p_{t}(\mathbf{x})\nabla V(\mathbf{x})+\frac{\sigma^{2}}{2}\nabla p_{ t}(\mathbf{x})\big{)} \tag{4}\]
We may write \(p_{t}(\mathbf{x})=e^{\mathcal{L}^{\dagger}t}p_{0}(\mathbf{x})\), where \(p_{0}(\mathbf{x})\) denotes the initial distribution of the overdamped Langevin dynamics, and it is well known that the Markov operator \(e^{\mathcal{L}^{\dagger}t}\) admits a unique invariant probability measure \(\nu(\mathbf{x})=Z_{\nu}^{-1}e^{-2\sigma^{-2}V(\mathbf{x})},Z_{\nu}=\int_{\mathbb{R}^{d}}e^{-2\sigma^{-2}V(\mathbf{x})}dx\). The rate of convergence to \(\nu(\mathbf{x})\) has been widely studied:
**Proposition 2.1** (Proposition 2.3 in Lelievre & Stoltz (2016)).: _Under a specific condition (Assumption 4.2),\({}^{1}\) for all \(p_{0}\) such that \(p_{0}/\nu\in L^{2}(\nu)\), and for all \(t\geq 0\),_
Footnote 1: We define \(L^{2}(\mu):=\{f:\|f\|_{L^{2}(\mu)}<\infty\}\), where \(\langle f,g\rangle_{L^{2}(\mu)}:=\int_{\mathbb{R}^{d}}f(x)g(x)\mu(dx)\), \(\|f\|_{L^{2}(\mu)}=\langle f,f\rangle^{\frac{1}{2}}_{L^{2}(\mu)}\)
\[\|e^{\mathcal{L}^{\dagger}t}p_{0}/\nu-1\|_{L^{2}(\nu)}\leq\|p_{0}/\nu-1\|_{L^{ 2}(\nu)}e^{-\delta t},\]
_where \(\delta\) is the first non-zero eigenvalue \(-\lambda_{1}\) of the operator \(-\mathcal{L}\) on \(L^{2}(\nu)\)._
Note that \(\lambda_{1}\) is real because \(\mathcal{L}\) is self-adjoint\({}^{2}\) on \(L^{2}(\nu)\).
Footnote 2: We say a linear operator \(A\) is self-adjoint on a Hilbert space \(\{\mathcal{H},\langle\cdot,\cdot\rangle_{\mathcal{H}}\}\) if for any \(f,g\in\mathcal{H}\), \(\langle f,Ag\rangle_{\mathcal{H}}=\langle Af,g\rangle_{\mathcal{H}}\)
For sampling purposes, individual samples are drawn from independent paths by a Markov chain Monte Carlo method (MCMC, (Berg, 2004)) corresponding to the discretized Langevin dynamics. This approach is rather slow for high-dimensional, non-convex problems, and such a weakness makes it less attractive for optimization purposes. Fortunately, we can exploit the connections between individual samples, as done in Garbuno-Inigo et al. (2019). We will introduce their work in the next subsection.
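As a point of reference for the discrete algorithms discussed later, a minimal Euler-Maruyama simulation of the overdamped Langevin dynamics (1) might look as follows; the quadratic potential is only a placeholder assumption used to check the invariant measure \(\nu\propto e^{-2\sigma^{-2}V}\).

```python
import numpy as np

def langevin_sample(grad_V, x0, sigma=1.0, tau=1e-2, n_steps=10_000, seed=0):
    """Unadjusted Langevin algorithm: x_{n+1} = x_n - grad_V(x_n) * tau + sigma * sqrt(tau) * xi."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        x = x - grad_V(x) * tau + sigma * np.sqrt(tau) * rng.standard_normal(x.shape)
        path.append(x.copy())
    return np.array(path)

# Placeholder example: V(x) = |x|^2 / 2, whose invariant measure exp(-2V/sigma^2) is Gaussian.
samples = langevin_sample(grad_V=lambda x: x, x0=np.zeros(2), sigma=np.sqrt(2.0))
print(samples[-1000:].std(axis=0))  # should be close to sigma / sqrt(2) = 1
```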
### Interacting Langevin Diffusion
In Garbuno-Inigo et al. (2019), the authors introduced a modified Langevin dynamics and analyzed its property, proving its superiority in designing sampling algorithms. We now introduce the main result of their work.
The convergence rate of classical Langevin-dynamics-based algorithms can be very slow when \(V\) is highly anisotropic. A common approach for canceling out this effect is to introduce a \(d\times d\) preconditioning positive semi-definite matrix \(C\) in the corresponding gradient scheme,
\[d\mathbf{x}_{t}=-C\nabla V(\mathbf{x}_{t})dt+\sigma\sqrt{C}d\mathcal{B}_{t}. \tag{5}\]
Here \(\sqrt{C}\) can be any \(d\times r\) real matrix \(U\) such that \(UU^{T}=C\) (note that in this case, \(\mathcal{B}_{t}\) can be reduced to an \(r\)-dimensional standard Brownian motion, as the essential rank of \(C\) is no larger than \(r\)).
The corresponding Fokker-Planck equation now becomes
\[\frac{\partial}{\partial t}p_{t}(\mathbf{x})=\text{div}\big{(}p_{t}(\mathbf{x})C\nabla V (\mathbf{x})+\frac{\sigma^{2}}{2}C\nabla p_{t}(\mathbf{x})\big{)} \tag{6}\]
and the infinitesimal generator
\[\mathcal{L}_{C}\cdot :=-\langle C\nabla V,\nabla\cdot\rangle+\frac{\sigma^{2}}{2}\text{div}(C\nabla\cdot), \tag{7}\] \[\mathcal{L}_{C}^{\dagger}\cdot :=\text{div}\big{(}C\nabla V\cdot+\frac{\sigma^{2}}{2}C\nabla\cdot\big{)}. \tag{8}\]
One can easily verify that \(e^{-2\sigma^{-2}V(\mathbf{x})}\) is an invariant measure of the above system for all semi-definite \(C\), and the unique positive invariant measure if \(C\) is strictly positive definite.
Finding a suitable \(C\) is of general interest. One of the best choices is to let \(C=\text{Hess}V\), as a counterpart of Newton's scheme in optimization, which is, however, computationally unfriendly. One intrinsic benefit of choosing \(\text{Hess}V\) is its affine-invariance property when designing numerical schemes.
Such a property can also be preserved if we take \(C=\mathcal{C}(p)\), the covariance matrix under the probability measure \(p(\mathbf{x})\),
\[\mathcal{C}(p):=\int_{\mathbb{R}^{d}}\big{(}\mathbf{x}-m(p)\big{)} \otimes\big{(}\mathbf{x}-m(p)\big{)}p(\mathbf{x})d\mathbf{x}, \tag{9}\] \[m(p):=\int_{\mathbb{R}^{d}}\mathbf{x}p(\mathbf{x})dx. \tag{10}\]
The dynamic now becomes a nonlinear flow
\[d\mathbf{x}_{t} =-\mathcal{C}(p_{t})\nabla V(\mathbf{x}_{t})dt+\sigma\sqrt{\mathcal{C }(p_{t})}d\mathcal{B}_{t}, \tag{11}\] \[\frac{\partial}{\partial t}p_{t} =\text{div}\big{(}p_{t}\mathcal{C}(p_{t})\nabla V+\frac{\sigma^{ 2}}{2}\mathcal{C}(p_{t})\nabla p_{t}\big{)} \tag{12}\]
To simulate from such a mean-field model, the finite interacting particle system \(\mathcal{X}_{t}=\{\mathbf{x}_{t}^{i}\}_{i=1}^{N}\) is introduced:
\[d\mathbf{x}_{t}^{i}=-\mathcal{C}(\mathcal{X}_{t})\nabla V(\mathbf{x}_{t}^{i})dt+ \sigma\sqrt{\mathcal{C}(\mathcal{X}_{t})}d\mathcal{B}_{t}^{i}, \tag{13}\]
where \(\mathcal{B}_{t}^{i}\) are i.i.d. standard Brownian motions, and
\[\mathcal{C}(\mathcal{X}_{t}):=\mathbf{X}_{t}\mathbf{X}_{t}^{T},\sqrt{ \mathcal{C}(\mathcal{X}_{t})}:=\mathbf{X}_{t},\] \[\mathbf{X}_{t}:=\frac{1}{\sqrt{N}}(\mathbf{x}_{t}^{1}-\bar{\mathbf{x}}_{t}, \cdots,\mathbf{x}_{t}^{N}-\bar{\mathbf{x}}_{t}),\bar{\mathbf{x}}_{t}=\frac{1}{N}\sum_{i=1}^ {N}\mathbf{x}_{t}^{i}\]
Particles in this system are then no longer independent of each other, in contrast to the independent simulations in the SGLD algorithm.
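A minimal forward-Euler sketch of one step of the interacting system (13), using the empirical covariance and its square root defined above, could read as follows (assuming access to \(\nabla V\)); the function name is ours.

```python
import numpy as np

def interacting_langevin_step(X, grad_V, sigma, tau, rng):
    """One Euler-Maruyama step of Eq. (13); X has shape (N, d), one particle per row."""
    N, d = X.shape
    Xc = (X - X.mean(axis=0)) / np.sqrt(N)   # centered particles / sqrt(N); Xc.T is the matrix X_t
    C = Xc.T @ Xc                            # ensemble covariance C(X_t), shape (d, d)
    sqrtC = Xc.T                             # a square root of C, since C = sqrtC @ sqrtC.T
    drift = np.array([C @ grad_V(x) for x in X])       # C(X_t) grad V(x^i), shape (N, d)
    noise = rng.standard_normal((N, N)) @ sqrtC.T      # sqrt(C(X_t)) applied to i.i.d. increments
    return X - tau * drift + sigma * np.sqrt(tau) * noise
```

Iterating this step, with the drift replaced by the derivative-free expression of Eq. (16), recovers an EKS-type sampler.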
Now suppose \(V\) is a least-squares functional\({}^{3}\) with Tikhonov-Phillips regularization (Engl et al., 1996):
Footnote 3: For any positive-definite matrix \(A\), we define \(\langle a,a^{\prime}\rangle_{A}=\langle a,A^{-1}a^{\prime}\rangle=a^{T}A^{-1}a^{ \prime}\), and \(\|a\|_{A}=\|A^{-\frac{1}{2}}a\|\).
\[V(x)=\frac{1}{2}\|y-\mathcal{G}(x)\|_{\Gamma}^{2}+\frac{1}{2}\|x\|_{\Gamma_{0} }^{2}, \tag{14}\]
where \(y\in\mathbb{R}^{k},\mathcal{G}:\mathbb{R}^{d}\to\mathbb{R}^{k}\) is the forward map, and \(\Gamma\) and \(\Gamma_{0}\) are two positive definite matrices. In this situation, the system (13) can be re-written as
\[d\mathbf{x}_{t}^{i}= -\frac{1}{N}\sum_{j=1}^{N}\Big{\langle}D\mathcal{G}(\mathbf{x}_{t}^{ i})(\mathbf{x}_{t}^{j}-\bar{\mathbf{x}}_{t}),\mathcal{G}(\mathbf{x}_{t}^{i})-y\Big{\rangle}_{ \Gamma}\,\mathbf{x}_{t}^{j}dt\] \[-\mathcal{C}(\mathcal{X}_{t})\Gamma_{0}^{-1}\mathbf{x}_{t}^{i}dt+ \sigma\sqrt{\mathcal{C}(\mathcal{X}_{t})}dW_{t}^{i}. \tag{15}\]
Using the \(1^{\text{st}}\) order Taylor approximation
\[D\mathcal{G}(\mathbf{x}_{t}^{i})(\mathbf{x}_{t}^{j}-\bar{\mathbf{x}}_{t})\approx\mathcal{G} (\mathbf{x}_{t}^{j})-\bar{\mathcal{G}}_{t},\quad\bar{\mathcal{G}}_{t}:=\frac{1}{N} \sum_{k=1}^{N}\mathcal{G}(\mathbf{x}_{t}^{k}),\]
one may approximate Eq. 15 in a derivative-free manner as
\[d\mathbf{x}_{t}^{i}= -\frac{1}{N}\sum_{j=1}^{N}\Big{\langle}\mathcal{G}(\mathbf{x}_{t}^{j })-\bar{\mathcal{G}}_{t},\mathcal{G}(\mathbf{x}_{t}^{i})-y\Big{\rangle}_{\Gamma} \mathbf{x}_{t}^{j}dt\] \[-\mathcal{C}(\mathcal{X}_{t})\Gamma_{0}^{-1}\mathbf{x}_{t}^{i}dt+ \sigma\sqrt{\mathcal{C}(\mathcal{X}_{t})}dW_{t}^{i}. \tag{16}\]
## 3 Reweighted Interacting Langevin Diffusion
### Continuous process analysis
Keeping the invariant measure to be exactly \(e^{-2\sigma^{-2}V(\mathbf{x})}\) is not necessary for optimization. It would be preferable to have a new process that converges faster to its invariant measure, with that invariant measure still concentrating on the global optimum as \(\sigma\to 0\).
Now we introduce an additional source term to modify Eq. (6) into
\[\frac{\partial}{\partial t}\tilde{p}_{t} =\text{div}\big{(}\tilde{p}_{t}C\nabla V+\frac{\sigma^{2}}{2}C \nabla\tilde{p}_{t}\big{)}+W\tilde{p}_{t} \tag{17}\] \[p_{t}(\mathbf{x}) =\frac{\tilde{p}_{t}(\mathbf{x})}{\int_{\bar{\mathbf{x}}\in\mathbb{R}^{d}} \tilde{p}_{t}(\tilde{\mathbf{x}})d\tilde{\mathbf{x}}} \tag{18}\]
Here we mainly consider the situation when \(C=I\) corresponding to Eq. 4, or \(C=\mathcal{C}(p_{t})\) corresponding to Eq. 12, but the analysis remains valid for all positive-definite \(C\). We assume in this paper that the function \(W\) is smooth and upper bounded.
Let us look at the role \(W\) plays in the evolution of this process. If we take \(W\) to be a function of the objective such that \(W(\mathbf{x})\) becomes larger where \(V(\mathbf{x})\) becomes smaller, then intuitively the ratio of the mass in the better-fitness region (closer to the global minimum) becomes larger; thus we expect the invariant measure to concentrate more on the global optimum.
Note that such a process (17)-(18) no longer has a gradient-flow structure and does not preserve total mass; a normalization is therefore added to keep the total mass equal to 1. Such a normalized process has been well studied as the so-called Feynman-Kac semigroup in the linear case, i.e., when \(C\) is a constant matrix. As spectral analysis is widely studied for linear operators (Kato, 1966), and since for numerical discretization \(C\) is fixed at each time step, we conduct the analysis for an arbitrary fixed matrix \(C\).
We introduce the solution operator corresponding to Eq. 17 and Eq. 18: recalling the infinitesimal generator \(\mathcal{L}_{C}^{\dagger}\), the solution in Eq. 17 can be represented by \(\tilde{p}_{t}=e^{t(\mathcal{L}_{C}^{\dagger}+W)}p_{0}\). We denote the corresponding reweighted operator, also well known as the Feynman-Kac semigroup, by \(\Phi_{\mathcal{L}_{C}+W}^{t}:\)
\[\Phi_{\mathcal{L}_{C}+W}^{t}(p_{0}):=\frac{e^{t(\mathcal{L}_{C}^{\dagger}+W)}p _{0}}{\int_{\mathbb{R}^{d}}e^{t(\mathcal{L}_{C}^{\dagger}+W)}p_{0}(\mathbf{\tilde{ x}})d\mathbf{\tilde{x}}},\]
then \(p_{t}=\Phi_{\mathcal{L}_{C}+W}^{t}(p_{0})\) is exactly the solution to Eq. 18.
The operator \(\Phi_{\mathcal{L}_{C}+W}^{t}\) is nonlinear in general, but it shares many similarities with the linear operator \(e^{t(\mathcal{L}_{C}^{\dagger}+W)}\). Specifically, the unique positive fixed point \(p_{\infty}\) of \(\Phi_{\mathcal{L}_{C}+W}^{t}\) is proportional to the principal eigenfunction of \((\mathcal{L}_{C}^{\dagger}+W)\), and the convergence rate of \(\Phi_{\mathcal{L}_{C}+W}^{t}(p_{0})\) to \(p_{\infty}\) is controlled by the spectral gap of \((\mathcal{L}_{C}+W)\). For the Feynman-Kac semigroup, one can refer to Ferre and Stoltz (2017) for the time-invariant case, Lyu et al. (2021) for the time-periodic case, and Moral (2004) for systematic details.
We are interested in the benefits the source term \(W\) can contribute. We show that, when taking \(W=\varepsilon m(V)\) for some small \(\varepsilon>0\), where \(m:\mathbb{R}\rightarrow\mathbb{R}\) is a monotonically decreasing function, this brings mainly two benefits:
* the spectral gap of the operator \(\mathcal{L}_{C}+W\) is larger than that of \(\mathcal{L}_{C}\) considered in the same functional space;
* the invariant measure of \(\Phi_{\mathcal{L}_{C}+W}^{t}\) concentrates more on the global minimum compared to the invariant measure of \(e^{t(\mathcal{L}_{C}^{\dagger})}\).
These two benefits show that, at the same noise level, process (18) can converge faster and has a more concentrated invariant measure than process (4) or (27). We further explain these benefits and give proofs in Section 4.1.
### Discrete algorithm design
Now let us use interacting particle methods (Moral and Miclo, 2000; Moral, 2013) and mean-field theory (Kac et al., 1960) to design algorithms. We introduce the so-called Reweighted Interacting Langevin Diffusion algorithm by approximating \(p_{t}\) with a population of particles, and discretizing the evolution in time by an operator-splitting technique and a forward-Euler scheme.
First, we convert Eq. 17 and 18 into a discrete-time version: let \(\tau>0\) be a fixed timestep:
\[p_{(n+1)\tau}=\frac{e^{\tau(\mathcal{L}_{C}^{\dagger}+W)}p_{n\tau}}{\int_{ \mathbb{R}^{d}}e^{\tau(\mathcal{L}_{C}^{\dagger}+W)}p_{n\tau}(\mathbf{\tilde{x}} )d\mathbf{\tilde{x}}}\]
then we use the operator splitting technique4: approximate \(e^{\tau(\mathcal{L}_{C}^{\dagger}+W)}\) by \(e^{\tau W}\circ e^{\tau\mathcal{L}_{C}^{\dagger}}\) with splitting error \(O(\tau^{2})\).
Footnote 4: Note that higher-order splitting techniques can be applied, such as Strang splitting (Strang, 1968) \(e^{\frac{\tau}{2}W}\circ e^{\tau\mathcal{L}_{C}^{\dagger}}\circ e^{\frac{\tau}{2}W}\) with splitting error \(O(\tau^{3})\). However, approximation errors are still induced in later steps, so we only choose the simplest one here.
Let us now approximate \(e^{\tau\mathcal{L}_{C}^{\dagger}}p_{n\tau}\). Suppose \(p_{n\tau}\) is approximated by a weighted empirical measure \(\hat{p}_{n\tau}(\mathbf{x})=\sum_{i=1}^{N}w_{n}^{i}\delta_{\mathbf{x}_{n}^{i}}(\mathbf{x})\) generated from a set of sample-weight pairs \(\{\mathbf{x}_{n}^{i},w_{n}^{i}\}_{i=1}^{N}\). We use the simplest Euler-Maruyama (Faniran, 2015) approximation,
\[e^{\tau\mathcal{L}_{C}^{\dagger}}\hat{p}_{n\tau}(\mathbf{x})\approx \frac{1}{N}\sum_{i=1}^{N}w_{n}^{i}\delta_{\mathbf{x}_{n+1}^{i}}(\mathbf{x}), \tag{19}\] \[\mathbf{x}_{n+1}^{i}=\mathbf{x}_{n}^{i}-C\nabla V(\mathbf{x}_{n}^{i})\tau+ \sqrt{\tau\sigma^{2}C}\xi_{n}^{i}\] (20) \[e^{\tau W}e^{\tau\mathcal{L}_{C}^{\dagger}}\hat{p}_{n\tau}(\mathbf{x })\approx\frac{1}{N}\sum_{i=1}^{N}w_{n}^{i}e^{\tau W(\mathbf{x}_{n+1}^{i})} \delta_{\mathbf{x}_{n+1}^{i}}(\mathbf{x}), \tag{21}\]
Here \(C\) can be \(I\) or the covariance matrix of current step: if we take the matrix \(C\) to be the covariance matrix, it is computed as (denote \(\Lambda=\text{diag}(w_{n}^{i}),\quad\bar{\mathbf{x}}_{n}=\frac{1}{N}\sum_{i=1}^{N}w _{n}^{i}\mathbf{x}_{n}^{i}\))
\[C=\mathcal{C}(\hat{p}_{n\tau})=\mathbf{X}_{n}\Lambda\mathbf{X}_{n}^{T}, \sqrt{C}=\mathbf{X}_{n}\Lambda^{\frac{1}{2}}, \tag{22}\] \[\mathbf{X}_{n}:=\frac{1}{\sqrt{N}}(\mathbf{x}_{n}^{1}-\bar{\mathbf{x}}_{n}, \cdots,\mathbf{x}_{n}^{N}-\bar{\mathbf{x}}_{n}). \tag{23}\]
Then, after normalizing, we get the natural approximation
\[p_{(n+1)\tau}(\mathbf{x})\approx\sum_{i=1}^{N}w_{n+1}^{i}\delta_{\bm {x}_{n+1}^{i}}(\mathbf{x}), \tag{24}\] \[w_{n+1}^{i}=\frac{w_{n}^{i}e^{\tau W(\mathbf{x}_{n+1}^{i})}}{\sum_{j =1}^{N}w_{n}^{j}e^{\tau W(\mathbf{x}_{n+1}^{j})}} \tag{25}\]
Note that the elements of \(\{w_{n}^{i}\}_{i=1}^{N}\) may become polarized, so we use a resampling technique for a better approximation: if \(\frac{\max_{i}w_{n}^{i}}{\min_{i}w_{n}^{i}}\) reaches a threshold, we resample the replicas \(\{\mathbf{x}_{n}^{i}\}_{i=1}^{N}\) according to the multinomial distribution associated with \(\{w_{n}^{i}\}_{i=1}^{N}\), which defines a new set of replicas \(\{\tilde{\mathbf{x}}_{n}^{i}\}_{i=1}^{N}\) and the empirical distribution
\[\tilde{p}_{n\tau}(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\delta_{\tilde{\mathbf{x}}_{n}^{i }}(\mathbf{x}).\]
Then we replace \(\hat{p}_{n\tau}\) by \(\tilde{p}_{n\tau}\) in Eq. 21 and conduct further computation.
When \(V\) is a least-squares functional with Tikhonov-Phillips regularization as in Eq. 14, we can follow the same analysis, approximating \(\mathcal{C}(\hat{p}_{n\tau})\nabla V(\mathbf{x}_{n}^{i})\) in a similar derivative-free manner as in Eq. 16: ( \(\tilde{\mathcal{G}}_{n}:=\frac{1}{N}\sum_{k=1}^{N}w_{n}^{k}\mathcal{G}(\mathbf{x}_{n}^{k})\) )
\[\mathcal{C}(\hat{p}_{n\tau})\nabla V(\mathbf{x}_{n}^{i}) \tag{26}\] \[\approx \frac{1}{N}\sum_{j=1}^{N}w_{n}^{j}\left\langle\mathcal{G}(\mathbf{x}_{n}^{j})-\tilde{\mathcal{G}}_{n},\mathcal{G}(\mathbf{x}_{n}^{i})-y\right\rangle_{\Gamma}\mathbf{x}_{n}^{j}+\mathcal{C}(\hat{p}_{n\tau})\Gamma_{0}^{-1}\mathbf{x}_{n}^{i}.\]
We now summarize the procedure in Algorithm 1. Note that we fix the population size \(N\), stepsize \(\tau\), and noise level \(\sigma\) for simplicity, but these can be adjusted dynamically for faster convergence.
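A compact sketch of Algorithm 1 is given below, with \(C=I\), a fixed step size, and our own choice of resampling threshold; the covariance-preconditioned and derivative-free variants are omitted for brevity. Names such as `rild` and the Rosenbrock-type usage at the end are illustrative assumptions, not part of the original specification.

```python
import numpy as np

def rild(grad_V, W, x0, N=100, tau=1e-2, sigma=np.sqrt(2.0),
         n_iter=1000, resample_ratio=1e3, seed=0):
    """Reweighted Interacting Langevin Diffusion: simplified sketch of Alg. 1 with C = I."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    X = np.asarray(x0, float) + 0.3 * rng.standard_normal((N, d))   # initial ensemble
    w = np.full(N, 1.0 / N)                                          # uniform initial weights
    for _ in range(n_iter):
        # Mutation: Euler-Maruyama step, Eq. (20) with C = I.
        G = np.array([grad_V(x) for x in X])
        X = X - tau * G + sigma * np.sqrt(tau) * rng.standard_normal((N, d))
        # Reweighting by the source term, Eq. (25).
        w = w * np.exp(tau * np.array([W(x) for x in X]))
        w = w / w.sum()
        # Selection: multinomial resampling when the weights polarize.
        if w.min() == 0.0 or w.max() / w.min() > resample_ratio:
            X = X[rng.choice(N, size=N, p=w)]
            w = np.full(N, 1.0 / N)
    return X, w

# Illustrative usage: minimize a 10-d Rosenbrock-type V with W = -V (finite-difference gradient).
V = lambda x: np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)
grad_V = lambda x, h=1e-5: np.array(
    [(V(x + h * e) - V(x - h * e)) / (2 * h) for e in np.eye(len(x))])
X, w = rild(grad_V, W=lambda x: -V(x), x0=np.full(10, 2.0), n_iter=200)
print(X[np.argmax(w)])  # ensemble member carrying the largest weight
```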
Note that our method can also be interpreted as a mutation-selection genetic particle algorithm with MCMC mutations: the update rule for \(\mathbf{x}_{n}^{i}\to\mathbf{x}_{n+1}^{i}\) can be regarded as mutation, and the resampling step can be regarded as selection. This type of algorithm acts like a bridge connecting Genetic Algorithms and PDEs, which allows for better convergence analysis.
### Gradient-Free variants
In practice, exact gradients are hard to obtain for many optimization problems: gradient information may be unavailable or computationally expensive. We thus suggest a gradient-free variant, corresponding to a process with only the diffusion and source terms. We will prove that such a process converges exponentially to its invariant measure, and that this invariant measure concentrates on the global minimum as \(\sigma\to 0\).
We introduce the formula of the modified process as
\[\frac{\partial}{\partial t}\tilde{p}_{t} =\text{div}\big{(}\frac{\sigma^{2}}{2}C\nabla\tilde{p}_{t}\big{)} +W\tilde{p}_{t} \tag{27}\] \[p_{t}(\mathbf{x}) =\frac{\tilde{p}_{t}(\mathbf{x})}{\int_{\tilde{\mathbf{x}}\in\mathbb{R}^ {d}}\tilde{p}_{t}(\tilde{\mathbf{x}})d\tilde{\mathbf{x}}} \tag{28}\]
Note that we only delete the term related to \(\nabla V\); the term \(W\) can still be chosen to depend on \(V\). We will show in Thm. 4.6 that the system finally converges to a distribution concentrating on the maximum of \(W\).
The discrete algorithm is designed similarly to Algorithm 1, simply ignoring the gradient term \(C\nabla V\). This can be seen by taking \(V(\mathbf{x})\equiv\text{const}\).
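Under the same caveats as the `rild` sketch above, the gradient-free variant simply drops the drift; for instance, one may pass a zero "gradient" so that the particles evolve by diffusion, reweighting by \(W\), and resampling alone.

```python
# Gradient-free variant: same update as the rild sketch above, with the drift term removed
# (assumes the rild and V definitions from the previous sketch).
X, w = rild(grad_V=lambda x: np.zeros_like(x), W=lambda x: -V(x),
            x0=np.full(10, 2.0), n_iter=500)
```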
## 4 Theoretical properties
In this section, we state the main theoretical results associated with our RILD algorithm 1. The proofs are offered in Appendix A.
### Spectral Gap enhancement of the reweighting modification
Now we analyze how \(W\) helps in improving the convergence rate, as well as the sharpness of the invariant measure.
First, let us restrict the comparison to \(L^{2}(\nu)\), where \(\nu(x)=e^{-\frac{2V(x)}{\sigma^{2}}}\). The reason for choosing this space is mainly the following property.
**Lemma 4.1**.: _For any positive definite matrix \(C\), \(L_{C}+W\) and \(L_{C}\) are self-adjoint over \(L^{2}(\nu)\)._
To continue our analysis, we need to make the following assumption on \(V\) and \(W\), which is necessary for \(L_{C}+W\) and \(L_{C}\) to have a discrete and upper-bounded spectrum (see Pankov (2001)).
**Assumption 4.2**.: _The functions \(V\) and \(W\) are assumed to satisfy:_
\[\lim_{|\mathbf{x}|\to\infty}V=+\infty,W<A\text{ for a constant }A\in\mathbb{R},\]
_and_
\[\lim_{|\mathbf{x}|\to\infty}\frac{|\nabla V|^{2}}{2\sigma^{2}}-\frac{\Delta V}{2} -W=\lim_{|\mathbf{x}|\to\infty}\frac{|\nabla V|^{2}}{2\sigma^{2}}-\frac{\Delta V }{2}=+\infty\]
Next, we need to prove that the Feynman-Kac semigroup \(\Phi_{\mathcal{L}_{C}+W}^{t}\) maps any initial density \(p_{0}\in L^{2}(\nu)\) to a limiting density. This result is stated as follows:
**Theorem 4.3**.: _Under Asmp. 4.2, for any \(C\) that is positive-definite, there exists a principal eigenvalue \(\lambda_{0}\) of \(\mathcal{L}_{C}+W\) over \(L^{2}(\nu)\) with the corresponding normalized eigenfunction \(\phi(\mathbf{x})\). Furthermore, for any positive density \(p_{0}\), the normalized probability \(p_{t}:=\Phi_{\mathcal{L}_{C}+W}^{t}(p_{0})\) has a limit equal to \(\phi(\mathbf{x})\nu(\mathbf{x})\), that is, \(\lim_{t\to\infty}||p_{t}/\nu-\phi||_{L^{2}(\nu)}=0\)._
**Theorem 4.4**.: _Under the same assumptions as in Thm. 4.3, the convergence rate of the system (17), (18) can be evaluated by the spectral gap of \(\mathcal{L}_{C}+W\) over \(L^{2}(\nu)\): let \(\lambda_{0}\) and \(\lambda_{1}\) be the first two eigenvalues of \(\mathcal{L}_{C}+W\); then \(||p_{t}/\nu-\phi||_{L^{2}(\nu)}\leq C||p_{0}/\nu-\phi||_{L^{2}(\nu)}e^{-(\lambda_{0}-\lambda_{1})t}\)._
Next, let us analyze how the convergence speed of \(\Phi_{\mathcal{L}_{C}+W}^{t}p_{0}\) is improved compared to the original process \(e^{t\mathcal{L}_{C}}(p_{0})\); this is heavily related to the spectral gap. Exactly analyzing the spectral gap of a differential operator is hard in general. Many existing analyses of the spectral gap only consider the simplest case, in which \(\nabla V\) is a constant matrix. In contrast to these existing techniques, we analyze the contribution of \(W\) to the spectral gap by perturbation theory: we will prove that the new process has a better convergence rate than the old one if \(W=\varepsilon m(V)\) for a small \(\varepsilon>0\), where \(m:\mathbb{R}\rightarrow\mathbb{R}\) is a monotonically decreasing function.
**Theorem 4.5**.: _Suppose in addition that \(V\) satisfies the same conditions as in Nier (2004). If we take \(W=\varepsilon m(V)\) and consider the space \(L^{2}(\nu)\), where \(\nu(x)=e^{-\frac{2V(x)}{\sigma^{2}}}\), then for \(\sigma\) small enough, the spectral gap of \(\mathcal{L}_{C}+\varepsilon m(V)\) is locally increasing in \(\varepsilon\) for small enough \(\varepsilon>0\). Besides, the principal eigenfunction of \(\mathcal{L}_{C}+\varepsilon m(V)\) concentrates more on the global minimum than the principal eigenfunction of \(\mathcal{L}_{C}\) for small enough \(\varepsilon>0\)._
### Convergence of the Gradient-Free variants
In the gradient-free situation, the original operator \(\mathcal{D}_{C}:=\text{div}\big{(}\frac{\sigma^{2}}{2}C\nabla\cdot\big{)}\) has a trivial invariant measure, the uniform distribution, so no optimization property can be expected when \(W\) is not included. We thus turn to another kind of result, showing that \(\Phi^{t}_{\mathcal{D}_{C}+W}p_{0}\) converges exponentially to a distribution that concentrates on a neighborhood of the global minimum, and that, as \(\sigma\to 0\), the invariant distribution becomes more and more concentrated on the global minimum. We now state the result as follows:
**Theorem 4.6**.: _Consider the space \(L^{2}\) and assume \(W\) is bounded on \(\mathbb{R}^{d}\); then \(\Phi^{t}_{\mathcal{D}_{C}+W}p_{0}\) converges exponentially to the principal normalized eigenfunction \(\mu_{\sigma}(\mathbf{x})\) of the operator \(\mathcal{D}_{C}+W\). In addition, for any smooth, compactly supported \(f\in L^{2}\), \(\lim_{\sigma\to 0}\langle\mu_{\sigma}(\mathbf{x}),f\rangle=f(\mathbf{x}^{*})\), where \(\mathbf{x}^{*}\) is the global maximum of \(W\); that is, \(\mu_{\sigma}(\mathbf{x})\) converges to \(\delta_{\mathbf{x}^{*}}(\mathbf{x})\) in the sense of distributions._
## 5 Numerical experiments
In this section, we first present a Fourier spectral analysis to verify our theoretical results for the proposed method. Then, we conduct two inverse-problem tests, showing the positive effect of introducing the reweighting/resampling procedure into a sampling method. Finally, we test an optimization task, showing that the resampling technique can help escape from local minima.
### Fourier spectral method analysis for enhancing spectral gap and concentrating invariant measure
Let us consider a 1-d periodic problem. We first verify our results in Thm. 4.5. Let \(x\in[0,1)\);\({}^{5}\) we use the Fourier spectral method (Shen et al., 2011) to discretize the differential operators \(\mathcal{L}\) and \(\mathcal{L}-\varepsilon V\), where
Footnote 5: Although our analysis considers the space \(\mathbb{R}\), the results can be easily transferred to the periodic case, with Asmp. 4.2 removed.
\[\mathcal{L}f(x):=-\frac{d}{dx}V(x)\frac{d}{dx}f(x)+\frac{d^{2}}{dx^{2}}f(x) \tag{29}\]
for any periodic \(f\in C^{\infty}([0,1))\), and
\[V(x)=\cos(9\pi x)-\cos(11\pi x) \tag{30}\]
with boundary smoothly modified.
In Fig. 1(a), we plot the graphs of the principal eigenfunction \(\nu\) of the operator \(\mathcal{L}\), the principal eigenfunction \(\mu\) of the operator \(\mathcal{L}-0.1V\), the function \(V\), and \(\frac{\mu}{\nu}\), respectively. As expected, \(\mu\) concentrates more on the global minimum of \(V\) than \(\nu\). In Fig. 1(b), we can see \(\lambda_{0}(\varepsilon)-\lambda_{1}(\varepsilon)\) increasing as \(\varepsilon\) increases, showing the enhancement of the spectral gap. This experiment gives a visual explanation of Thm. 4.5: specifically, the eigenfunction of \(\mathcal{L}\) is the Gibbs measure \(\nu(\mathbf{x})=\exp(-2V(\mathbf{x})/\sigma^{2})\), which gets larger where \(V\) gets smaller; the eigenfunction \(\mu\) of \(\mathcal{L}-0.1V\) clearly has more mass than \(\nu\) concentrated near the global minimum of \(V\), and the staircase-like graph of the quotient \(\mu/\nu\) gives a direct explanation of (42).
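For readers wishing to reproduce the gap computation, a simple periodic finite-difference stand-in for the Fourier spectral discretization is sketched below; the grid size and \(\varepsilon\) are our own choices, and the boundary smoothing of \(V\) is ignored in this sketch.

```python
import numpy as np

n, eps = 512, 0.1
h = 1.0 / n
x = np.arange(n) * h
V = np.cos(9 * np.pi * x) - np.cos(11 * np.pi * x)     # Eq. (30), without boundary smoothing

I = np.eye(n)
D1 = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / (2 * h)       # periodic first derivative
D2 = (np.roll(I, -1, axis=0) - 2 * I + np.roll(I, 1, axis=0)) / h**2  # periodic second derivative

L = -np.diag(D1 @ V) @ D1 + D2        # Eq. (29): L f = -V' f' + f''
A = L - eps * np.diag(V)              # reweighted operator L - eps * V

for name, M in [("L", L), ("L - eps*V", A)]:
    ev = np.sort(np.linalg.eigvals(M).real)[::-1]
    print(name, "principal eigenvalue:", ev[0], "spectral gap:", ev[0] - ev[1])
```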
Next, let us verify our result in Thm. 4.6. We use the same space \(x\in[0,1)\) and the source term \(W(\mathbf{x})=-V(\mathbf{x})\) where \(V(x)\) is the same as in (30), and test the analytical property of the following operator
\[(\mathcal{D}_{\sigma}+W)f(x):=\frac{\sigma^{2}}{2}\frac{d^{2}}{dx^{2}}f(x)+W( x)f(x) \tag{31}\]
for any periodic \(f\in C^{\infty}([0,1))\). We plot in Fig. 2(a) the principal eigenfunction, i.e., the invariant distribution density \(\mu_{\sigma}(x)\) of \(\mathcal{D}_{\sigma}+W\), for different \(\sigma\). We also calculate the mass near the global minimum: we take the interval \(I=[0.44,0.68]\) and calculate \(\int_{I}\mu_{\sigma}(x)dx\). The result is plotted in Fig. 2(b), showing the concentrating tendency as \(\sigma\to 0\). This coincides with the result in Thm. 4.6: the invariant measure concentrates more and more on the global maximum of \(W\) as \(\sigma\to 0\).
### Numerical tests for derivative-free inverse problem solving and sampling
In this section, we design numerical tests in the field of inverse problem solving. We compare our RILD algorithm with EKS (Garbuno-Inigo et al., 2019) and EKI (Kovachki and Stuart, 2018). These problems can be cast as the optimization of a least-squares functional with Tikhonov-Phillips regularization (14); thus a derivative-free approximation to the updating rule ((16) for EKS and EKI, or (26) for RILD) can be used to design derivative-free schemes.
We first try to solve a low-dimensional inverse problem. The numerical experiment considered here is the example originally presented by Ernst et al. (2015), and also used in Herty and Visconti (2018). We compare with the result from Garbuno-Inigo et al. (2019), and the experimental settings are exactly the same. The forward map is given by the solution of a one-dimensional elliptic boundary value problem as defined in Garbuno-Inigo et al. (2019),
\[-\frac{d}{du}\left(\exp(x_{1})\frac{d}{du}f(u)\right)=1,u\in[0,1] \tag{32}\]
with \(f(0)=0,f(1)=x_{2}\). The explicit solution is given by
\[f(u)=x_{2}u+\exp(-x_{1})\left(-\frac{u^{2}}{2}+\frac{u}{2}\right). \tag{33}\]
Thus we define the forward map
\[\mathcal{G}(\mathbf{x})=\left(f(u_{1}),f(u_{2})\right)^{T}. \tag{34}\]
Here \(\mathbf{x}=(x_{1},x_{2})^{T}\) is the constant vector we want to recover, and we have noisy measurements \(\mathbf{y}=(27.5,79.7)^{T}\) of \(f(\cdot)\) at locations \(u_{1}=0.25,u_{2}=0.75\). This can be solved by minimizing the least-squares functional defined in Eq. 14. We assume observation noise \(\Gamma=0.1^{2}I_{2}\), prior matrix \(\Gamma_{0}=10^{2}I_{2}\), and an initial ensemble drawn from \(N(0,1)\times U(90,110)\). The ensemble size is \(N=10^{3}\). We fix \(\sigma=\sqrt{2}\). The stepsize \(\tau\) is updated adaptively as in Garbuno-Inigo et al. (2019). We take \(W(\mathbf{x})=-\|\mathcal{G}(\mathbf{x})-\mathbf{y}\|_{\Gamma}^{2}\).
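For concreteness, a minimal sketch of this forward map and the resulting source term \(W\) (assuming only NumPy) is given below; the measurement locations, data, and noise level are those quoted above, and the explicit solution is the one in (33).

```python
import numpy as np

# Forward map of Eqs. (32)-(34) and the source term W used for reweighting.
def f(u, x1, x2):
    # explicit solution of the elliptic BVP with f(0) = 0, f(1) = x2
    return x2 * u + np.exp(-x1) * (-u**2 / 2 + u / 2)

def G(x):
    return np.array([f(0.25, x[0], x[1]), f(0.75, x[0], x[1])])

y = np.array([27.5, 79.7])
Gamma_inv = np.linalg.inv(0.1**2 * np.eye(2))

def W(x):
    r = G(x) - y
    return -float(r @ Gamma_inv @ r)        # W(x) = -||G(x) - y||_Gamma^2
```

Each ensemble member can then be weighted using \(W\) before resampling; the precise weighting rule is the one given in Alg. 1.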
We compare our RILD algorithm (Alg. 1) with the EKS and EKI algorithms. The key difference between RILD and EKS in this setting is the use of reweighting/resampling. The results are plotted in Figs. 3(a), 3(b), and 3(c). From the figure, we can see that our RILD algorithm converges much faster than the EKI or EKS algorithms with the same hyperparameters. Since the problem is posed as posterior sampling, the RILD ensemble stops shrinking toward the minimum point after some iterations unless one decreases the diffusion parameter \(\sigma\). One can expect that, with a decay schedule for \(\sigma\), the RILD algorithm would perform much better than EKS or EKI for optimization.
We also test a high-dimensional case. Specifically, we define the map \(\mathcal{G}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{2d-2}\),
\[\mathcal{G}(\mathbf{x})=\left(10(x_{2}-x_{1}^{2}),\cdots,10(x_{d}-x_{d-1}^{2}),x_{1},\cdots,x_{d-1}\right)^{T},\]
\(\mathbf{y}=(0,\cdots,0,1,\cdots,1)^{T}\), where \(0\) is repeated \(d-1\) times and \(1\) is repeated \(d-1\) times. One can verify that
\[\|\mathcal{G}(\mathbf{x})-\mathbf{y}\|^{2}=\sum_{i=1}^{d-1}\left(100(x_{i+1}-x_{i}^{2})^{ 2}+(x_{i}-1)^{2}\right)\]
is exactly the \(\mathit{Rosenbrock}\) function. We choose \(d=100\), observation noise matrix \(\Gamma=0.1^{2}I_{198}\), prior covariance matrix \(\Gamma_{0}=10^{2}I_{100}\), and an initial ensemble drawn from \(N(2,0.3^{2}I_{100})\). The global solution of \(\mathcal{G}(\mathbf{x})=\mathbf{y}\) is \(\mathbf{x}=(1,\cdots,1)^{T}\). The ensemble size is fixed to \(N=10^{3}\). We take \(\sigma=\sqrt{2}\), and the stepsize \(\tau\) is updated adaptively as before. For the RILD algorithm, we choose \(W(\mathbf{x})=-5\times 10^{-3}\|\mathcal{G}(\mathbf{x})-\mathbf{y}\|^{2}\). As in the previous test, RILD converges much faster than EKI or EKS in this high-dimensional sampling task, and thus performs better in optimization settings.
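The following sketch defines this map and numerically checks that \(\|\mathcal{G}(\mathbf{x})-\mathbf{y}\|^{2}\) indeed recovers the \(d\)-dimensional Rosenbrock function; it assumes only NumPy.

```python
import numpy as np

# High-dimensional map G : R^d -> R^{2d-2} and a check that ||G(x) - y||^2 equals
# the d-dimensional Rosenbrock function.
d = 100

def G(x):
    return np.concatenate([10.0 * (x[1:] - x[:-1]**2), x[:-1]])

y = np.concatenate([np.zeros(d - 1), np.ones(d - 1)])

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

x = np.random.randn(d)
assert np.isclose(np.sum((G(x) - y)**2), rosenbrock(x))
```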
### Numerical tests for highly nonconvex high-dimensional optimization
We now test our RILD algorithm in a highly nonconvex, high-dimensional setting, using the 100-dimensional \(\mathit{Ackley}\) function, defined as follows:
\[V(\mathbf{x})=-a\,e^{-b\sqrt{\frac{1}{d}\sum_{i=1}^{d}x_{i}^{2}}}-e^{\frac{1}{d}\sum_{i=1}^{d}\cos(cx_{i})}+a+e, \tag{35}\]
where \(\mathbf{x}=(x_{1},\cdots,x_{d})^{T}\), \(d=100\), \(a=20\), \(b=0.2\), \(c=2\pi\). Since the difficulty mainly arises from the numerous local minima, the covariance modification designed for ill-posed problems is not suitable here, so we simply take \(C=I\).
We compare our RILD algorithm (Alg. 1) with the classical Gradient Langevin Dynamics (GLD) algorithm. GLD discretizes a single path of the Langevin dynamics:
\[\mathbf{x}_{n+1}=\mathbf{x}_{n}-\nabla V(\mathbf{x}_{n})\tau+\sqrt{\tau\sigma^{2}}\xi_{n}.\]
The difference between RILD and GLD is that RILD maintains an ensemble of size \(N\) while GLD maintains only a single individual; at each step, RILD calculates a weight for each individual and then resamples the ensemble according to these weights (see Alg. 1). We now test whether RILD is better at escaping local minima than GLD.
For RILD, we take ensemble size \(N=50\) and randomly draw the initial ensemble6 from \(N(0,30^{2}I_{100})\). For GLD, the initial point is randomly chosen from the initial ensemble of RILD. We test a wide range of stepsizes \(\tau\in[2,32]\) and noise levels \(\sigma\in[1,16]\); for each fixed \(\tau\) and \(\sigma\), we repeat 10 trials to calculate the pass rate: a trial is passed if the RILD or GLD algorithm finds a point \(\mathbf{x}\) with \(V(\mathbf{x})<17\) within \(5\times 10^{4}\) evaluations7. All trials in all hyperparameter settings begin with the same initial ensemble. In Fig. 5(a), one can see that RILD finds the true descent directions over a wide range of hyperparameter settings, while GLD fails in all tested settings. We also tested GA and PSO under the same initial condition with different hyperparameters, and report the best search result in Fig. 5(b) together with RILD and GLD.
Footnote 6: Such an initialization makes finding the global minimum difficult, as the first term in the \(\mathit{Ackley}\) function is quickly dominated when \(\mathbf{x}\) moves away from the origin.
Footnote 7: Once a point with value smaller than 17 is found, the remaining task is trivial, as the first term in the \(\mathit{Ackley}\) function becomes dominant.
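A minimal sketch of one RILD-style iteration on this objective is given below, using the settings quoted above. The finite-difference gradient and the weight form \(\exp(\tau W)\) are illustrative assumptions made for the sketch; the exact reweighting rule is the one given in Alg. 1.

```python
import numpy as np

# Ackley objective (35) and one Langevin step followed by reweighting/resampling.
a, b, c, d = 20.0, 0.2, 2 * np.pi, 100
tau, sigma, N = 10.0, 5.0, 50

def ackley(x):
    return (-a * np.exp(-b * np.sqrt(np.mean(x**2)))
            - np.exp(np.mean(np.cos(c * x))) + a + np.e)

def grad_ackley(x, eps=1e-6):
    g = np.zeros_like(x)                       # simple finite-difference gradient
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (ackley(x + e) - ackley(x - e)) / (2 * eps)
    return g

def langevin_step(X):
    drift = np.array([grad_ackley(x) for x in X])
    return X - tau * drift + np.sqrt(tau * sigma**2) * np.random.randn(*X.shape)

def resample(X):
    W = -np.array([ackley(x) for x in X])      # source term W = -V
    w = np.exp(tau * (W - W.max()))            # assumed weight form, stabilized
    w /= w.sum()
    return X[np.random.choice(N, size=N, p=w)]

ensemble = np.random.randn(N, d) * 30.0        # initial ensemble ~ N(0, 30^2 I)
ensemble = resample(langevin_step(ensemble))   # one RILD-style iteration
```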
## 6 Conclusion
In this work, we have demonstrated a methodology for accelerating Langevin-dynamics-based algorithms through the addition of a source term \(W\) and the use of a reweighting/resampling technique: the RILD algorithm. Our algorithm and analyses shed some light on combining gradient algorithms and genetic algorithms using Partial Differential Equations (PDEs) with provable guarantees.
In the future, we will combine the reweighting technique with higher-order optimization schemes such as momentum-accelerated gradient methods. We will also conduct a finer convergence analysis with finitely many particles, which is arguably more important since asymptotic results are only suitable for sufficiently large ensembles. We expect these studies to bring new insights for designing numerical algorithms.
Figure 4: Comparison between the RILD, EKS, and EKI algorithms. 4(a) shows the mean loss vs. iterations, 4(b) the ensembles at the \(150^{th}\) iteration, and 4(c) the ensembles at the \(400^{th}\) iteration.
Figure 5: 5(a): pass rates heat map for RILD(left) and GLD(right). 5(b): decay graph for PSO, GA, RILD and GLD where \(\tau=10\), \(\sigma=5\) for RILD and GLD.
Figure 3: The convergence comparison between RILD, EKS and EKI algorithms. 3(a) is the mean loss versus iterations, 3(b) is the ensembles at \(15^{th}\) iteration, 3(c) is the ensembles at \(30^{th}\) iteration. |
2308.01451 | Identifiability in Functional Connectivity May Unintentionally Inflate
Prediction Results | Functional magnetic resonance (fMRI) is an invaluable tool in studying
cognitive processes in vivo. Many recent studies use functional connectivity
(FC), partial correlation connectivity (PC), or fMRI-derived brain networks to
predict phenotypes with results that sometimes cannot be replicated. At the
same time, FC can be used to identify the same subject from different scans
with great accuracy. In this paper, we show a method by which one can
unknowingly inflate classification results from 61% accuracy to 86% accuracy by
treating longitudinal or contemporaneous scans of the same subject as
independent data points. Using the UK Biobank dataset, we find one can achieve
the same level of variance explained with 50 training subjects by exploiting
identifiability as with 10,000 training subjects without double-dipping. We
replicate this effect in four different datasets: the UK Biobank (UKB), the
Philadelphia Neurodevelopmental Cohort (PNC), the Bipolar and Schizophrenia
Network for Intermediate Phenotypes (BSNIP), and an OpenNeuro Fibromyalgia
dataset (Fibro). The unintentional improvement ranges between 7% and 25% in the
four datasets. Additionally, we find that by using dynamic functional
connectivity (dFC), one can apply this method even when one is limited to a
single scan per subject. One major problem is that features such as ROIs or
connectivities that are reported alongside inflated results may confuse future
work. This article hopes to shed light on how even minor pipeline anomalies may
lead to unexpectedly superb results. | Anton Orlichenko, Gang Qu, Kuan-Jui Su, Anqi Liu, Hui Shen, Hong-Wen Deng, Yu-Ping Wang | 2023-08-02T21:59:42Z | http://arxiv.org/abs/2308.01451v1 | # Identifiability in Functional Connectivity May Unintentionally Inflate Prediction Results
###### Abstract
Functional magnetic resonance (fMRI) is an invaluable tool in studying cognitive processes in vivo. Many recent studies use functional connectivity (FC), partial correlation connectivity (PC), or fMRI-derived brain networks to predict phenotypes with results that sometimes cannot be replicated. At the same time, FC can be used to identify the same subject from different scans with great accuracy. In this paper, we show a method by which one can unknowingly inflate classification results from 61% accuracy to 86% accuracy by treating longitudinal or contemporaneous scans of the same subject as independent data points. Using the UK Biobank dataset, we find one can achieve the same level of variance explained with 50 training subjects by exploiting identifiability as with 10,000 training subjects without double-dipping. We replicate this effect in four different datasets: the UK Biobank (UKB), the Philadelphia Neurodevelopmental Cohort (PNC), the Bipolar and Schizophrenia Network for Intermediate Phenotypes (BSNIP), and an OpenNeuro Fibromyalgia dataset (Fibro). The unintentional improvement ranges between 7% and 25% in the four datasets. Additionally, we find that by using dynamic functional connectivity (dFC), one can apply this method even when one is limited to a single scan per subject. One major problem is that features such as ROIs or connectivities that are reported alongside inflated results may confuse future work. This article hopes to shed light on how even minor pipeline anomalies may lead to unexpectedly superb results.
fMRI, functional connectivity, identifiability, fingerprinting, replicability, UKB, PNC, BSNIP, OpenNeuro

Further author information: (Send correspondence to Anton Orlichenko)
Anton Orlichenko: E-mail: [email protected]
## 1 Introduction
Functional magnetic resonance is a non-invasive imaging modality that uses the blood oxygen level dependent (BOLD) signal to infer the level of neural activity in different regions of the brain [1]. fMRI has been used to localize visual processing [2], attention [34], emotional processing [56], and language [8] to specific locations in the cortex. It has also been used to identify hemispheric dominance for, e.g., language [9]. Functional connectivity is the Pearson correlation between the time-varying BOLD signal of different regions of the brain [10]. It has recently been used to predict age [111, 112], sex [131, 132], general fluid intelligence [153, 154], pre-clinical Alzheimer's disease [16], and schizophrenia [172]. Classification based on 4D fMRI images, not FC, is also an active area of research [19]. Naturally, the ability to predict cognition-related endophenotypes or pre-clinical disease status is an exciting avenue for translational applications.
Although fMRI offers an unmatched ability to observe neural activity in vivo in human subjects, there are two questions that must be addressed when interpreting the results of predictive studies. First, are these studies meant to establish a groundwork for a clinical system such as, e.g., an AI-based breast cancer screening tool [20]? If so, then these studies must be validated in a randomized trial with thousands of subjects [21]. By contrast, fewer than 1% of fMRI studies in 2017 and 2018 enrolled more than 100 subjects, with most recruiting fewer than 30 [22]. A large number of subjects is needed partly because, in the past, fMRI has faced several replicability crises [23]. For example, Bennett et al. (2010) made the case that multiple comparison correction was indispensable in
fMRI by revealing emotion-associated voxels in a dead salmon [24]. It is suspicious that fMRI-based predictions of schizophrenia status achieve above 90% accuracy with fewer than 100 training subjects [17, 18], whereas genome-wide association studies find single-nucleotide polymorphisms (SNPs) explain only 23% of schizophrenia variance [25], and prediction studies based on SNPs in the UK Biobank report a maximum AUC of 0.71 [26]. In fact, recent studies [27] and many recent posters at OHBM 2023 give a classification accuracy for schizophrenia diagnosis using FC (often times cited as the best metric) of 70-80% [28, 29, 30].
If these pipelines are not meant to be introduced clinically, are they meant to provide mechanistic insights into human cognition? This is more likely to be the case, but there is sometimes a very loose interpretation of what FC actually is. For example, the UKB description of fMRI processing [31] makes the point that, compared to PC, FC "has various practical and interpretational disadvantages including an inability to differentiate between directly connected nodes and nodes that are only connected via an intermediate node." [32]. Many recent studies also implicitly assume that connectivity in the context of fMRI implies physical connections [33].
In reality, there is no signal traveling from node to node: fMRI essentially measures blood flow [1], and any correlation-based metric is only looking at how much the bandpass-filtered BOLD signals between two regions are in sync [10]. Thus, at first order, fMRI is not measuring the electrical activity of neurons or the release of neurotransmitters, although reframing the problem as connectivity or graph edges may be a useful construct [34]. On the other hand, fMRI has identified several robust characteristics of BOLD signal. In particular, it has been shown that, on average, FC intensity decreases progressing from children to young adults [35], and that females have greater relative intra-default mode network (DMN) connectivity compared to males [36, 37]. FC has also shown some ability to predict race, even between different datasets [38], and it has been used in mechanistic studies of aggression related to olfactory stimulus [39].
One thing that fMRI-based FC is very good at is identifying the same subject from different scans, referred to as fingerprinting [40]. FC can easily achieve 60% fingerprinting accuracy [40], and with post-processing, fingerprinting accuracy can become greater than 95% [41, 42]. Some studies have explicitly aimed to improve prediction of Alzheimer's disease by maximizing identifiability after processing with PCA [43]. At the same time, recent work shows that confounder elimination [44] or intentional but undetectable data manipulation [45] can greatly improve prediction performance.
In this work, we present a procedure by which FC-based prediction results may be unintentionally inflated. By including different scans of the same subject in both train and test sets, the machine learning algorithm learns to memorize subjects from different scans rather than to select task-specific features. Connectivities or regions that are reported in such studies may confuse other researchers when surveying the literature. Abu-Mostafa et al. (2012) gives the example of a machine learning algorithm that was able to predict exchange rate direction 52.1% of the time, resulting in a theoretical profit of 22% over 2 years [46]. In live trading, the program actually lost money. The reported problem was that the training set was normalized using the statistics of the entire cohort, and this was enough to poison the results [46]. If these problems show up in finance, where money is on the line, then we posit it may be prudent to look for them in scientific procedures as well.
## 2 Methods
We first present the procedure for exploiting identifiability by treating independent scans as independent subjects. Second, we describe what we mean by identifiability or fingerprinting. Third, we give a brief review of how we derive FC or dFC from 4D fMRI volumes. Finally, we list relevant characteristics of the four datasets used in this study.
### Procedure for Exploiting Identifiability
The procedure for exploiting identifiability is simple, and example code demonstrating exploitation of identifiability is provided in the link in the footnote.* When multiple longitudinal or contemporaneous scans of the same subject are available, treat these scans as independent subjects when creating training and test sets. If only one subject scan is available, use dynamic functional connectivity to create FC from multiple non-overlapping windows, and treat these FC matrices as independent subjects. In our experiments, to highlight the maximum
possible gap in prediction, each subject has one scan in the training set and one scan in the test set. A random distribution of scans will achieve an accuracy somewhere between the double-dipping and legitimate results.
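The sketch below (in Python, with hypothetical placeholder arrays `fc`, `subject`, and `label`) contrasts the two split strategies; the classifier mirrors the logistic regression described in Sec. 2.4.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def double_dipping_split(subject):
    # first scan of every subject -> train, second scan of the same subject -> test
    train, test = [], []
    for s in np.unique(subject):
        idx = np.where(subject == s)[0]
        train.append(idx[0]); test.append(idx[1])
    return np.array(train), np.array(test)

def subject_level_split(subject, frac=0.8, seed=0):
    # legitimate split: all scans of a subject land on the same side
    rng = np.random.default_rng(seed)
    subjects = rng.permutation(np.unique(subject))
    train_subjects = set(subjects[:int(frac * len(subjects))])
    mask = np.isin(subject, list(train_subjects))
    return np.where(mask)[0], np.where(~mask)[0]

def evaluate(fc, label, train_idx, test_idx):
    clf = LogisticRegression(max_iter=1000).fit(fc[train_idx], label[train_idx])
    return accuracy_score(label[test_idx], clf.predict(fc[test_idx]))
```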
### Identifiability
We define successful identification (identifiability) of a subject as a same-subject, different-scan FC pair having a higher cosine similarity (Equation 1) compared to all other scan pairs in the cohort, where \(\mathbf{a}\) and \(\mathbf{b}\) are vectorized subject FCs. An alternative is to use Euclidean distance as the similarity metric; the numbers we present, however, are based on cosine similarity.
\[\text{sim}(\mathbf{a},\mathbf{b})=\frac{\mathbf{a}^{\text{T}}\mathbf{b}}{\| \mathbf{a}\|_{2}\|\mathbf{b}\|_{2}} \tag{1}\]
We see in Figure 1 that plain FC achieves 62.5% identifiability among the 3,843 scans (from 1,529 subjects) in the PNC dataset. With preprocessing, this number can be increased to 97.3% [40, 41, 42].
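One simple variant of this metric, restricted to a pair of scans per subject and assuming hypothetical arrays `fc1` and `fc2` with one vectorized FC per subject in matching order, can be computed as follows.

```python
import numpy as np

# A subject counts as identified when their second-scan FC is most cosine-similar
# to their own first-scan FC among all first-scan FCs (Eq. 1).
def identification_rate(fc1, fc2):
    a = fc1 / np.linalg.norm(fc1, axis=1, keepdims=True)
    b = fc2 / np.linalg.norm(fc2, axis=1, keepdims=True)
    sim = b @ a.T                 # sim[i, j] = cosine similarity of scan2_i and scan1_j
    return np.mean(np.argmax(sim, axis=1) == np.arange(len(fc2)))
```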
### Functional and Dynamic Functional Connectivity
First, we register 4D fMRI volumes into MNI space using SPM12.2 Second, we identify regions of interest and extract the BOLD signal from those regions. These regions may be defined either using ICA [47] or a template. We use the Power264 template in this work.3 Third, we bandpass filter these timeseries within a 0.01 to 0.15 Hz envelope. This removes both low-frequency scanner drift and high-frequency noise as well as heartbeat and breathing signal. Finally, we calculate the Pearson correlation between the timeseries of each region to find the region-to-region FC. This symmetric matrix is reduced to the upper right triangle and vectorized. The entire procedure is illustrated in Figure 2.
Footnote 2: [http://www.fil.ion.ucl.ac.uk/spm/software/spm12/](http://www.fil.ion.ucl.ac.uk/spm/software/spm12/)
When no longitudinal or contemporaneous scans are available for a subject, we create multiple FC matrices from the same scan using windowing in time. Non-overlapping windows of the bandpass-filtered timeseries are used to create multiple FC matrices. In this study we use a window size of \(N=50\) repetition times (TRs).
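A minimal sketch of the FC and windowed dFC computation, given an ROI time-series array of shape (n_rois, n_timepoints) and a repetition time `tr` in seconds, might look as follows; the Butterworth filter order is an illustrative choice.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(ts, tr, low=0.01, high=0.15, order=4):
    b, a = butter(order, [low, high], btype="band", fs=1.0 / tr)
    return filtfilt(b, a, ts, axis=1)

def fc_vector(ts):
    corr = np.corrcoef(ts)                 # ROI-by-ROI Pearson correlation
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]                        # vectorized upper triangle

def dynamic_fc(ts, window=50):
    n = ts.shape[1] // window              # non-overlapping windows of `window` TRs
    return np.stack([fc_vector(ts[:, i * window:(i + 1) * window]) for i in range(n)])
```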
### Predictive Models
We use simple logistic and ridge regression models for all predictive tasks. The scikit-learn implementation [49] is used in all cases.3 All prediction tasks are performed with an 80/20 training/test split, over 20 bootstrap iterations. The optimal hyperparameter (there is only one for both logistic and ridge regression) is chosen via grid search with grid locations at powers-of-ten intervals, i.e., on a logarithmic grid.
Figure 1: Demonstration of identifiability/fingerprinting with plain FC (62.5% left) vs with FC factor analysis residual (84.9% middle) vs with FC angle basis residual (97.3% right). Among 1,529 subjects having 3,843 scans, same-subject, different-scan FC has the highest cosine similarity among all scan pairs 62.5% of the time. Scans from the PNC dataset. Reproduced from Orlichenko et al. (2023).[42]
### Datasets
We verify the potential of identifiability to skew results in a favorable manner on four different datasets: the UK Biobank, the Philadelphia Neurodevelopmental Cohort, the Bipolar and Schizophrenia Network for Intermediate Phenotypes, and an OpenNeuro Fibromyalgia dataset.
#### 2.5.1 UK Biobank (UKB)
We have processed the scans of more than 40,000 UK Biobank[50] subjects using SPM12. Of these, 2,722 subjects have two longitudinal scans, taken approximately two years apart. An additional 154 subjects have the second scan but not the first, resulting from quality control or a failure in our pipeline during pre-processing. We use the longitudinal subjects to predict age and genetic sex. We also predict age and sex on the non-longitudinal cohort in order to provide a baseline for model performance without double-dipping.
#### 2.5.2 Philadelphia Neurodevelopmental Cohort (PNC)
The Philadelphia Neurodevelopmental Cohort is a dataset of 9,267 children and young adults aged 8-23 years old containing demographics, cognitive battery, questionnaire responses, and SNP data[51]. Among the cohort, 1,529 subjects have fMRI scans with up to 3 scanner tasks: resting state, working memory (nback), and emotion identification (emoid)[52]. The data includes Wide Range Achievement Test (WRAT) scores[53] that have had the effects of age regressed out. It has previously been shown that the ability to predict WRAT score from FC was mostly due to the different distribution of WRAT scores among races and the ability to predict race from FC[38].
#### 2.5.3 OpenNeuro Fibromyalgia Dataset (Fibro)
We include a 66-subject dataset of 33 female fibromyalgia patients and 33 female healthy controls from the OpenNeuro repository[54], study identifier ds004144[55]. Out of the entire cohort, 65 subjects have two different scans: resting state and epr. A variety of medication, demographic, and questionnaire data are available.
#### 2.5.4 Bipolar and Schizophrenia Network for Intermediate Phenotypes (BSNIP)
The Bipolar and Schizophrenia Network for Intermediate Phenotypes is a large study of schizophrenia, bipolar, and schizoaffective disorder patients; relatives of patients; and healthy controls from several sites[56]. Our data contains 199 schizophrenia patients and 243 healthy controls. Patient sex is slightly skewed toward males in the schizophrenia group. Only one scan is available per subject, necessitating use of dFC in order to exploit identifiability.
## 3 Results
We present a summary of our results in Table 1, before highlighting results in each individual dataset.
Figure 2: Illustration of the pipeline for creating FC matrices from fMRI data.
### UKB
The accuracy of UKB predictions with and without unintentional identifiability enhancement (double-dipping) is shown in Figure 3. We find that misusing identifiability in the longitudinal cohort leads to prediction performance with 50 subjects not matched by 10,000 training subjects in the full cohort.
### PNC
Figure 4 (top) shows the possibility of incorrectly attributing achievement score prediction to fMRI because of a race confound[38]. In fact, an even greater prediction accuracy can be achieved by treating independent scans as different subjects, as seen in Figure 4 (bottom).
### Fibromyalgia
We see in Figure 5 that in cohorts with a limited number of subjects, such as the Fibromyalgia dataset, the difference in prediction accuracy between proper and improper placement of scans in training and test sets is maximized.
### BSNIP
As the BSNIP dataset only provides one scan per subject, identifiability enhancement must rely on the use of dynamic functional connectivity, with different windows treated as independent subjects. Identifiability enhancement (Figure 6) leads to predictive accuracy results in keeping with some of the larger values found in the literature.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Dataset & Task & Null Model & Best Prediction & Double-Dipping & Unintentional \\ & & & & Prediction & Improvement \\ \hline UKB & Sex (Accuracy) & 0.528 & \(0.82\pm 0.02\) & \(0.89\pm 0.006\) & **7**\% \\ \hline UKB & Age (RMSE) & 7.68 & \(5.92\pm 0.03\) & \(4.97\pm 0.50\) & **12.4**\% \\ \hline PNC & WRAT (RMSE) & 14.6 & \(14.45\pm 0.92\) & \(11.0\pm 0.28\) & **23.6**\% \\ \hline Fibro & Diagnosis (Accuracy) & 0.51 & \(0.61\pm 0.17\) & \(0.86\pm 0.038\) & **25**\% \\ \hline BSNIP & Diagnosis (Accuracy) & 0.5 & \(0.78\pm 0.05\) & \(0.89\pm 0.04\) & **11**\% \\ \hline \end{tabular}
\end{table}
Table 1: Prediction results with and without erroneous exploitation of identifiability. Best Prediction for UKB is reported for 1350 subjects in the training set, whereas for all other datasets it is reported with the maximum number of training subjects available.
Figure 3: Prediction performance in the UKB with and without misuse of identifiability. Age prediction (left) and sex prediction (right). We find misused identifiability can lead to superb results with very small number of training subjects.
Figure 4: Incorrect attribution of FC ability to predict race as FC ability to predict achievement score (top), and the greater predictive accuracy enhancement possible by treating independent scans as independent subjects (bottom). EA refers to European Ancestry and AA refers to African Ancestry. Top graph from Orlichenko et al. (2023).[38]
Figure 5: Prediction accuracy in the Fibromyalgia dataset using resting state scans as the training set and epr scans as the test set (and vice versa) compared to performing prediction on only one set of scans.
## 4 Discussion
Various studies have examined the reproducibility of regions identified through fMRI studies [235]. To our knowledge, few studies have examined the reproducibility of, e.g., fMRI schizophrenia classification results. One multi-site study did report classification accuracies of 79.8 to 97.1% [58], the lower end of which is consistent with our own findings. Another methodology-based study found various methods to provide between 56.7 and 92.5% classification accuracy [59]. We use schizophrenia as an example, but any type of phenotype prediction may be used instead. While not a predictive study, van den Heuvel et al. (2017) showed that with a small number of subjects, minor manipulation of proportional thresholding can cause group differences to appear or disappear in an fMRI schizophrenia dataset [60]. There is another effect where an artificially inflated 95% classification accuracy existing in the literature may inhibit other researchers from publishing results that only achieve a 70-80% accuracy in their own data.
There have been at least three recent high-profile cases of data manipulation in academia: a Harvard scientist faking data [61], a Stanford scientist implicitly allowing graduate students to fake data [62], and evidence of fake graphs in a room-temperature superconductor publication [63]. We are not sure how prevalent this practice is in the fMRI literature, but we think, based on the evidence presented earlier, that some misuse may occur, especially since we provide a procedure by which one may misuse longitudinal or contemporaneous scans unintentionally.
In regards to the UK Biobank, as more data is released, people may have a greater opportunity to reuse longitudinal data from the same individuals for prediction. Another potential unexplored effect is whether identifiability extends to family members. In this case, FC similarity due to relatedness may be mistaken for true FC-phenotype correlations.
### Potential Solutions
The most obvious solution is not treating different scans as independent subjects and not using different dynamic FC windows as independent subjects. Otherwise, we recommend training a model on one dataset and testing on another, thus eliminating the possibility of subject memorization. Additionally, prediction results should be corroborated through different machine learning models and methods. This means that a result should only be considered valid when it is identified by several different models, not just a single newly proposed model, except with good justification. The use of very reduced feature sets (only up to 10 features per subject) may also hinder the ability of complex models to memorize identifiable subject features, even though predictive performance will almost certainly decrease. Finally, the use of a mixup model, as found in the computer science literature, may be explored as a mitigating strategy [64].
Figure 6: Schizophrenia diagnosis prediction using FC, correctly applied dFC, and dFC using different connectivity matrices as independent subjects. We find that treating different FC matrices of the same subject as independent subjects leads to skewing of prediction results.
## 5 Conclusion
We find that unintentional treatment of independent scans as independent subjects can greatly increase predictive accuracy. Prediction accuracy is increased by 7 to 25% compared to the best legitimate training procedure, using a small fraction of training subjects. This highlights the importance of reproducibility studies, as well as meaningful physiological interpretations of prediction results in contrast to optimization of prediction accuracy. It would be especially helpful if machine learning studies using neuroimaging data made proposals that could be tested in an independent manner.
## 6 Acknowledgements
The authors would like to acknowledge the NIH (grants R01 GM109068, R01 MH104680, R01 MH107354, P20 GM103472, R01 EB020407, R01 EB006841, R56 MH124925) and NSF (grant #1539067) for partial funding support.
fMRI and phenotype data for the PNC dataset came from the Neurodevelopmental Genomics: Trajectories of Complex Phenotypes database of genotypes and phenotypes repository, dbGaP Study Accession ID phs000607.v3.p2. The authors would also like to thank the UK Biobank (UKB application ID 61915), the BSNIP study organizers, and OpenNeuro as well as the Fibromyalgia dataset curators for making data publicly available or available to authorized researchers.
|
2310.14570 | DICE: Diverse Diffusion Model with Scoring for Trajectory Prediction | Road user trajectory prediction in dynamic environments is a challenging but
crucial task for various applications, such as autonomous driving. One of the
main challenges in this domain is the multimodal nature of future trajectories
stemming from the unknown yet diverse intentions of the agents. Diffusion
models have shown to be very effective in capturing such stochasticity in
prediction tasks. However, these models involve many computationally expensive
denoising steps and sampling operations that make them a less desirable option
for real-time safety-critical applications. To this end, we present a novel
framework that leverages diffusion models for predicting future trajectories in
a computationally efficient manner. To minimize the computational bottlenecks
in iterative sampling, we employ an efficient sampling mechanism that allows us
to maximize the number of sampled trajectories for improved accuracy while
maintaining inference time in real time. Moreover, we propose a scoring
mechanism to select the most plausible trajectories by assigning relative
ranks. We show the effectiveness of our approach by conducting empirical
evaluations on common pedestrian (UCY/ETH) and autonomous driving (nuScenes)
benchmark datasets on which our model achieves state-of-the-art performance on
several subsets and metrics. | Younwoo Choi, Ray Coden Mercurius, Soheil Mohamad Alizadeh Shabestary, Amir Rasouli | 2023-10-23T05:04:23Z | http://arxiv.org/abs/2310.14570v1 | # DICE: Diverse Diffusion Model with Scoring for Trajectory Prediction
###### Abstract
Road user trajectory prediction in dynamic environments is a challenging but crucial task for various applications, such as autonomous driving. One of the main challenges in this domain is the multimodal nature of future trajectories stemming from the unknown yet diverse intentions of the agents. Diffusion models have shown to be very effective in capturing such stochasticity in prediction tasks. However, these models involve many computationally expensive denoising steps and sampling operations that make them a less desirable option for real-time safety-critical applications. To this end, we present a novel framework that leverages diffusion models for predicting future trajectories in a computationally efficient manner. To minimize the computational bottlenecks in iterative sampling, we employ an efficient sampling mechanism that allows us to maximize the number of sampled trajectories for improved accuracy while maintaining inference time in real time. Moreover, we propose a scoring mechanism to select the most plausible trajectories by assigning relative ranks. We show the effectiveness of our approach by conducting empirical evaluations on common pedestrian (UCY/ETH) and autonomous driving (nuScenes) benchmark datasets on which our model achieves state-of-the-art performance on several subsets and metrics.
## I Introduction
Accurate prediction of road user behavior is a prerequisite for safe motion planning in autonomous driving systems. One of the key challenges in trajectory prediction is the probabilistic and multimodal nature of road users' behaviors. To model such uncertainty, many approaches have been proposed, such as Generative Adversarial Networks (GANs) [7, 11, 17, 47], Conditional Variational Autoencoders (CVAEs) [5, 21, 25, 28, 44], anchor-based proposal networks [51], or target (intention) prediction networks [6, 15, 62]. However, these methods are not without challenges, including unstable training, artificial dynamics within predicted trajectories, and reliance on hand-crafted heuristics that lack generalizability.
Diffusion models have recently gained popularity as powerful tools for various generative tasks in machine learning [41, 43, 18, 56]. These models learn the process of transforming informative data into Gaussian noise and how to reverse this process to generate meaningful output from noisy data. Although effective, diffusion models impose high computational costs due to successive denoising and sampling operations, which limits their suitability for real-time applications such as trajectory prediction.
To this end, we propose a novel computationally efficient diffusion-based model for road user trajectory prediction. Our approach benefits from the efficient sampling method, Denoising Diffusion Implicit Models (DDIM) [46] resulting in \(20\times\) computational speed-up, allowing us to oversample from trajectory distributions in order to maximize the diversity and coverage of predicted trajectories, while maintaining the inference speed well below existing approaches. To select the most likely trajectory candidates, we propose a novel scoring network that assigns relative rankings in conjunction with a non-maximum suppression operation in order to downsample the trajectories into the final prediction set. To highlight the effectiveness of our approach, we conduct extensive empirical studies on common trajectory prediction benchmark datasets, UCY/ETH [26, 10] and nuScenes [3], and show our model achieves state-of-the-art performance on some subsets and metrics while maintaining real-time inference time.
## II Related Work
### _Trajectory Prediction_
Trajectory prediction is modeled as a sequence prediction problem where the future of the agents is predicted based on their observed history and potentially available contextual information. In the pedestrian prediction domain, one of the key challenges is to model the interactions among pedestrians for better estimates of their future behavior. These methods include, spatial pooling of representations [1], graph architectures [20, 21, 34, 44, 48, 58], attention mechanisms [12, 24, 53, 42, 60], and in egocentric setting, semantic scene reasoning [39, 38]. In the context of autonomous driving, it is also important to model agent-to-map interactions. For this purpose, models rely on environment representations in the form of drivable areas [36, 2], rasterized maps [44, 14], point-clouds [57], and computationally efficient vectorized representations in conjunction with graph neural networks [13, 15], or transformers [64, 35] for generating holistic representations of the scenes and interactions.
Another main challenge in trajectory prediction is to capture the uncertainty and multi-modality in the agent's behaviour. To address this problem, a category of models resort to explicitly predicting the goal (intentions or target) of agents and predict future trajectories conditional on those goals, which are typically defined using heuristic methods, which limit the generalizability of these approaches [62, 15]. Other methods, such as Generative Adversarial Networks (GANs) [7, 11, 47] and Conditional Variational Autoencoders (CVAEs) [5, 21, 25, 28, 44] implicitly capture agents' intentions. These methods introduce latent variables that are randomly sampled from
a simple distribution to produce complex and multi-modal distributions for the predicted trajectories. However, existing generative models suffer from limitations, such as mode collapse, unstable training, or the generation of unrealistic trajectories [50, 63], highlighting the need for more robust and accurate models.
### _Denoising Diffusion Models_
Denoising Diffusion Probabilistic Models (DDPM) [19], commonly referred to as diffusion models, have gained popularity in various generative tasks such as image [41, 43], audio [23], video [56, 18], and 3D point cloud generation [31]. These models simulate a diffusion process motivated by non-equilibrium thermodynamics, where a parameterized Markov chain is learned to gradually transition from a noisy initial state to a specific data distribution. More recently, diffusion methods have been adopted in trajectory generation and prediction tasks [16, 22, 40]. The approach in [16] models the indeterminacy of human behaviour using a transformer-based trajectory denoising module to capture complex temporal dependencies across trajectories. The authors of [40] introduce a model for generating realistic pedestrian trajectories that can be controlled to meet user-defined goals by implementing a guided diffusion model. MotionDiffuser introduced in [22] creates a diffusion framework for multi-agent joint prediction with optional attractor and repeller guidance functions to enforce compliance with prior knowledge, such as agent intention and accident avoidance.
Although diffusion-based models have proven effective, there exist critical shortcomings to their adoption in practice. For instance, the inference time of these models is highly computationally expensive due to an iterative denoising algorithm that requires a large number of forward passes. For example, MotionDiffuser's inference latency is 408.5ms (32 diffusion steps) compared to the conventional prediction models, such as HiVT's [64] with 69ms latency. This characteristic makes diffusion models a less desirable option for real-time safety-critical applications, such as autonomous driving. To speed-up prediction inference time, Leapfrog Diffusion Model (LED) [32] is proposed that learns to skip a large number of denoising steps in order to accelerate inference speed. However, LED is only effective when dealing with small dimensional trajectory data without any complex contextual encoding. To address this shortcoming, in the proposed approach we focus on improving efficiency at the sampling stage providing a more general framework for different data representation.
**Contributions** of this paper are threefold: 1) We propose a novel diffusion-based model for trajectory prediction that relies on an efficient sampling operation for over-sampling trajectories and a novel scoring mechanism for relatively ranking them to produce the final prediction set; 2) we conduct an empirical evaluation by comparing the proposed approach to prior art and highlight the effectiveness of our approach in both pedestrian and autonomous driving prediction domains; 3) we conduct ablation studies on the effect of the proposed scoring scheme and oversampling on prediction accuracy and inference time.
## III Methodology
### _Problem Formulation_
We represent the future trajectory of an agent \(i\), \(\mathbf{\tau}_{future}^{i}=[\mathbf{p}_{t+1}^{i},\dots,\mathbf{p}_{t+T_{f}}^{i}]\) over \(T_{f}\) time steps where \(\mathbf{p}^{i}\in\mathbb{R}^{2}\) is the 2D coordinates of the agent. Similarly, the agent's trajectory over the last \(T_{p}\) time steps is \(\mathbf{\tau}_{past}^{i}=[\mathbf{p}_{t-T_{p}+1}^{i},\dots,\mathbf{p}_{t}^{i}]\). Here, the objective is to learn distribution \(p(\tau_{future}^{i}|\tau_{past}^{i})\).
### _Architecture_
An overview of the proposed framework is illustrated in Figure 1. Our model consists of an input layer containing trajectory history and lane information (if map information is available); an encoder inspired by [64] that encodes interactions among agents and between agents and road lanes (if a map is available); a decoder based on [16] comprised of multiple transformer layers, which is trained to generate meaningful trajectories from noisy data conditioned on the scene context and is used iteratively in each step of the denoising; and lastly, an attention-based scorer that ranks the generated trajectories. In the following subsections, we describe the key components of the proposed model.
### _Data Processing_
Inspired by [64], we use translation- and rotation-invariant scene representation vectors by converting absolute positions to relative positions and rotating them according to the heading angle of each agent denoted by \(\theta^{i}\) at the current timestamp \(t\). We convert the trajectory coordinates into displacements, and rotate them so that all agents have the same heading at time step zero. Each trajectory is represented as \(\mathbf{x}^{i}=[0,\mathbf{R}_{t}^{T}(\mathbf{p}_{t-T_{p}+2}^{i}-\mathbf{p}_{t-T_{p}+1} ^{i}),\dots,\mathbf{R}_{t}^{T}(\mathbf{p}_{t}^{i}-\mathbf{p}_{t-1}^{i})]\), where \(\mathbf{R}_{t}\in\mathbb{R}^{2\times 2}\) is the rotation matrix parameterized by \(\theta^{i}\). \(\mathbf{X}=[\mathbf{x}^{1},\dots,\mathbf{x}^{N}]\in\mathbb{R}^{N\times T_{p} \times 2}\) is the set of agents' observations, where \(N\) denotes the number of agents in a scene. Using this representation makes the model robust to both translation and rotation and consequently requires less training data as no rotation-augmented data is needed as in the case of other methods. Contrary to the standard approach in the literature, we also convert future trajectories \(\mathbf{\tau}_{future}^{i}\) to the displacement of relative positions rotated and centered at the last point of the history, that is \(\mathbf{y}^{i}=[\mathbf{R}_{i}^{T}(\mathbf{p}_{t+1}^{i}-\mathbf{p}_{t}^{i}),\dots, \mathbf{R}_{i}^{T}(\mathbf{p}_{t+T_{f}}^{i}-\mathbf{p}_{t+T_{f}-1}^{i})]\).
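A brief sketch of this preprocessing for a single agent (assuming NumPy; array names are hypothetical) is given below.

```python
import numpy as np

# Convert absolute positions (T_p, 2) to rotation-invariant displacements:
# per-step differences rotated by the agent's heading theta at time t, with the
# first entry set to zero as in the text.
def preprocess_history(positions, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])              # rotation matrix parameterized by theta
    disp = np.diff(positions, axis=0)            # (T_p - 1, 2) displacements
    return np.vstack([np.zeros((1, 2)), disp @ R])   # row @ R applies R^T to each displacement
```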
### _Conditional Diffusion Model for Trajectory Prediction_
The forward diffusion process is a Markov chain that gradually adds Gaussian noise to a sample drawn from the data distribution, \(\mathbf{y}_{(0)}\sim q(\mathbf{y})\), iteratively for \(H\) times, in order to produce progressively noisier samples. The approximate posterior distribution is given by,
\[q(\mathbf{y}_{(1:H)}|\mathbf{y}_{(0)})=\prod_{\eta=1}^{H}q(\mathbf{y}_{(\eta )}|\mathbf{y}_{(\eta-1)}) \tag{1}\]
\[q(\mathbf{y}_{(\eta)}|\mathbf{y}_{(\eta-1)})=\mathcal{N}(\mathbf{y}_{(\eta)} ;\sqrt{1-\beta_{\eta}}\mathbf{y}_{(\eta-1)},\beta_{\eta}\mathbf{I}) \tag{2}\]
where \(H\) denotes the total number of diffusion steps and \(\beta_{\eta}\) is a uniformly increasing variance to control the level of noise. By setting \(\alpha_{\eta}=1-\beta_{\eta}\) and \(\bar{\alpha}_{\eta}=\prod_{s=1}^{\eta}\alpha_{s}\), we get,
\[q(\mathbf{y}_{(\eta)}|\mathbf{y}_{(0)})=\mathcal{N}(\mathbf{y}_{(\eta)};\sqrt{\bar{\alpha}_{\eta}}\,\mathbf{y}_{(0)},(1-\bar{\alpha}_{\eta})\mathbf{I})\]
Here, if \(H\) is large enough, \(q(\mathbf{y}_{(H)})\sim\mathcal{N}(\mathbf{y}_{(H)};\mathbf{0},\mathbf{I})\), where \(\mathcal{N}\) is a normal distribution.
For future trajectory prediction, we apply the reverse diffusion process by reducing noise from trajectories sampled under the noise distribution. The initial noisy trajectory is sampled from a normal distribution \(\mathcal{N}(\mathbf{y}_{(H)};\mathbf{0},\mathbf{I})\), and then it is iteratively run through the conditioned denoising transition \(p_{\theta}(\mathbf{y}_{(\eta-1)}|\mathbf{y}_{(\eta)},\mathbf{C})\) parameterized by \(\theta\) for \(H\) steps. Here, \(\mathbf{C}\in\mathbb{R}^{d_{c}}\) denotes a feature embedding with dimension \(d_{c}\), representing the scene context, learned by encoder \(f_{\xi}\) parameterized by \(\xi\). Formally, the process is as follows,
\[p_{\theta}(\mathbf{y}_{(0:H)})=p(\mathbf{y}_{(H)})\prod_{\eta=1}^{H}p_{\theta }(\mathbf{y}_{(\eta-1)}|\mathbf{y}_{(\eta)})\]
At each step \(\eta\), we have
\[p_{\theta}(\mathbf{y}_{(\eta-1)}|\mathbf{y}_{(\eta)},\mathbf{C})=\mathcal{N}( \mathbf{y}_{(\eta-1)};\mathbf{\mu}_{\theta}(\mathbf{y}_{(\eta)},\eta,\mathbf{C}),\mathbf{ \Sigma}_{(\eta)})\]
where \(p(\mathbf{y}_{(H)})=\mathcal{N}(\mathbf{y}_{(H)};\mathbf{0},\mathbf{I})\) and \(\mathbf{\Sigma}_{(\eta)}\) is a fixed variance scheduler, \(\beta_{(\eta)}\mathbf{I}\).
### _Scoring Network_
Generative models, such as CVAEs [45] and diffusion models [19], are capable of producing multiple outputs by sampling from an underlying distribution. Specifically, CVAEs repeatedly sample a latent variable from a prior distribution for decoding. Diffusion models generate multiple outputs by repeatedly sampling independent noise \(\mathbf{y}_{(H)}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\)\(M\) times and denoising them, resulting in a wide range of varied outputs with complex distributions. With a large number of samples, one can accurately approximate the distribution parameterized by the models. However, in practice, for efficiency, a smaller set of \(K\) predictions is used to characterize the performance of the models. Therefore, we require a downsampling (or selection) mechanism to select the \(K\) most plausible predictions.
Our goal is to select the \(K\) most plausible trajectories among \(M\) denoised samples given the feature embedding \(\mathbf{C}\), where \(K<<M\). We achieve this by training a scoring network \(g_{\phi}(\cdot)\), parameterized by \(\phi\). The scoring network takes the \(M\) denoised samples and the encoder's embedding \(\mathbf{C}\) as input and outputs a score for each of the \(M\) trajectories conditioned on \(\mathbf{C}\) and the rest of the \(M-1\) trajectories. For this purpose, we first concatenate each trajectory with the feature embedding \(\mathbf{C}\), \(\mathbf{s}_{j}=\hat{\mathbf{\tau}}_{j}\oplus\mathbf{C}\) for \(j=1,\ldots,M\), where \(\hat{\mathbf{\tau}}_{j}\) is the predicted trajectory converted from the \(j\)th denoised sample \(\hat{\mathbf{y}}_{(0)}^{j}\). We define the matrix \(\mathbf{S}=[\mathbf{s}_{1},...,\mathbf{s}_{M}]\in\mathbb{R}^{M\times(T_{f}+d_{c})}\), which is then fed into a multi-head self-attention block [52],
\[\mathbf{Q}_{i}=\mathbf{S}\mathbf{W}_{i}^{Q},\mathbf{K}_{i}=\mathbf{S}\mathbf{W}_{i}^{K},\mathbf{V}_{i}=\bm {S}\mathbf{W}_{i}^{V}\]
\[\text{Attn}_{i}(\mathbf{S})=\text{softmax}(\frac{1}{\sqrt{d_{k}}}\mathbf{Q}_{i}\mathbf{K} _{i}^{T})\mathbf{V}_{i}\]
\[\text{MHA}(\mathbf{S})=\text{Concat}(\text{Attn}_{1}(\mathbf{S}),\ldots,\text{Attn}_{ h}(\mathbf{S}))\mathbf{W}^{O}\]
where \(h\) is the number of attention heads, \(\mathbf{W}_{i}^{Q},\mathbf{W}_{i}^{K},\mathbf{W}_{i}^{V}\in\mathbb{R}^{(T_{f}+d_{c})\times d _{k}}\) for \(i=1,\ldots,h\), \(d_{c}\) and \(d_{k}\) are the dimensions of the feature embedding and attention, respectively, and \(\mathbf{W}^{O}\in\mathbb{R}^{h\cdot d_{k}\times d}\) are the learnable parameters of the multi-head attention module. Next, we apply an MLP layer with residual connection followed by downsampling \(d\) to a 1-dimensional score:
\[\hat{\mathbf{S}}=g_{\phi}(\mathbf{S})=(\text{MHA}(\mathbf{S})+\text{MLP}(\text{MHA}(\mathbf{S} )))\mathbf{W}^{down}\]
Fig. 1: Overview of the proposed framework. During training of decoder \(\mathbf{e}_{\theta}\), encoder \(f_{\xi}\) encodes the history and the map into a feature embedding. Using diffusion step \(\eta\), the feature embedding, and noisy \(\mathbf{y}_{(\eta)}\), decoder \(\mathbf{e}_{\theta}\) predicts the noise that corrupts the clean \(\mathbf{y}_{(0)}\) into \(\mathbf{y}_{(\eta)}\). In the second stage, scoring network \(g_{\phi}\) takes the \(M\) trajectories \(\hat{\mathbf{\tau}}_{j}\) generated with the efficient DDIM sampling method along with the feature embedding from the encoder. A non-maximum suppression algorithm is applied to trajectories sorted in descending order of their predicted scores, from which the final \(K\) trajectories are selected.
where \(\mathbf{W}^{down}\in\mathbb{R}^{d\times 1}\) is a trainable parameter matrix. Finally, we have predicted relative raw scores \(\hat{\mathbf{S}}\in\mathbb{R}^{M}\) conditioned on a scene. The scores are normalized using a softmax operation.
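A compact sketch of this scoring architecture is shown below; the initial linear projection to width \(d\) and the module signatures are conveniences introduced for the sketch rather than the exact parameterization above.

```python
import torch
import torch.nn as nn

# Self-attention over the M candidates (each concatenated with the context
# embedding), an MLP with a residual connection, and a linear map to one score.
class Scorer(nn.Module):
    def __init__(self, traj_dim, ctx_dim, d=128, heads=4):
        super().__init__()
        self.proj = nn.Linear(traj_dim + ctx_dim, d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.down = nn.Linear(d, 1)

    def forward(self, trajs, context):
        # trajs: (M, traj_dim) flattened candidates, context: (ctx_dim,)
        s = torch.cat([trajs, context.expand(trajs.shape[0], -1)], dim=-1)
        h = self.proj(s).unsqueeze(0)                   # (1, M, d)
        a, _ = self.attn(h, h, h)
        a = a + self.mlp(a)                             # residual MLP
        return self.down(a).squeeze(-1).squeeze(0)      # (M,) raw scores
```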
### _Training_
As shown in Figure 1, the training process consists of two stages. First, we train denoising module \(p_{\theta}\) and encoder \(f_{\xi}\), and in the second stage, we train scoring network \(g_{\phi}\) with frozen \(p_{\theta}\) and \(f_{\xi}\).
**Diffusion** The model is optimized by maximizing the log-likelihood of the predicted trajectories given the ground truth \(\mathbb{E}[\log p_{\theta}(\mathbf{y}_{(0)})]\). Since the exact log-likelihood is intractable, we follow the standard Evidence Lower Bound (ELBO) maximization method and minimize the KL Divergence,
\[\mathcal{L}=\mathbb{E}_{q,\eta}[D_{KL}(q(\mathbf{y}_{(\eta-1)}| \mathbf{y}_{(\eta)},\mathbf{y}_{(0)})||p_{\theta}(\mathbf{y}_{(\eta-1)}|\mathbf{ y}_{(\eta)},\mathbf{C}))]\] \[=\mathbb{E}_{q,\eta}[D_{KL}(\mathcal{N}(\mathbf{y}_{(\eta-1)}; \mathbf{\mu}_{\eta},\mathbf{\Sigma}_{q}(\eta)||\mathcal{N}(\mathbf{y}_{(\eta-1)}; \mathbf{\mu}_{\theta},\mathbf{\Sigma}_{(\eta)}))]\] \[=\mathbb{E}_{q,\eta}[||\mathbf{\mu}_{\theta}-\mathbf{\mu}_{q}||^{2}_{2}] \tag{10}\]
By utilizing the reparameterization trick, the corresponding optimization problem becomes [19]:
\[\mathcal{L}_{MSE}(\theta,\xi)=\mathbb{E}_{\mathbf{\epsilon}_{(0)},\mathbf{y}_{(0) },\eta}||\mathbf{\epsilon}_{(0)}-\hat{\mathbf{\epsilon}}_{(\theta,\xi)}(\mathbf{y}_{( \eta)},\eta,\mathbf{C})|| \tag{11}\]
where \(\mathbf{\epsilon}_{(0)}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), \(\mathbf{y}_{(\eta)}=\sqrt{\alpha_{\eta}}\mathbf{y}_{(0)}+\sqrt{1-\bar{\alpha} _{\eta}}\mathbf{\epsilon}_{(0)}\). In other words, denoising module \(\hat{\mathbf{\epsilon}}_{(\theta,\xi)}\) learns to predict the source noise \(\mathbf{\epsilon}_{(0)}\sim\mathcal{N}(\mathbf{\epsilon};\mathbf{0},\mathbf{I})\) that noises \(\mathbf{y}_{(0)}\) to \(\mathbf{y}_{(\eta)}\).
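A minimal sketch of this training objective is given below; `denoiser` and `encoder` stand in for the transformer decoder and scene encoder described above, and their signatures as well as the beta schedule are assumptions made for illustration.

```python
import torch

H = 200
betas = torch.linspace(1e-4, 0.05, H)                  # illustrative variance schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(denoiser, encoder, history, y_future):
    B = y_future.shape[0]
    eta = torch.randint(0, H, (B,))                    # random diffusion step per sample
    eps = torch.randn_like(y_future)                   # source noise epsilon_(0)
    ab = alphas_bar[eta].view(B, 1, 1)
    y_eta = torch.sqrt(ab) * y_future + torch.sqrt(1.0 - ab) * eps
    context = encoder(history)
    return torch.nn.functional.mse_loss(denoiser(y_eta, eta, context), eps)
```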
**Scorer** We use cross-entropy loss between the predicted scores \(\text{softmax}(\hat{\mathbf{S}})\) and scores calculated using the ground truth future trajectories \(\mathbf{\tau}_{future}\) and the predicted future trajectories \(\{\hat{\mathbf{\tau}}_{j}\}_{j=1}^{M}\). A combination of Average Displacement Error (ADE) and final Displacement Error (FDE) is used to calculate the scores,
\[\psi_{j}=ADE(\mathbf{\tau}_{future},\hat{\mathbf{\tau}}_{j})+\lambda FDE(\mathbf{\tau}_{future },\hat{\mathbf{\tau}}_{j}) \tag{12}\]
where \(\psi_{j}\) represents the closeness of \(j\)th predicted trajectory to the ground truth trajectory, and \(\lambda\) balances how metrics ADE and FDE affect the score. \(\Psi\in\mathbb{R}^{M}\) denotes the matrix of \(M\) ground truth scores (i.e., \(j\)th element of \(\Psi\) is \(\psi_{j}\)). We use a cross-entropy loss to train the scoring network,
\[\mathcal{L}_{score}=\mathcal{L}_{CE}(\text{softmax}(\hat{\mathbf{S}}),\Psi) \tag{13}\]
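A sketch of this scorer loss is given below. Since \(\psi_{j}\) is a displacement error (smaller is better), the target distribution here is taken as \(\text{softmax}(-\psi)\), which is one plausible reading of Eq. (13) and is labeled as an assumption.

```python
import torch

# Displacement-based targets (Eq. 12) and cross-entropy against predicted scores
# (Eq. 13); the softmax(-psi) sign convention is an assumption for this sketch.
def scorer_loss(pred_scores, candidates, gt, lam=1.5):
    # candidates: (M, T_f, 2), gt: (T_f, 2), pred_scores: (M,) raw scores
    ade = torch.norm(candidates - gt, dim=-1).mean(dim=-1)     # average displacement error
    fde = torch.norm(candidates[:, -1] - gt[-1], dim=-1)       # final displacement error
    psi = ade + lam * fde
    target = torch.softmax(-psi, dim=0)
    return -(target * torch.log_softmax(pred_scores, dim=0)).sum()
```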
### _Inference_
As shown in Figure 1, inference is a two-stage process: we first oversample \(M\) trajectories and denoise them using the denoising module, and then we select the top plausible trajectories.
**Diffusion** We begin by sampling \(M\) independent Gaussian noises \(\{\mathbf{y}_{(H)}^{j}:\mathbf{y}_{(H)}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\}_{j=1}^{M}\). Next, we denoise each \(\mathbf{y}_{(H)}^{j}\) through the reverse process \(p_{\theta}\). During the reverse process, the DDPM [19] sampling technique repeatedly denoises \(\mathbf{y}_{(H)}\) to \(\mathbf{y}_{(0)}\) by applying the equation below for \(H\) steps,
\[\mathbf{y}_{(\eta-1)}=\frac{1}{\sqrt{\alpha_{\eta}}}(\mathbf{y}_{\eta}-\frac{ \beta_{\eta}}{\sqrt{1-\bar{\alpha}_{\eta}}}\hat{\mathbf{\epsilon}}_{(\theta,\xi)} (\mathbf{y}_{(\eta)},\eta,\mathbf{C}))+\sqrt{\beta_{\eta}}\mathbf{z} \tag{14}\]
where \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The downside of DDPM sampling is that it requires \(H\) denoising steps to generate a sample, which is time-consuming and computationally expensive, especially when \(H\) is large (usually \(H>100\)). To mitigate this issue, we use the DDIM [46] sampling technique and skip every \(\gamma\) steps in the reverse process, iterating only \(\frac{H}{\gamma}\) steps and thus making sampling more efficient and faster than DDPM by a factor of \(\gamma\),
\[\mathbf{y}_{(\eta-1)}=\sqrt{\bar{\alpha}_{(\eta-1)}}(\frac{ \mathbf{y}_{(\eta)}-\sqrt{1-\bar{\alpha}_{(\eta)}}\hat{\mathbf{\epsilon}}_{(\theta, \xi)}(\mathbf{y}_{(\eta)},\eta,\mathbf{C})}{\sqrt{\bar{\alpha}_{(\eta)}}})\] \[+\sqrt{1-\bar{\alpha}_{(\eta-1)}}\hat{\mathbf{\epsilon}}_{(\theta,\xi) }(\mathbf{y}_{(\eta)},\eta,\mathbf{C}) \tag{15}\]
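A self-contained sketch of this strided reverse process is given below; `denoiser(y, eta, context)` stands in for the trained denoising transformer, and the beta schedule is an illustrative choice.

```python
import torch

H, gamma = 200, 20
betas = torch.linspace(1e-4, 0.05, H)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddim_sample(denoiser, context, shape):
    steps = list(range(H - 1, -1, -gamma))             # only H / gamma denoising steps
    y = torch.randn(shape)                             # y_(H) ~ N(0, I)
    for i, eta in enumerate(steps):
        eps = denoiser(y, torch.full((shape[0],), eta), context)
        ab = alphas_bar[eta]
        ab_prev = alphas_bar[steps[i + 1]] if i + 1 < len(steps) else torch.tensor(1.0)
        y0_hat = (y - torch.sqrt(1.0 - ab) * eps) / torch.sqrt(ab)   # predicted clean y_(0)
        y = torch.sqrt(ab_prev) * y0_hat + torch.sqrt(1.0 - ab_prev) * eps
    return y
```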
**Scoring and selection** Using the scoring network, we select \(K\) out of \(M\) oversampled trajectories that are generated by the diffusion model. We first convert predicted displacements, \(\hat{\mathbf{y}}_{(0)}=[\hat{\mathbf{y}}_{t+1},\ldots,\hat{\mathbf{y}}_{t+T_{f}}]\) back to relative positions \(\hat{\mathbf{\tau}}\):
\[\hat{\mathbf{\tau}}_{future}=[\hat{\mathbf{p}}_{t+1},\ldots,\hat{\mathbf{p}}_{t+T_{f}}] \tag{16}\]
\[\hat{\mathbf{p}}_{t+j}=\sum_{i^{\prime}=0}^{j}(\mathbf{R}^{-1})^{T}\hat{\mathbf{y}}_{t+i ^{\prime}} \tag{17}\]
where \(\mathbf{R}^{-1}\) is the inverse of the rotation matrix used in the preprocessing. Following [62], given the scores, we ensure diversity and sufficient multi-modal coverage by applying a non-maximum suppression operation. For this, we sort the trajectories by score, starting with the highest one, and select a trajectory only if its distance from all previously selected trajectories is greater than the tuned distance threshold \(\omega\). We continue until we have \(K\) trajectories.
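The selection step can be sketched as follows; the pairwise distance is measured between trajectory endpoints here, which is one plausible choice since the exact distance is not specified in this paragraph.

```python
import numpy as np

# Score-guided non-maximum suppression: keep a candidate only if it is at least
# omega (meters) away from every trajectory already selected.
def select_trajectories(trajs, scores, K=20, omega=8.0):
    # trajs: (M, T_f, 2) candidate trajectories, scores: (M,) predicted scores
    order = np.argsort(-scores)
    keep = []
    for j in order:
        if all(np.linalg.norm(trajs[j, -1] - trajs[k, -1]) > omega for k in keep):
            keep.append(j)
        if len(keep) == K:
            break
    for j in order:                        # pad with next-best candidates if needed
        if len(keep) == K:
            break
        if j not in keep:
            keep.append(j)
    return trajs[np.array(keep)]
```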
## IV Experiments
### _Experimental Setup_
**Datasets.** We evaluate the proposed model on both pedestrian and autonomous driving prediction benchmarks. For the former, we report on **UCY/ETH**[26, 10], which consists of real pedestrian trajectories at five different locations captured at 2.5 Hz: ETH and HOTEL from the ETH dataset, and UNIV, ZARA1, and ZARA2 from the UCY dataset. Following previous works, we use the leave-one-out approach with four sets for training and the remaining set for testing [32, 55]. For autonomous driving, we use **nuScenes**[3], a large-scale real-world autonomous driving dataset, which contains 1000 scenes from Boston and Singapore annotated at 2 Hz. nuScenes provides 2 seconds of history with HD maps and requires 6 seconds of future trajectory to be predicted. **Models**. We compare our method against state-of-the-art algorithms on each benchmark. Note that we omit the recent LED [32] from the pedestrian benchmark since we were unable to validate the results using the official published code and procedures, and no explanation was provided by the authors regarding the discrepancy. We refer to our model as **DICE** (**D**iverse dIffusion with sCoring for prEdiction).
**Metrics** We adopt the standard evaluation metrics including the minimum average/final displacement error over the top \(K\) predictions (\(\text{minADE}_{K}\)/\(\text{minFDE}_{K}\)) on UCY/ETH
benchmark. For nuScenes, we also report the miss rate (MR\({}_{K}\)), which is the percentage of scenarios where the final points of all \(K\) predicted trajectories are more than 2 meters away from the final point of the ground truth trajectory.
**Implementation Details** We train our denoising module for 80 epochs and the scoring network for 20 epochs, using AdamW [29] optimizer with a learning rate of \(5\times 10^{-4}\), batch size of 32, and a dropout rate of 0.1. For training the denoising module, we use \(H=200\) diffusion steps. When sampling using DDIM [46], we skip over every \(\gamma=20\) steps, resulting in only 10 denoising steps to generate each independent trajectory. For training the scoring network, we set the distance metric control weight in \(\mathcal{L}_{scorer}\) to be \(\lambda=1.5\). Our denoiser consists of 5 transformer layers. All attention modules have \(h=4\) heads with 128 hidden dimensions. We set the distance threshold to \(\omega=8\) (meters) for the non-maximum suppression operation in inference. All experiments are done on a Tesla V100 GPU.
### _Comparison to SOTA on Pedestrian Benchmark_
Table I shows the results for minADE\({}_{20}\)/minFDE\({}_{20}\) of models evaluated on UCY/ETH. We can see that the proposed model achieves SOTA performance on 4 out of 5 subsets on the minFDE metric, and the overall best performance on _ETH_ by a significant margin, improving both metrics by approx. \(50\%\). On average across all subsets, our model is best on minFDE, improving up to \(20\%\) compared to GroupNet, while achieving second best on minADE by a small margin. Such a performance gain is due to our model successfully producing a diverse prediction set that captures a wide range of multi-modal intentions.
### _Qualitative Results_
Figure 2 illustrates the reverse process of our model on a subset of scenes from the UCY/ETH dataset. \(\eta^{\prime}=\frac{\eta}{\gamma}\) denotes the diffusion step index under DDIM sampling, where the number of DDPM sampling steps is \(H=200\) and the stride is \(\gamma=20\), so that DDIM takes \(10\) diffusion steps in total. We plot the \(M=100\) generated trajectories after scoring, and the \(K=20\) finally selected ones. The last column in Figure 2 shows that the trajectories selected using the scoring network are very diverse, but also denser around the ground truth trajectory.
### _Comparison to SOTA on Autonomous Driving Benchmark_
To further highlight the effectiveness of the proposed model, we evaluate our approach on the nuScenes autonomous driving benchmark. We report the results for minADE\({}_{K}\), minFDE\({}_{K}\), MR\({}_{K}\), for \(K=1,5,10\). The results are summarized in Table II. Here, we can see that our method is particularly effective in predicting endpoints (targets) as it achieves an improvement of up to \(9\%\) on miss rate while ranking first and second on minFDE\({}_{5}\) and minFDE\({}_{10}\), respectively. Again, this confirms the ability of our prediction set to capture intention points. It should be noted that in the autonomous driving domain, more emphasis is often given to the final error, variations of which are used for ranking prediction models in more recent benchmarks, such as [4, 49]. In terms of minADE our model lags behind, which may be due to its tendency to generate low-curvature trajectories, causing a higher average error in driving scenes since they may contain turns.
### _Ablation Studies_
**Effect of the scoring network.** We evaluate the effect of oversampling and then undersampling trajectories with our scoring model and non-maximum suppression by comparing
Fig. 2: Visualization of generated trajectories from DDIM sampling at each diffusion step \(\eta^{\prime}\). The plots in the last two columns show the generated trajectories after scoring and after applying the selection algorithm respectively. Lighter colours reflect higher scores.
against 2 baselines. These encompass randomly generating \(K\) trajectories directly from our denoiser, and an intelligent post-processing selection algorithm implemented in [22] and partially inspired by [51]. Let \(\hat{T}_{1:K}=\{\hat{\tau}_{future}^{j}\}_{j=1}^{K}\) be the \(K\) selected predicted trajectories, and \(\hat{T}_{1:M}=\{\hat{\tau}_{future}^{j}\}_{j=1}^{M}\) be the oversampled set of \(M\) trajectories. We attempt to select \(\hat{T}_{1:K}\) so that the maximum number of elements in \(\hat{T}_{1:M}\) lie within distance \(r\) of at least one of the \(\hat{T}_{1:K}\) elements.
\[\hat{T}_{1:K}=\operatorname*{argmax}_{\hat{T}_{1:K}}\sum_{j=1}^{M}\ \max_{\hat{T}_{i}\in\hat{T}_{1:K}}\mathbb{I}\left(\ dist(\hat{T}_{i},\hat{T}_{j })<r\right)\]
where \(dist(.)\) is the distance function for which we use ADE, and \(r\) is an adjustable threshold. We approximate our solution in a greedy fashion, iteratively adding one trajectory to \(\hat{T}_{1:K}\) until we have \(K\) trajectories. The trajectory we select at each step is the one that maximizes the above argmax objective when added to the current set.
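A possible rendering of this greedy approximation is sketched below, assuming an ADE distance and a fixed radius \(r\); the bookkeeping details are our own choices rather than the baseline's exact implementation.

```python
import numpy as np

def greedy_coverage_select(candidates, K, r):
    """Greedily pick K of M trajectories maximizing how many candidates
    lie within ADE distance r of at least one selected trajectory."""
    M = len(candidates)
    # pairwise ADE between all oversampled trajectories, shape (M, M)
    dists = np.array([
        [np.mean(np.linalg.norm(candidates[i] - candidates[j], axis=-1)) for j in range(M)]
        for i in range(M)
    ])
    covered = np.zeros(M, dtype=bool)
    selected = []
    for _ in range(K):
        # marginal gain: newly covered candidates for each unselected trajectory
        gains = [(~covered & (dists[i] < r)).sum() if i not in selected else -1 for i in range(M)]
        best = int(np.argmax(gains))
        selected.append(best)
        covered |= dists[best] < r
    return selected
```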
As shown in Table III, selecting trajectories using the scoring network with non-maximum suppression achieves the best results, with an average \(7\%/15\%\) improvement on minADE\({}_{20}\)/minFDE\({}_{20}\) compared to the case where we randomly sample \(20\) trajectories directly from the denoiser. It also performs up to \(4\%/5\%\) better than our post-processing selection baseline. The scoring network, however, negatively affects the performance on the _Univ_ subset. We hypothesize that the substantially smaller size of the train set compared to the test set for _Univ_ may have led to underfitting.
**Steps vs sampling** We study the effect of the number of denoising steps and samples on accuracy and latency. The results are illustrated in Figure 3. On the left, as one would expect, increasing the number of denoising steps improves the performance, although only on the minFDE metric. This, however, comes at the significant cost of increasing the latency by as much as 19 times. On the other hand, as shown in the table on the right, much better improvement can be achieved by simply increasing the number of samples. Increasing the number of samples five-fold only increases latency by \(15\%\) while improving the performance by up to \(21\%\). This highlights the effectiveness of oversampling and the proposed scoring method in improving accuracy.
## V Conclusion
We presented a novel model for road user trajectory prediction, leveraging the capabilities of diffusion models. Our approach benefited from an efficient sampling approach resulting in significant speed-up. This allows our model to oversample trajectory distributions to better capture the space of possibilities. Furthermore, we proposed a scoring network that ranks the sampled trajectories in order to select the most plausible ones. We conducted extensive evaluations on the pedestrian and autonomous driving benchmark datasets and showed that our model achieves state-of-the-art performance on a number of subsets and metrics, in particular minFDE and MR. In addition, we conducted ablative studies to highlight the effectiveness of the proposed scoring scheme and oversampling on boosting the accuracy of predictions.
Fig. 3: Ablation study on number of denoising _Steps_ and oversampling size (\(M\)). Metrics are minADE\({}_{20}\)/minFDE\({}_{20}\)/latency(ms) and reported on ETH. Colors indicate better (green) and worse (red) values and are assigned columnwise for each metric and normalized across both tables. |
2303.17141 | A declarative approach to data narration | This vision paper lays the preliminary foundations for Data Narrative
Management Systems (DNMS), systems that enable the storage, sharing, and
manipulation of data narratives. We motivate the need for such formal
foundations and introduce a simple logical framework inspired by the relational
model. The core of this framework is a Data Narrative Manipulation Language
inspired by the extended relational algebra. We illustrate its use via examples
and discuss the main challenges for the implementation of this vision. | Patrick Marcel, Veronika Peralta, Faten El Outa, Panos Vassiliadis | 2023-03-30T04:16:19Z | http://arxiv.org/abs/2303.17141v1 | # A declarative approach to data narration
###### Abstract.
This vision paper lays the preliminary foundations for Data Narrative Management Systems (DNMS), systems that enable the storage, sharing, and manipulation of data narratives. We motivate the need for such formal foundations and introduce a simple logical framework inspired by the relational model. The core of this framework is a Data Narrative Manipulation Language inspired by the extended relational algebra. We illustrate its use via examples and discuss the main challenges for the implementation of this vision.
## 1. Introduction
A data narrative (DN) is a structured composition of messages that (a) convey findings over the data, and, (b) are typically delivered via visual means in order to facilitate their reception by an intended audience (Han et al., 2017). Data narration (Peralta et al., 2017; Peralta et al., 2017) refers to the notoriously tedious process of crafting a DN by extracting insights from data and telling stories with the goal of "exposing the unanticipated" (Peralta et al., 2017) and facilitating the understanding of insights. Data narration is practiced in many domains and by various domain experts, ranging from data journalists to public authorities.
Consider the two infographics of Figure 1. These two infographics can be seen as "physical" representations of DNs. In (Han et al., 2017), a conceptual model for DNs was proposed. The goal of the present paper is to propose a logical representation of DNs, to bridge the gap between the physical and conceptual representations.
Let us first briefly review the proposed conceptual model for DNs (Han et al., 2017). This model is based on 4 layers following Chatman's organisation (Chatman, 2017), who defined narrative as a pair of (a) _story_ (content of the narrative) and (b) _discourse_ (expression of it). In the conceptual model, the _factual_ layer handles the _exploration_ of facts (i.e., the underlying data) for fetching _findings_, while the _intentional_ layer models the subjective substance of the story, identifying the _messages_, _characters_ and _measures_ the narrator intends to communicate. As to the discourse, the _structural_ layer models the structure of the DN, its plot being organized in terms of _episodes_, while the _presentational_ layer deals with its rendering, which is communicated to the audience through visual artifacts named _dashboard components_. The interested reader is referred to (Han et al., 2017) for a deeper presentation of the model.
For instance, in the DN at the bottom of Figure 1, an episode of the DN is rendered in the upper right dashboard component, with the message indicating that measure 'stroke deaths' is 57/100000 for character 'black women'.
Figure 2 is the excerpt of this conceptual model considered in the present paper. Indeed, in this vision paper, our goal is to show the benefit of manipulating data narratives declaratively, i.e., with a formal logical data model and a manipulation language. We voluntarily keep this model and language simple, mostly inspired by the relational data model and extended relational algebra (Han et al., 2017). As will be discussed in Section 7, taking into account all the concepts of the domain will require revisiting the model and
Figure 1. Examples of data narratives
language, while keeping the flavor of the manipulation described here. These manipulations rely on the concept of _message_ which is the conceptual model's corner stone. A message is rooted in the facts analyzed, conveying essential findings that can be related to one another. The message allows introducing episodes, the building blocks of the discourse. Each episode of the discourse is specifically tied to a message which it aims to convey, with dashboard components being their presentational counterparts.
While the Web abounds with DNs, manipulating them in a declarative way using the concepts of this model has not yet been proposed, to our knowledge. This paper aims at filling this gap, by envisioning a DN Management System (DNMS), the foundations of which should include a logical layer enabling the declarative manipulation of DNs.
The outline of the paper is the following. Section 2 motivates the need for a logical layer. Section 3 introduces the logical model for DNs and Section 4 the algebra for manipulating DNs. Section 5 illustrates the language, while Section 6 presents related work and Section 7 concludes this vision paper by discussing the main challenges for the implementation of a DNMS.
## 2. Motivation
As indicated above, the cornerstone of a DN is a message, which associates characters with measures. Intuitively, a DN is an ordered set of messages and will be manipulated based on the characters and measures its messages deal with.
We list below simple queries that should be expressed over DNs. Each one corresponds to an operation of the algebraic language introduced in Section 4.
* Find DNs concerning some characters or measures, e.g., DNs about stroke deaths. This _selection_ operation allows to find DNs that satisfy a given condition.
* Retain from DNs only the messages about some characters or measures, e.g., messages concerning Hispanic and Native American women. This _projection_-like operation produces new DNs keeping only a subset of messages.
* Concatenate messages of several DNs. For example, produce a DN with all messages of the DNs of Figure 1. This _concatenation_ operation allows gathering messages.
* Remove duplicate messages in DNs. This _duplicate elimination_ operation only keeps one occurrence of each message in a DN.
* Synthesize groups of messages in DNs. For instance, produce DNs aggregating messages about stroke cases and stroke deaths. This _group-aggregate_-like operation groups messages in each DN using grouping conditions and merges them using built-in merging functions.
* Manipulate sets of DNs using the classical set operations (_cross-product_, _intersection_, _union_ and _difference_).
* Change the plot of DNs, for example, arranging messages according to some measures. This _order by_-like operation allows to modify the order of messages in the DNs.
Composing these operations enables to devise complex query expressions as will be seen in more details in Section 5. We simply mention here a few complex operations that we postulate will be very useful in practice:
* Connect the dots between DNs, i.e., merge DNs that have characters, or characters and measures in common. This join-like operation can be expressed with a combination of cross product, group-aggregate and selection.
* Summarize a narrative for a particular character (e.g., from women stroke to stroke, France to Europe, etc.). This roll-up-like operation supposes to find in the DNDB narratives with messages having characters that generalize the particular character. It can be expressed using selection, cross product, projection, and group-aggregate.
* Detail DNs for a particular character. This drill-down-like operation can be seen as the inverse of the previous one and can be expressed with the same combination of selection, cross product, projection, and group-aggregate.
## 3. Logical Data Model
This section presents the data model of the logical framework. Again, in this vision paper, the goal is not to define a thorough logical layer for DNs, but instead to give the flavor of what this logical layer should be for an end-to-end DNMS.
### Data narrative components
_Atomic concepts and relations._ The atomic components of the model are characters and measures. For instance, the DN of Figure 1 (bottom) includes character 'black women' and measure 'stroke deaths'. To keep things simple, measure values and units (e.g., 57/100000) are not part of the present preliminary model. The semantics of the message is given by a predicate connecting characters and measures, in the spirit of semantic triples.
We also assume binary relations between characters as the bases for the relations between messages. Relations between characters are at the core of relations between findings (see Figure 2). In data narration, it is common to use these relations as transitions between episodes of the narrative's plot, and it has been seen that most transitions are of the following nature: specialisation, temporal or spatial (cf. e.g., (Golovolov et al., 2015)). More precisely,
* a specialization relation, noted \(c\prec c^{\prime}\), allows to classify characters in a hierarchy, for instance to indicate that black women is more specific than women,
* a spatial relation, noted \(c\vdash c^{\prime}\) to indicate that \(c\) is in a spatial relation with \(c^{\prime}\). For instance Greece \(\vdash\) France.
* a temporal relation, noted \(c\dashv c^{\prime}\), to indicate that \(c\) is in a temporal relation with \(c^{\prime}\). For instance, in Europe, Spring \(\dashv\) 2nd quarter.
* we also assume a general similarity relation noted \(c\approx c^{\prime}\) to indicate that character \(c\) is similar to character \(c^{\prime}\), for instance birth control pills is similar to abortion pills.
_Complex concepts._ A message is a tuple associating characters with measures. Formally, a message \(m\) is a tuple \(\langle C,V,P\rangle\) where \(C\) is a set of characters, \(V\) is a set of measures and \(P\) is a predicate\({}^{3}\). The simplest message is the empty message, \(\langle\emptyset,\emptyset,\emptyset\rangle\).
Footnote 3: For the sake of consistency, \(P\) is a singleton.
_Running example._ Consider the messages of Figure 3. Messages \(m_{1}\) to \(m_{5}\) and \(m_{6}\) to \(m_{8}\) are inspired from those of the DNs of Figure 1, restricting to a subset of messages and simplifying many characters. Message \(m_{9}\) is inspired from a DN about covid.
Since findings can be related to one another (e.g. findings about black women are more specific than those concerning all women), we consider that relations over characters also applies to messages. For example, among messages of Figure 3, \(m_{3}\) is more general than \(m_{1}\) regarding characters 'black women' and 'women'. Given two messages \(m,m^{\prime}\) we consider the transition
relation between them which can be one of: spatial, temporal, generalization, similarity.
Formally, for \(m=\langle C,V,P\rangle\) and \(m^{\prime}=\langle C^{\prime},V^{\prime},P^{\prime}\rangle\) it is \(mRm^{\prime}\) if \(\exists c\in C,c^{\prime}\in C^{\prime},cRe^{\prime}\) and \(R\) is one of \(\prec,\approx,\vdash,\dashv\).
### Data model
To be consistent with the conceptual description of DNs of Figure 2, we define a DN as a sequence of episodes. Each episode narrating a message, i.e., a tuple, a DN is formally defined as a tuple of tuples. We distinguish its schema, which consists in the number of messages (remember that each message has the same structure), from its instance.
Unless otherwise specified, all sets are infinite and countable. Let \(\mathcal{C}\) be a set of characters, \(\mathcal{V}\) a set of measures, \(\mathcal{P}\) a set of predicates and \(\mathcal{M}\) the set of messages \(2^{\mathcal{C}}\times 2^{\mathcal{V}}\times\mathcal{P}\). Let \(\mathcal{H}\) be a set of DN names.
_Data narrative._ The schema of a DN of length \(k\) (i.e., with \(k\) messages) is a couple \(\langle h,k\rangle\) where \(h\in\mathcal{H}\) is the DN name. A DN instance is a tuple of messages.
For the sake of readability, in what follows we will consider a DN of length \(k\) as an injective function from \(\mathcal{M}\) to \(2^{\mathbb{N}}\). For instance, the DN \(n=\langle m,m,m^{\prime}\rangle\) can be seen as the function \(n\) where \(n(m)=\{1,2\}\) and \(n(m^{\prime})=\{3\}\). We abuse notations and note \(m\in n\) if a message \(m\) appears in the DN \(n\), and \(messages(n)\) the set of messages of DN \(n\).
_Data Narrative Database (DNDB)._ A DNDB schema is a set of DN schemas and a DNDB instance is a set of DN instances.
_Example._ Continuing the running example, \(I=\{n_{1},n_{2},n_{3}\}\) is a DNDB instance that organizes the messages of Figure 3:
\(n_{1}=\langle m_{1},m_{2},m_{3},m_{4},m_{5}\rangle\); \(n_{2}=\langle m_{6},m_{7},m_{8}\rangle\); \(n_{3}=\langle m_{9}\rangle\).
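To make the data model concrete, a minimal in-memory encoding could look as follows; the class and field names, and the example predicate, are illustrative choices of ours rather than part of the formal model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    characters: frozenset   # C, e.g. {'black women'}
    measures: frozenset     # V, e.g. {'stroke deaths'}
    predicate: str          # P, kept as a single predicate name

EMPTY = Message(frozenset(), frozenset(), "")

# hypothetical messages loosely mirroring m1 and m3 of the running example
m1 = Message(frozenset({"women", "stroke"}), frozenset({"stroke deaths"}), "affects")
m3 = Message(frozenset({"black women", "stroke"}), frozenset({"stroke deaths"}), "affects")

# a DN is an ordered tuple of messages; a DNDB instance maps DN names to DNs
n1 = (m1, m3)
dndb = {"n1": n1}
```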
## 4. Data Narrative Manipulation Language (DNML)
This section presents an algebra for manipulating DNs. The focus is the description of the operators in their general form; more user-friendly constructors, especially for expressing conditions, will be discussed in Section 7. All operators have the same signature in the sense that they are applied over a DNDB instance and output a DNDB instance. Finally, note that DNs can be manipulated using their schemas, i.e., their names and lengths. However, it is preferable to also manipulate DNs with conditions over their messages, and this is why most operators rely on logical formulas for expressing these conditions. In what follows, let \(D\) be a DNDB and \(I,I_{1}\) and \(I_{2}\) be instances of \(D\).
### Constants
The first operation is the constant DN. Given a message \(\langle C,V,P\rangle\), it is simply: \(\{\langle C,V,P\rangle\}\)
### Unary operators
_Selection._ Selects the DNs in instance \(I\) that satisfy a given condition.
\(\sigma_{\varphi}(I)=\{n\in I|\varphi\}\)
where \(\varphi\) is a logical formula to express selection conditions, for instance:
* \(\exists m\in n,m=\langle C,V,P\rangle,c\in C\) (has character \(c\))
* \(\exists m\in n,m=\langle C,V,P\rangle,v\in V\) (has measure \(v\))
* \(\exists m\in n,m=\langle C,V,P\rangle,p\in P\) (has predicate \(p\))
* \(\exists m\in n,m=\langle C,V,P\rangle,\exists c^{\prime}\in C,c^{\prime}Re\) (has character in relation \(R\) with \(c,R\) being one of the relations over characters)
* \(\exists m,m^{\prime}\in n,mRm^{\prime}\) (has messages that are in relation \(R\) where \(R\) is one of the relations over messages)
* \(\forall m\in n,m=\langle\emptyset,\emptyset,\emptyset\rangle\) (has only empty messages)
_Example._ The operation \(\sigma_{\exists m\in n,m=\langle C,V,P\rangle,{}^{\prime}strokeDeath^{\prime}\in V}(I)\) looks for DNs about stroke deaths in instance \(I\) of the running example. Its output is \(\{n_{1},n_{2}\}\).
_Projection._ For each DN in \(I\), keeps only the messages satisfying a condition.
\(\pi_{\varphi}(I)=\{n|_{\{m\in n|\varphi\}}|n\in I\}\)
where \(\varphi\) is a logical formula, similar to those used for the selection and \(n|_{\{m\in n|\varphi\}}\) is the restriction of DN \(n\) to the set of messages satisfying \(\varphi\).
_Example._ The operation \(\pi_{\exists m\in n,m=\langle C,V,P\rangle,{}^{\prime}BlackWomen^{\prime}\in C}(I)\) produces DNs \(n_{4}\), \(n_{5}\) and \(n_{6}\), projecting a subset of the messages of the DNs in instance \(I\), i.e., those containing Black women as characters.
\(n_{4}=\langle m_{3}\rangle;n_{5}=\langle m_{6},m_{7}\rangle;n_{6}=\langle\rangle\).
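Building on the encoding sketched in Section 3, selection and projection reduce to simple comprehensions; the sketch below uses Python callables in place of the logical formulas \(\varphi\) and is only meant to illustrate the operators' signatures.

```python
def select(instance, phi):
    """sigma_phi: keep the DNs (as a whole) whose message tuple satisfies phi."""
    return {name: dn for name, dn in instance.items() if phi(dn)}

def project(instance, phi):
    """pi_phi: inside every DN, keep only the messages satisfying phi."""
    return {name: tuple(m for m in dn if phi(m)) for name, dn in instance.items()}

# DNs mentioning the measure 'stroke deaths' ...
about_stroke = select(dndb, lambda dn: any("stroke deaths" in m.measures for m in dn))
# ... and, per DN, only the messages whose characters include 'black women'
only_black_women = project(dndb, lambda m: "black women" in m.characters)
```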
_Duplicate-elimination._ For each DN in \(I\), keeps only one occurrence of each message.
Figure 3. Messages of the running example
Figure 2. Selection of concepts used for the logical layer
\[\delta(I)=\{n^{\prime}|n\in I,\forall m\in n,n^{\prime}(m)=min(n(m))\}\]
_Group-aggregate._ For each DN in \(I\), groups the messages using grouping conditions and aggregates each group using a specific aggregation function.
\(Y_{\varphi_{1},\ldots,\varphi_{j},agg_{1},\ldots,agg_{j}}(I)=\)
\(\{(agg_{1}(\{m\in n|\varphi_{1}\}),\ldots,agg_{j}(\{m\in n|\varphi_{j}\}))|n\in I\}\)
where the \(\varphi_{i}\) are grouping conditions on the set of messages of the DN. It is not requested that the grouping conditions partition the set of messages, which allows one message to appear in several groups. The \(agg_{i}\) are aggregation functions for merging a set of messages into one message.
_Example._ Consider the operation \(Y_{BW,WW,A,A}(I)\), where \(BW\) and \(WW\) are conditions about characters 'Black women' and 'White women' and \(A\) is an aggregation function computing the union of characters and measures in the input messages. The output DNs contain two messages, the former concerning Black women and the latter White women: \(n_{7}=\langle m_{3},\langle\emptyset,\emptyset,\emptyset\rangle\rangle;n_{8}=\langle A(m_{6},m_{7}),A(m_{6},m_{7})\rangle;n_{9}=\langle\langle\emptyset,\emptyset,\emptyset\rangle,\langle\emptyset,\emptyset,\emptyset\rangle\rangle.\)
_Full group-aggregate._ Another group-aggregate operation allows merging messages across DNs into one DN.
\(Y_{\varphi_{1},\ldots,\varphi_{j},agg_{1},\ldots,agg_{j}}^{across}(I)=\)
\(\{(agg_{1}(\{m|\varphi_{1}\}),\ldots,agg_{j}(\{m|\varphi_{j}\}))|m\in\bigcup_{ n\in I}messages(n)\}\)
_Example._ The operation \(Y_{BW,WW,A,A}^{across}(I)\), with \(BW\), \(WW\) and \(A\) as in the previous example, outputs
\(n_{10}=\langle A(m_{3},m_{6},m_{7}),A(m_{6},m_{7})\rangle\)
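Continuing the same sketch, the per-DN group-aggregate can be written as follows; the grouping conditions and merging functions are passed in, mirroring the \(\varphi_{i}\) and \(agg_{i}\) above, and `union_merge` is one possible choice for the aggregation \(A\). The across-DN variant is obtained by pooling the messages of all DNs before grouping.

```python
def group_aggregate(instance, conditions, aggs):
    """gamma: per DN, one output message per (grouping condition, aggregation) pair.
    Groups may overlap; each agg merges the matching messages into a single message."""
    return {
        name: tuple(agg([m for m in dn if phi(m)]) for phi, agg in zip(conditions, aggs))
        for name, dn in instance.items()
    }

def union_merge(messages):
    """Example aggregation A: union of characters and measures, empty message if no match."""
    if not messages:
        return EMPTY
    chars = frozenset().union(*(m.characters for m in messages))
    meas = frozenset().union(*(m.measures for m in messages))
    return Message(chars, meas, messages[0].predicate)
```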
_Order by._ Allows changing the order of messages in the DNs.
\(\tau_{\varphi_{1},\ldots,\varphi_{j},sort_{1},\ldots,sort_{I}}(I)=\)
\(\{(sort_{1}(\{m\in n|\varphi_{1}\}),\ldots,sort_{I}(\{m\in n|\varphi_{j}\}))|n \in I\}\)
where the \(\varphi_{i}\) are selection conditions over messages (like the ones used e.g., for the selection operation) and the \(sort_{i}\) are sorting functions, mapping a set of messages to a tuple of messages.
_Concatenation._ Flattens all DNs in \(I\) into one DN by concatenating all their messages.
\(\chi(I)=\{\langle n_{1},...n_{k}\rangle|n_{i}\in I\}\)
### Binary operators
The classical relational binary operations \(\times,\cup,\cap,\setminus\) have their standard set-theoretic meaning.
### Properties
We give here a few insights on the properties of the algebra; a complete study thereof is part of our future work.
Closure: By definition, each operator defines a set of DNs from a set of DNs.
_Completeness._ From a given set of messages, all narratives can be obtained using the constant and cross-product operators, which together form the core of the algebra.
_Minimality._ The minimal set of operators includes constant, cross product, both group-aggregates, order by, union and difference. All other operations can be expressed from this set. We can anticipate that some expressions will be popular and deserve to be promoted as operators, like the join in the relational algebra, which is a shortcut for cross product followed by selection.
_Some properties of the operators._ As in the case of the relational algebra, the cross product is non-commutative, admits one absorbing element (the empty set) and one neutral element (the empty DN, i.e., \(\langle\rangle\)). Set operations keep their usual properties. Note that, because DNDB instances are sets of DNs of different lengths, intersection cannot be expressed by combining cross product, selection and projection, while it is the case for the relational algebra.
## 5. Example
In this section, we give some examples of useful and easy-to-express queries in DNML. They are based on the running example.
_Comparing messages._ We first illustrate how to "join" DNs having contradictory messages for characters women and stroke:
1. select DNs about women and stroke: \(I_{1}=\sigma_{\exists m\in n,m=\langle C,V,P\rangle,\{stroke,women\}\subseteq C}(I)\)
2. group by messages with women and stroke, aggregate by keeping messages that are contradictory: \(I_{2}=Y_{\varphi,\ldots,check,drop}^{across}(I_{1})\), where \(\varphi=\exists m\in n,m=\langle C,V,P\rangle,\{stroke,women\}\subseteq C,\)\(check\) is a function merging messages that are contradictory otherwise producing an empty message, and \(drop\) is a function producing an empty message.
3. project out empty messages: \(\pi_{m\neq\langle\emptyset,\emptyset,\emptyset\rangle}(I_{2})\)
_Roll-up and drill-down._ Assume we want to see DNs about 'black women, stroke' and then to roll up from 'black women'.
1. select the DNs with characters 'black women', 'stroke': \(I_{1}=\sigma_{\exists m\in n,m=\langle C,V,P\rangle,\{{}^{\prime}black\ women^{\prime},{}^{\prime}stroke^{\prime}\}\subseteq C}(I)\)
2. select the DNs with characters more general than 'black women': \(I_{2}=\sigma_{\exists m\in n,m=\langle C,V,P\rangle,{}^{\prime}black\ women^{\prime}\prec x,x\in C}(I)\)
3. compute the cross product of found DNs: \(I_{3}=I_{1}\times I_{2}\)
4. group by messages with black women and one more general character, aggregate by merging messages: \(I_{4}=Y_{\varphi,\ldots,merge,drop}(I_{3})\), where \(\varphi=\exists m\in n,m=\langle C,V,P\rangle,({}^{\prime}black\ women^{\prime}\in C)\lor({}^{\prime}black\ women^{\prime}\prec x,x\in C)\), \(merge\) is a function merging messages, and \(drop\) is a function producing an empty message.
5. project out empty messages: \(\pi_{m\neq\langle\emptyset,\emptyset,\emptyset\rangle}(I_{4})\)
## 6. Related Work
_Data narrative modeling._ Calegari et al. (Calegari et al., 2017) proposed a narrative metamodel based on the conceptual model of (Calegari et al., 2017), to provide abstract models of data narratives. They explored the definition of model transformations for converting narrative models into HTML or a Jupyter computational notebook. Zhang et al. (Zhang et al., 2019) proposed a framework for creating data storytelling applications from three major perspectives: concept, component, and procedure, without providing any logical means. Bach et al. (Bach et al., 2017) introduce narrative design patterns, defined as "a low-level narrative device that serves a specific intent". A pattern can be used individually or in combination with others to give form to a story. Five major groups of patterns are identified: argumentation, flow, framing, emotion, engagement. For example, if the intent of the data narrator is to _persuade_ and _convince_ the audience, he can use one of the following patterns: compare, concretize, and repetition. Importantly, these patterns are not specifically related to a visualization or interaction medium.
Many approaches exist for describing DN crafting, mostly describing an essentially manual process (Bach et al., 2018; Zhang et al., 2019; Zhang et al., 2019), while others propose approaches for automatically generating simple data
narratives (Les and Yang, 2017; Yang et al., 2018; Yang et al., 2019). In all cases, no manipulation language was specifically proposed for manipulating DNs.
_Languages for DN._ A DNDB is a set of tuples of tuples of sets, i.e., a form of nested relation, albeit with tuples of different lengths. This means that the proposed language, DNML, is likely to be expressed in the nested relational algebra (NRA) (Bachordi et al., 2016). However, due to the relatively simple structure of DNs, some of NRA's operations are not needed (e.g., nesting/unnesting, powerset).
Many languages or primitives were proposed to express data exploration sessions (Yang et al., 2018; Yang et al., 2019; Yang et al., 2019). While relevant for understanding the logic behind the discovery of findings, these languages are not adapted to the manipulation of messages and are not devised as an algebra. Some are not even meant to be used by humans, since, e.g., in (Yang et al., 2018), primitives are used to generate exploration sessions through reinforcement learning.
## 7. Discussion
We close this paper by discussing some of the main challenges raised by the development of end-to-end DNMS.
_Modeling the complexity of data narration._ The logical layer proposed in this paper only covers a small portion of data narration, i.e., the complex process that goes from data exploration to the visual presentation of messages. In particular, to account for the complexity of the process, all the concepts and relations present in the model of (Yang et al., 2019) should have a counterpart in the logical layer.
For instance, the provenance of messages (how findings in the dataset were discovered), and the meaning carried to the reader should be logically modeled. For provenance, messages should be linked to a collection of findings, independently of the form of findings, as well as to the queries the findings are results of. For the semantics, one can assume extending the message predicates with user-intuitive semantics, which requires including measure values, units, quantification, etc. A challenge will be to cover all the data narration process while keeping the model and language simple enough.
_Data model._ As noted above, the data model proposed above is very close to RA. In fact, if the order of messages is not included in the data model, DNs could simply be defined as sets of messages, i.e., relations of arity 2. However, this simplicity would oblige one to encode the complexity of DNs in the queries, leading to unnatural queries. Besides, as explained in the previous paragraph, this model is meant to be extended to cover the complete conceptual model of (Yang et al., 2019).
More semantics should be added in the different layers of the conceptual model of (Yang et al., 2019). For instance, at the message level, this can be achieved by using predicates having semantics known to the reader, like in (Yang et al., 2019), or by modeling the relations between characters using e.g., property graphs. At the data exploration layer, modeling findings can be done by relating characters and measures in the spirit of (Yang et al., 2019). Semantics can also be added by modeling the intentions of the narrator using abstract primitives like the ones of (Yang et al., 2019) or narrative patterns of Bach et al. (Bachordi et al., 2016).
_Manipulation language._ As mentioned above, extending DNML will be needed to account for the complexity of the data narration process. An eye should be kept on query languages for sequences (Yang et al., 2019) and query languages for the Semantic Web (Bachordi et al., 2016). A challenge will be to keep the formalism simple enough to ensure its adoption by narrators, analysts or data enthusiasts.
This can be achieved by, e.g., (i) devising operations summarizing complex expressions that are useful in practice, in the spirit of the join for RA, (ii) devising a SQL-like language for DNs, or (iii) using built-in predicates for the logical formulas used in the operations (e.g., selection, projection, group-aggregate). In this last case, for instance, predicate \(hasChar(c)\) could be used to express that DN \(n\) has character \(c\), i.e., logical formula \(\exists m\in n,m=\langle C,V,P\rangle,c\in C\).
_RDBMS-like stack._ In the long term, the challenge will be to implement a DNMS following the model of the successful achievements of RDBMSs, notably a clear distinction of conceptual, logical and physical layers with intuitive mappings between the objects of different layers, loading facilities for populating the DNDB from existing DNs, data organization at the physical layer, including specific index mechanisms (e.g., inspired by information retrieval techniques), optimizations at the logical and physical layers, etc.
|
2309.02271 | Dual Effects of the US-China Trade War and COVID-19 on United States
Imports: Transfer of China's industrial chain? | The trade tension between the U.S. and China since 2018 has caused a steady
decoupling of the world's two largest economies. The pandemic outbreak in 2020
complicated this process and had numerous unanticipated repercussions. This
paper investigates how U.S. importers reacted to the trade war and worldwide
lockdowns due to the COVID-19 pandemic. We examine the effects of the two
incidents on U.S. imports separately and collectively, with various economic
scopes. Our findings uncover intricate trading dynamics among the U.S., China,
and Southeast Asia, through which businesses relocated portions of their global
supply chain away from China to avoid high tariffs. Our analysis indicates that
increased tariffs cause the U.S. to import less from China. Meanwhile,
Southeast Asian exporters have integrated more into value chains centered on
Chinese suppliers by participating more in assembling and completing products.
However, the worldwide lockdowns over pandemic have reversed this trend as,
over this period, the U.S. effectively imported more goods directly from China
and indirectly through Southeast Asian exporters that imported from China. | Wei Luo, Siyuan Kang, Sheng Hu, Lixian Su, Rui Dai | 2023-09-05T14:37:14Z | http://arxiv.org/abs/2309.02271v1 | ### Main Manuscript for Dual Effects of the US-China Trade War and COVID-19 on United States Imports: Transfer of China's industrial chain?
###### Abstract
The trade tension between the U.S. and China since 2018 has caused a steady decoupling of the world's two largest economies. The pandemic outbreak in 2020 complicated this process and had numerous unanticipated repercussions. This paper investigates how U.S. importers reacted to the trade war and worldwide lockdowns due to the COVID-19 pandemic. We examine the effects of the two incidents on U.S. imports separately and collectively, with various economic scopes. Our findings uncover intricate trading dynamics among the U.S., China, and Southeast Asia, through which businesses relocated portions of their global supply chain away from China to avoid high tariffs. Our analysis indicates that increased tariffs cause the U.S. to import less from China. Meanwhile, Southeast Asian exporters have integrated more into value chains centered on Chinese suppliers by participating more in assembling and completing products. However, the worldwide lockdowns over pandemic have reversed this trend as, over this period, the U.S. effectively imported more goods directly from China and indirectly through Southeast Asian exporters that imported from China.
###### Contents
* 1 Introduction
## Introduction
In the realm of global trade, the United States and China, as the world's leading economies, have become embroiled in a protracted trade war since 2018, resulting in an unprecedented escalation of retaliatory tariffs. This contentious trade tension has significantly reshaped established worldwide supply-chain networks that have evolved over decades of international commerce, with enduring ramifications extending beyond the current year [1, 2]. These mutually escalating tariffs, encompassing a staggering volume of approximately $600 billion in trade flows, have profoundly impacted bilateral trade across product categories over the global supply chain [3, 4]. The intricate consequences of this trade war have been further compounded by the global outbreak of the COVID-19 pandemic, one of the most significant pandemics in human history. The existing literature, however, has primarily focused on the effects of the trade war or the pandemic from the perspective of individual economies while overlooking the dynamic nature of the supply chain network over the two events. The few studies examining the combined impact of the two events rely primarily on theoretical calibration or simulation prediction. As a result, little is empirically known about the responses of economies with high spatiotemporal heterogeneity to those exceptional disruptions. Our study fills this void by applying innovative methodologies over a unique data set. We illustrate how the two events jointly reshaped the global supply chain networks till 2021, offer new insights into the economic consequences for various parties, and shed light on the future development of geoeconomic relationships.
Against the backdrop of these two far-reaching global events, the landscape of global trade and the global supply chain has become remarkably complex and sophisticated. Numerous studies have endeavored to examine the economic implications of the trade war and the COVID-19 pandemic individually [5, 6, 7, 8, 9, 10]. The ongoing trade dispute between the United States and China has significantly influenced worldwide economic operations, leading to the restructuring of supply chains and generating extensive consequences throughout various industries, nations, and geographical areas. Regarding the domestic ramifications within the United States, industries associated with manufacturing and technology, specifically those with close ties to China, such as electronics, machinery, and automobiles, have encountered notable disruptions and escalated expenses [11, 12]. Some other studies have further explored the deeper impact of the US-China trade war on the pass-through of tariffs to importers and on welfare by including tariff-related variables in the model [4]. Moreover, the trade conflict has instigated a noteworthy reorganization of worldwide supply chains, in which Southeast Asian countries such as Vietnam, Thailand, and Malaysia have emerged as the primary recipients [13]. Numerous corporations endeavored to broaden their product portfolios, resulting in a notable upswing in manufacturing operations within those Southeast Asian nations [14, 15, 16, 17]. Several other nations and territories, including the European Union, Mexico, and South Korea, have endeavored to leverage the possibilities arising from supply chain relocation [18, 19]. The reorganization of worldwide supply chains is exerting an impact on the intricacies of global commerce and manufacturing networks.
The reallocation of global supply chains due to the trade war has attracted scholars from numerous fields, including geography, logistics, management science, and economics. Economists have developed theoretical frameworks to comprehend the mechanism of value chain reallocation. Using various models, one body of economic literature suggests that tariffs could harm bilateral trade [13, 20]; meanwhile, another demonstrates that exogenous shocks in product cost and market size would result in the reallocation of a firm's business activities [21, 22, 23]. Recent research [24] studies the reallocation effects caused by the U.S.-China trade war and suggests that, under typical circumstances, countries that substitute or complement Chinese or U.S. goods on the global supply chain benefit from the trade war [24]. Based on the presented premises, we expect that a trade war between the U.S. and China will inevitably lead to a decline in bilateral trade between the two countries. Furthermore, according to the gravity theory of trade [25, 26, 27], an economy will gravitate toward trading with its neighbors with similar cultural preferences and development stages. Therefore, we hypothesize that China's neighboring developing economies will be the initial beneficiaries of value chain allocations.
The intricate nature of the alterations in worldwide economic activity resulting from the abrupt emergence of the COVID-19 pandemic is noteworthy. Initially, the COVID-19 pandemic caused significant disturbances to worldwide supply chains, resulting in extensive manufacturing, transportation, and commercial exchange disruptions [28, 29, 30]. The implementation of trade embargoes, limitations on travel, and the cessation of manufacturing operations caused disruptions in the transportation of goods, resulting in impediments and scarcities within supply chains [31, 32, 33, 7, 34, 35]. However, previous literature on epidemics and other supply chain disruptions provides us little theoretical guidance [36], making it difficult to postulate how the COVID-19 pandemic would affect the global supply chain allocation process. Only a handful of theories related to pandemic disruptions of the supply chain come from management disciplines. Those theories are usually founded on two propositions [32]: 1) a surge in demand for essential products and 2) a significant contraction in raw material supply constraining production capacity over the pandemic period. Furthermore, global supply chains have experienced spatiotemporal heterogeneity of vulnerability because of the effectiveness of COVID-19 containment and the development and distribution of vaccines in different regions and countries [37, 38]. China led Asia-Pacific economies in showing remarkable resilience in securing the global supply chain because they efficiently contained the virus at an early stage [39, 40]. Combining the theoretical components and observed facts, Chinese businesses retained a relatively stable labor supply and suffered less from a lack of raw materials because of their considerably self-sufficient value chain system. Thus, we expect to observe a reverse flow of supply chains previously diverted from China due to the US-China trade war, especially in industries including healthcare, pharmaceuticals, and high technology [41]. Nonetheless, the extent to which the two significant incidents could have a net effect is an empirical question we will investigate in the current article.
Although some studies have attempted to examine the combined impact of the US-China trade war and COVID-19, most are either analytical or rely on model calibration or simulation predictions
due to the inherent constraints of time and data availability [42, 43]. Furthermore, most focus on individual countries and overlook the trading dynamics among various parties. In contrast, our research provides the first empirical study to assess the two events' individual and combined impact by incorporating diverse economies and leveraging comprehensive data until 2021. By adopting a broader perspective encompassing global, continental, regional, and national dimensions, our study offers a more comprehensive understanding than spatially focused analyses concentrating solely on specific regions or countries [44, 45, 46]. Moreover, we delve into the economic sectoral scale, investigating the common changes observed across all economic goods and the variations in how specific categories of goods are affected across different regions. Through a thorough analysis of US imports and the inclusion of an examination of Chinese exports to specific Asian countries, our research provides a more holistic and elucidating picture of how the triangular trade relationship between the US, China, and Southeast Asia has evolved as a result of these two significant, sequential events.
## Materials and Methods
### Materials
In our investigation, we employed the UN Comtrade database to explore the intricate realm of both US imports and China's exports. The UN Statistics Division collects, processes, and validates the data from approximately 200 countries, representing over 99% of the world's merchandise trade. We use the import or export records provided by the originating countries to ensure data accuracy, originality, and consistency in our analysis. In particular, for analyzing United States imports, we utilize trade records the US provides to UN Comtrade, which identify the United States as a reporter and other nations as its trading partners. Similarly, we analyze China's exports using data gathered by Chinese authorities.
We acquired an extensive compilation of monthly trade data, meticulously disaggregated at the 6-digit HS code level, facilitating a nuanced comprehension of distinct economic sectors. Specifically, the 6-digit HS code system identifies 5,439 categories of bilaterally traded products in our data. Following international trade literature [24], we further partition products into nine categories: agriculture, apparel, chemicals, materials, machinery, metals, minerals, transport, and miscellaneous (Table S2). Our dataset on US imports initially consisted of 6,403,989 trade records from 2015 to 2021. These trade records comprised data fields such as Trading Month, Reporting and Partner Country, Trade Direction, 6-digit HS Code, Good Description, and Trade Value (Figure S1). We conducted data integrity checks, including detecting missing data, assessing outliers, eliminating duplicate entries, and validating data by comparing it to the exact figures from US Census Bureau statistics at the monthly level. Finally, we aggregated the monthly trade data at the 9-category level, resulting in 14,781 transposed records (Figure S2). Through a similar process, we aggregated Chinese export data from 1,868,176 trade records that used 6-digit HS codes to 480
instances of 9 categories from 2016 to 2021 (Figure S3). The omission of 2015 data in Chinese analysis is due to the unavailability of Chinese export data in the UN Comtrade database.
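As an illustration of the aggregation step, a minimal pandas sketch is given below; the file and column names are placeholders, since the exact field names of the UN Comtrade extract are not spelled out here.

```python
import pandas as pd

# hypothetical column names for the UN Comtrade extract
trade = pd.read_csv("us_imports_hs6_monthly.csv",
                    dtype={"hs6": str})          # month, partner, hs6, trade_value
hs_map = pd.read_csv("hs6_to_category.csv",
                     dtype={"hs6": str})         # hs6 -> one of the 9 product categories

# map each 6-digit HS code to its category, then sum monthly trade values
monthly = (trade.merge(hs_map, on="hs6", how="left")
                .groupby(["month", "partner", "category"], as_index=False)["trade_value"]
                .sum())
```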
### Methods
We employed an innovative methodology to analyze value chain reallocation by combining the power of iterative multi-scale event studies and visualization. To conduct our event study analysis, we first utilize a difference-in-differences OLS regression with 'time' and 'country'/'region' fixed effects. This approach mitigates the omitted variable issues associated with unobservable heterogeneity and time-specific factors. The resulting regression coefficients were assessed at 10% or lower significance levels to determine the most appropriate one for different data pairs. Compared to other empirical works, such as the gravity model and those built directly from specific models, our approach is more robust and less sensitive to variable omissions, data outliers, and theoretical oversimplifications. Our iterative approach allowed for a comprehensive examination of economic sectors at both continental and national or regional levels, contributing to a deeper understanding of the nuanced dynamics and impacts of the events analyzed. Then, to present our massive results comprehensively, we rely on various visualization representations to illustrate the relative changes over numerous metrics.
Specifically, we present a multi-scale event study model to investigate the consequential impact of two pivotal events, the trade war and the COVID-19 pandemic, on U.S. import trade. Rooted in econometrics, the event study model offers a robust means to evaluate the effects of specific interventions by contrasting outcome changes over time between treatment and comparison groups[47]. Using longitudinal data from treatment and control groups with a quasi-experimental design, this model enables the establishment of a reliable counterfactual to quantify the magnitude of changes caused by exogenous shocks. As a desirable form of event analysis, it finds diverse applications across various domains, including economics, public health, and beyond.
Given the intricate interplay of the international landscape, we postulate that the event study model is amenable to estimating the impact of crucial events, such as the trade war and the COVID-19 pandemic, on global trade centered on U.S. imports and China's exports. We propose an innovative combination of the power of visualization and this well-established approach to illustrate otherwise hard-to-comprehend patterns from the unique U.S. trade data. We aim to discern the effects of the trade war and the COVID-19 pandemic on global trade dynamics at the continent and country levels.
By accounting for the trade value variable and incorporating geographically fixed effects and monthly fixed effects, our study seeks to provide credible estimations of the numerical consequences of these events on U.S. import trade and China's export trade. The model equation is as follows:
\[\mathit{Trade}_{(i,t)}=\alpha\cdot\mathit{Treat}_{i}+\beta\cdot\mathit{Post}+\gamma(\mathit{Treat}_{i}\times\mathit{Post})+\theta_{(i)}+\delta_{(t)}+\varepsilon_{(i,t)}\]
where the U.S. import or Chinese export trade share value (\(\mathit{Trade}_{(i,t)}\)) is the explained variable. \(\mathit{Treat}_{i}\) is a dummy variable that distinguishes a focal (treatment) region or country (set to 1) from the control regions or countries (set to 0). \(\mathit{Post}\) is set to 1 for the trade war or COVID-19 pandemic period and 0 for the comparison or benchmark period. When we investigate the trade war effect or joint effect, we use the sample from 2015 to 2017 as a benchmark sample, while the sample from 2019 is the benchmark sample for the analysis of the sole effect of the pandemic. \(\mathit{Treat}_{i}\times\mathit{Post}\) is the core explanatory variable, the interaction of \(\mathit{Treat}_{i}\) and \(\mathit{Post}\). \(\theta_{(i)}\) and \(\delta_{(t)}\) are country/region and month fixed effects, respectively.
Calculated by the model, the coefficient of this variable, that is \(\gamma\), measures the effects of the trade war and the COVID-19 pandemic on U.S. import trade. Without fixed effect variables, \(\gamma\) measures the difference between the change in the focal country's or region's trade share and that of benchmark countries or regions after versus before an event [48]. Region/country and monthly fixed effects tend to subsume the treatment/post dummy variables and cause \(\gamma\) to deviate slightly from the exact difference-in-differences value. Despite the deviation, social scientists still consider that the interaction coefficient from this regression can reasonably approximate the difference-in-differences in the sample.
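As a concrete illustration, the specification can be estimated with two-way fixed effects as in the sketch below; the file and column names, and the clustering choice, are assumptions, and the formula interface is only one of several equivalent ways to fit such a model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical panel: one row per (country, month) with columns
# trade_share, treat (0/1), post (0/1), country, month
df = pd.read_csv("us_import_shares_panel.csv")

model = smf.ols(
    "trade_share ~ treat:post + C(country) + C(month)",  # treat and post are subsumed by the fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

gamma = model.params["treat:post"]   # approximates the difference-in-differences effect
```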
## Results
### The U.S. Imports Analysis
Figure 1 depicts the trend of US imports over time, in which Figure 1(a) shows a counterintuitive tendency. From 2015 to 2021, the overall value of U.S. imports steadily increased until Oct 2018, despite intensified confrontation between the U.S. government and many of its trading partners besides China [13, 49, 50]. In March 2019, the imports from US trading partners other than China (ROW) began to rebound and eventually exceeded their pre-trade war levels. The U.S. import value from China even continued to increase until Oct 2018, followed by a drop afterward, resulting in an average decline of -6.13% by the end of 2019 (Table 1(b)). The COVID-19 outbreak precipitated a substantial reduction in imports from ROW and China. However, by June 2020, US imports from ROW started to grow and reached an all-time high by the conclusion of our sample period. Despite the intensifying trade war, the imports from China over the same period increased modestly by an average of 5.69% from Phase 2 to Phase 3 (Table 1(b)).
Figure 1(b) and Table 1 together reveal the impacts of the trade war and the pandemic across economic sectors. Imports from the Minerals sector increased by an average of 28.34% from Phase 1 to Phase 2, followed by Chemicals and Miscellaneous with average increases of 18.75% and 15.37%, respectively. After the emergence of COVID-19, the pandemic significantly and negatively affected Minerals and Transport imports from Phase 2 to Phase 3, with declines of 25.34% and 12.42%, respectively. In contrast, over this period, imports of Materials, Chemicals, and Metals increased by an average of 18.27%, 17.53%, and 16.91%, respectively. These findings indicate that the impact of the trade war and the pandemic on international trade differs across geographic regions and economic sectors.
Figure 1: The monthly change of imports to the U.S. from 2015 to 2021. The unit of measurement is USD. Panel **(a)** illustrates the monthly U.S. imports from Mainland China and the rest of the world (ROW). Panel **(b)** illustrates the monthly U.S. imports across different economic sectors.
Table 1: (a) The Year-over-Year (YoY) growth rates of the average total value of U.S. imports and (b) the Year-over-Year (YoY) growth rates of the U.S. import trade value from the top 20 countries or territories with the highest exports to the U.S.

**(a)**

\begin{table}
\begin{tabular}{l|l|c c} \hline \hline
**Sector** & **Examples** & **From Phase 1 to Phase 2** & **From Phase 2 to Phase 3** \\ \hline
Agriculture & Soybeans, wine, coffee, beef & 10.31\% & 15.05\% \\
Apparel & Footwear, t-shirts, handbags & 1.57\% & 0.87\% \\
Chemicals & Medications, cosmetics & 18.75\% & 17.53\% \\
Machinery & Engines, computers, cell phones & 11.85\% & 3.54\% \\
Materials & Plastics, lumber, stones, glass & & 18.27\% \\
Metals & Copper, steel, iron, aluminum & 12.12\% & 16.91\% \\
Minerals & Oil, coal, salt, electricity & 28.34\% & -25.34\% \\
Miscellaneous & Medical devices, furniture, art & 15.37\% & 5.23\% \\
Transport & Vehicles, airplanes, parts & 5.53\% & -12.42\% \\ \hline \hline
\end{tabular}
\end{table}
Figure 2 depicts the Year-over-Year (YoY) growth rates of imports to the U.S. from the top 20 exporting countries or territories. From Phase 1 to Phase 2, exports to the U.S. rose in all of the top 20 markets except China. China had an average decline of 6.13% from Phase 1 to Phase 2 (Figure 2a), but its exports rebounded to 5.69% growth during the pandemic period from Phase 2 to Phase 3. Exports from other Asian markets, except Japan, continued to grow during this time. On the other hand, compared to the top Asian countries, most other countries, except for the Netherlands, Ireland, Russia, and Switzerland, showed a declining trend in exports to the U.S. In Europe, exports to the U.S. from the Netherlands, Ireland, Russia, Switzerland, and Italy increased tremendously from Phase 1 to Phase 2 (Figure 2a). Specifically, the Netherlands exported 75.88% more goods to the U.S. in Phase 2 than in Phase 1. Except for the United Kingdom and France, most European countries maintained or increased exports to the U.S. during the COVID-19 outbreak. In Phase 3, the export value of Switzerland increased by 53.09% compared to Phase 2. It is noteworthy that the Netherlands' growth rate declined substantially to 4.35%. The value of French exports to the U.S. declined by a significant 18.31%. A closer study of the volume variations across economic sectors indicates that the pandemic likely disrupted French exports and, presumably, its production of certain goods, such as medicine, medical devices, and equipment. Likewise, exports from the United Kingdom decreased by 15.96%.
From Phase 1 to Phase 2, the export value to the U.S. of the top 20 countries and territories located in North and South America increased slightly (Figure 2a). Nonetheless, exports from these three nations to the United States decreased throughout the pandemic, with Brazil's decline of 10.75% being the most severe.
Figure 2: The Year-over-Year (YoY) growth rates of the U.S. imports from the top 20 countries and territories with the highest exports to the U.S. **(a)** The YoY growth rates of the U.S. imports between Phase 1 (2015-2017) and Phase 2 (2019). **(b)** The YoY growth rates of the U.S. imports between Phase 2 and Phase 3 (2020-2021). Green indicates positive growth between the two phases, while red indicates decline. The size of each circle represents the magnitude of the positive or negative growth rate.
### The U.S. Imports-Event-study Analysis
In this section, we use an event-study model to examine the effects of the trade war and the COVID-19 pandemic on U.S. imports. To explore the impacts of the trade war and the pandemic, we categorize exporters to the U.S. into groups and investigate the differences in their trading activities across periods. We first split our samples from 2015 to 2021 into three phases, i.e., 2015-2017, 2019, and 2020-2021, based on various scenarios. To measure the impacts of the trade war and the pandemic, we assess the "norm" of U.S. import activities of each group with subsamples from 2015 to 2017, allowing us to control for seasonality, the business cycle, etc. To evaluate the effect of the trade war, we eliminate observations in 2018 to alleviate the short-term regulatory ambiguity and strategic procurement associated with tariff expectation or speculation. To measure the joint effects of the trade war and COVID-19, we use the same 'norm'. In contrast, to isolate the COVID-19 impact on U.S. imports, we employ measures generated from the 2019 sample to approximate the 'norm' of the trade war period. In an event-study framework analogous to standard difference-in-differences analysis, we separate our sample into focal and benchmark groups (equivalent to treatment and control groups) at the continent-sector and country-sector levels (Table S1 in the Supplementary Materials). In other words, the coefficient estimate reflects the difference in percentage changes between the treatment and the control group in the period of interest relative to the period of normality. At the continent-sector level, each focal group represents the exporters from a specific economic sector in one continent, and the corresponding benchmark group is their counterpart from the same industry in the other continents. At the country-sector level, each focal group is the exporter of one country (or territory), and the corresponding benchmark group represents the exporters in all other countries (or territories).
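For concreteness, one focal group at the country-sector level can be assembled as in the following sketch (the file and column names are hypothetical, not the authors' data schema):

```python
import pandas as pd

# Hypothetical monthly panel of import shares; column names are illustrative.
df = pd.read_csv("us_import_shares.csv")  # columns: country, sector, year, month, trade_share

def build_event_sample(data, focal_country, sector, pre_years, post_years):
    """One focal group at the country-sector level, with its benchmark group."""
    sub = data[(data["sector"] == sector)
               & (data["year"].isin(pre_years + post_years))].copy()
    sub["treat"] = (sub["country"] == focal_country).astype(int)  # focal vs. all others
    sub["post"] = sub["year"].isin(post_years).astype(int)        # event vs. "norm" period
    return sub

# Trade-war effect: "norm" = 2015-2017, post-event = 2019 (2018 is dropped).
sample = build_event_sample(df, "China", "Machinery",
                            pre_years=[2015, 2016, 2017], post_years=[2019])
```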
Figures 3 and 4 illustrate the impact of the trade war and the pandemic on U.S. imports in all economic sectors at both the continent level and the country level. Figure 3a indicates that the trade war adversely affected imports from China to the U.S. but had a mixed impact on imports from other countries or territories. The Chinese apparel, machinery, and miscellaneous sectors experienced the most significant declines, with estimated effects of -7.71, -6.89, and -5.04, respectively; these are also the economic sectors corresponding to the top goods imported by the U.S. [51]. The U.S. raised tariffs on Chinese goods from many economic sectors from 2018 to 2019, especially mechanical components, materials, and electrical equipment [52]. On the other hand, the trade war had a positive impact on U.S. apparel imports from Africa, Europe, and the rest of Asia (with values of 0.57, 0.89, and 6.03, respectively), as well as on machinery imports from Mexico (1.081) and from Vietnam and Taiwan (1.654 and 1.239, respectively). In addition, there is a negative impact on materials imports from other American nations (-1.36) and from Canada (-1.385). The trade war had the least detrimental effect on U.S. imports of rare minerals from Mainland China compared to other economic sectors, likely owing to the strong dependence of the U.S. on rare materials from China [53].
When analyzing the COVID-19 pandemic, one must consider its interaction with the trade war. Figure 3b depicts the combined impact of the COVID-19 outbreak and the trade war on U.S. imports. In general, it exhibits tendencies similar to those of the trade war alone in Phase II (Figure 3a), with some noticeable differences. Over this period, U.S. imports from China were adversely impacted for most commodities, except for transportation imports, which increased with a value of 1.00. Compared to Phase II, Chinese exports of textiles, machinery, materials, and metals decreased considerably more, while the downward trends in chemical and miscellaneous imports diminished. Meanwhile, imports of agriculture, apparel, machinery, materials, and miscellaneous goods from other Asian regions grew significantly. These include apparel exports from Vietnam and India with magnitudes of 5.758 and 1.358, respectively, machinery exports from Malaysia, Korea, Taiwan, Thailand, and Vietnam with magnitudes of 0.566, 0.678, 2.405, 0.684, and 3.371, respectively, and materials exports from India, Malaysia, Thailand, and Vietnam with magnitudes of 0.701, 1.563, 1.138, and 2.111, respectively.
To separate the impact of COVID-19 from that of the trade war, we utilize metrics derived from the 2019 sample to determine the 'norm' for U.S. imports during the trade war. Figure 3c depicts the possible impact of COVID-19 on U.S. imports. COVID-19 caused the U.S. to increase imports of transportation and chemicals from China, with values of 0.88 and 1.33, respectively, but led to a decrease in agriculture, metals, and minerals imports. Focusing on machinery imports, the COVID-19 pandemic resulted in a decline in imports from the American and European continents and a rise from the rest of Asia (ROA). Seen in conjunction with Figure 4, machinery imports from Canada, France, and Germany declined during the pandemic, while imports from Malaysia, Taiwan, Korea, Thailand, and Vietnam increased. This result implies that the quick economic recovery during the early stage of the COVID-19 outbreak in China boosted machinery manufacturing activity in the downstream supply chain network near China and increased exports to the U.S., albeit not directly from Chinese exporters facing high tariffs. Conversely, machinery exports from other supply-chain clusters, such as those in Europe and the Americas, dropped dramatically, presumably due to different epidemic management policies [54]. Furthermore, the COVID-19 outbreak had a remarkable positive impact on U.S. imports of materials from other American regions, metals from European areas, and materials and miscellaneous items from Asian regions.
Figure 3: Event-study analysis of sector import shares (in percentage) by continent (or China) and sectors due to **(a)** the US-China trade war, **(b)** the US-China trade war and COVID-19 pandemic, **(c)** the COVID-19 pandemic. We benchmark each focal economic sector's monthly average market share from a continent (treatment) with all exporters outside that continent (control). We take 2015-2017 as the "normal" period (before), 2019 as the "post-event" period for (a), and 2020-2021 as the "post-event" period for (b). Moreover, we take 2019 as the "norm" period (before) and 2020-2021 as the "post-event" period for (c). The resulting regression coefficients were assessed at 10% or lower significance levels to determine the most appropriate one for different data pairs.
Figure 4: Event-study estimates at the country or territory level. The heat map shows the top 20 U.S. importers on the horizontal axis (in alphabetical order) and the nine economic sectors on the vertical axis. Each box corresponds to the impact of the event on that economic sector for that
country or territory, with positive impacts marked in green and negative impacts in red, and the colors deepening as the impact increases. Values less than -0.5 and greater than 0.5 are labeled in this figure.
### China's Exports Analysis
Figure 5 illustrates growth in China's exports to eight Asian countries (or territories) that were among the top 20 trade partners exporting to the U.S. In terms of value, China's exports to these eight countries and territories are comparable to its exports to the U.S. For example, in 2017, China's total export amount was $446 billion to those eight countries or territories and $563 billion to the U.S. In 2022, these figures became $642 billion and $541 billion, respectively. The substantial expansion of imports from China in this region suggests a growing dependency of the Asian supply chain on Chinese exporters. Figure 5a and Table 2a together show that China's exports to these countries and territories exhibited a similar upward trend from 2016 to 2017. During Phase 2, China's exports to Vietnam, Malaysia, Singapore, and Thailand increased by 47.49%, 31.39%, 22.43%, and 20.40%, respectively, relative to Phase 1. Due to pandemic lockdowns, China's exports to these countries and territories declined significantly in Feb 2020, followed by a rapid rebound one month later.
Table 2b reveals that China's export dynamics to these eight Asian countries or territories correspond to U.S. import dynamics in Table 1. For example, from Phase 1 to Phase 2, China's mineral exports increased the most, by 48.21%, whereas the U.S. had its most significant import increase in the same sector, at 28.34%.
Figure 5: The monthly change in China’s exports to eight Asian countries and territories that were among the top 20 trade partners exporting to the U.S. from 2016 to 2021. The Comtrade database does not include Chinese export records for 2015. Panel **(a)** displays monthly exports by countries and territories. Panel **(b)** illustrates the monthly exports at the economic sector level.
Moreover, the exports of miscellaneous materials, equipment, and chemicals from China rose, while U.S. imports from the same economic sectors increased significantly. Between Phases 2 and 3, China's mineral exports decreased dramatically, by 20.40%, mirroring the changing pattern of U.S. mineral imports. Meanwhile, China exports more minerals, chemicals, and metals to these countries and territories, while the U.S. imports more goods from the same sectors.
\begin{table}
\begin{tabular}{c|c c} \hline \hline & **From Phase 1 to Phase 2** & **From Phase 2 to Phase 3** \\ \hline
**India** & 18.36\% & 9.74\% \\
**Japan** & 7.49\% & 7.65\% \\
**Malaysia** & 31.39\% & 29.45\% \\
**Singapore** & 22.43\% & 2.96\% \\
**Rep. of Korea** & 13.00\% & 17.74\% \\
**Taiwan** & 30.98\% & 25.70\% \\
**Thailand** & 20.40\% & 31.48\% \\
**Viet Nam** & 47.49\% & 28.60\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: The annual change in China's exports to eight Asian countries or territories that were among the top 20 trade partners exporting to the U.S. Phase 1, covering the period from 2016 to 2017, represents the pre-trade war period. Phase 2 is 2019, covering the trade war period. Phase 3 spans from 2020 to 2021, when the trade war and the COVID-19 pandemic overlap.
### China's Exports-Event-Study Analysis
During the same study period, we also conduct an event-study analysis to determine the combined effect of the trade war and the pandemic on China's exports to these eight Asian countries or territories. We employ the same three baseline periods as in the U.S. imports event-study analysis, except that the first baseline period begins in 2016 due to the absence of 2015 China export data in the Comtrade database. Each treatment group represents a particular economic sector from one of these Asian countries and territories, while each control group represents the same sector in the remaining seven Asian countries and territories.
Figure 6a demonstrates that the commodities with the most significant decreases in China's exports to the U.S. during the US-China trade war (Figure 3a) showed significant increases in exports from China to developing Southeast Asian countries. In particular, we find that exports of apparel, machinery, and miscellaneous goods from China to Asian countries such as Vietnam, Malaysia, and Thailand significantly increased. In the machinery, minerals, and textiles sectors, there is a substantial alignment between Vietnam's imports from China and its exports to the U.S. (Figure 4).
Figure 6b illustrates the combined impact of the trade war and COVID-19 on China's exports. China's exports in the machinery and materials sectors expanded rapidly to Malaysia, Thailand, and Vietnam, from which the U.S. imported significantly more. For instance, machinery exports from China to Vietnam and Malaysia had positive and significant coefficient estimates of 8.04 and 1.94, respectively, while its materials exports to Vietnam, Thailand, and Malaysia had positive coefficient estimates of 7.04, 2.08, and 1.8, respectively.
Figure 6c depicts the isolated impact of the COVID-19 pandemic on China's exports to these eight Asian countries or territories. Read together with Figure 3c, it indicates that China's exports of chemicals to the U.S., Thailand, and Vietnam increased, suggesting that Chinese suppliers played a crucial role in the worldwide fight against COVID-19. Over the same period, the U.S. increased machinery imports from ROA, while China's machinery exports to these countries followed differing patterns: China's machinery exports to Japan and the Republic of Korea declined dramatically, whereas the exports to Thailand, Vietnam, and Taiwan grew significantly alongside their increased demand for medical device production [55].
**Figure 6.** Event-study analysis of sector export shares (in percentage) by 8 Asian trading partners due to (a) the US-China trade war, (b) the US-China trade war and COVID-19 pandemic, (c) the COVID-19 pandemic. We benchmark each focal economic sector's monthly average export share to a country/territory (treatment) with all importers outside that country/territory (control). We take 2016-2017 as the "normal" period (before), 2019 as the "post-event" era for (a), and 2020-2021 as the "post-event" period for (b). Moreover, we take 2019 as the "norm" period (before) and 2020-2021 as the "post-event" era for (c). The resulting regression coefficients were assessed at 10% or lower significance levels to determine the most appropriate one for different data pairs.
## Discussion
This study demonstrates that the United States' imposition of increased tariffs had a significant impact on China's exports to the United States during the early stage of the trade war before the COVID-19 outbreak, forcing China to relinquish its market share to other countries. This transfer of market share exhibited geographic heterogeneity. Some Southeast Asian emerging economies, such as Vietnam, Malaysia, and Thailand, benefited the most from China's diversion of exports away from the United States, while the European Union also potentially gained a modest advantage from the trade conflict [8, 56, 57, 58, 59, 60]. The emerging economies of Southeast Asia (i.e., Vietnam, Malaysia, Thailand) have not only gained market share that originally belonged to China, but have also taken over part of the value chain transferred from China [61, 62, 63, 17, 64]. Our empirical study confirms this observation, showing that both the European Union and the emerging economies of Southeast Asia experienced an increase in exports to the U.S. In addition, our study reveals that these emerging economies also had surging imports from China in the economic sectors whose exports from China to the U.S. were most negatively affected (i.e., apparel, machinery, miscellaneous). This suggests that Southeast Asia has undeniably become more integrated into the global supply chain system in segments traditionally dominated by Chinese suppliers.
Our research further shows that countries that had previously benefited from the effects of the trade war between the U.S. and China faced spatiotemporally heterogeneous challenges when the COVID-19 outbreak hit. For example, most Asian countries demonstrated more resilience than European and American countries in terms of their exports to the U.S. Previous research also shows that China led the Asia-Pacific countries in securing global supply chain networks because of their disease control effectiveness [65, 66]. Our research affirms the exceptional resilience of China's supply chain, enabling the country to swiftly recover from the impact of the COVID-19 pandemic. Additionally, we find that the negative consequences of the US-China trade war were significantly diminished during this period. This can be attributed to China's proactive implementation of stringent prevention and control policies in the early stages of the pandemic, as well as its well-developed industry chain system. As a result, Chinese exporters gained a competitive advantage in sustaining continuous production following the COVID-19 outbreak. This has limited the options available to US importers, as the pandemic continuously disrupted global supply networks outside of China across successive COVID-19 waves, deterring value chains from relocating out of China.
Our analysis uncovers a growing triangular trade relationship involving China, Southeast Asia, and the United States, shaped by the trade war and COVID-19. Several key factors contribute to this trend. Firstly, the existing role of China as a pivotal hub within the Southeast Asian and East Asian trade bloc, connected through cross-border value chains, predates the trade conflict [67]. Secondly, the significant cost advantage derived from concentrated value chains in China presents challenges for upstream and downstream companies seeking to relocate [68]. Thirdly, after decades of globalization relying on comparative advantages, the complexity of reallocating production capacities and skilled labor from China to other nations poses a substantial burden [9]. Lastly, the economic strain on the United States and the costs associated with relocating value chains act as barriers to their migration from China [69]. Our findings indicate that, under the influence of the trade war, sectors such as apparel, machinery, and miscellaneous goods experienced a decline in exports to the United States, which corresponds to an increase in exports of these sectors to Southeast Asia. This suggests the formation of a triangular trade relationship, wherein companies opt to transfer assembly processes to Southeast Asia to avoid additional U.S. tariffs. As a result, businesses can achieve cost savings by importing semi-finished goods from China, conducting assembly in non-China regions, and subsequently exporting to the United States. The COVID-19 pandemic reinforced this triangular trade relationship because the U.S. had to increase imports both directly from China and indirectly through Southeast Asian countries that themselves imported from China.
In conclusion, the US-China trade war has had substantial implications for the global supply chain, and the subsequent COVID-19 pandemic has further reshaped it. The escalated trade tension has shifted some segments associated with certain Chinese products and deepened the decoupling between the world's two largest economies. However, the trade war initiated by the U.S. may not accomplish the objective of benefiting U.S. businesses, especially those depending on overseas suppliers, or of passing the cost of tariffs on to their Chinese suppliers. The existing literature investigates the tariff mechanisms and calibrates the magnitude of tariff pass-through within economic models [42, 46], finding that the trade war is not harmless to the U.S. economy even under regular conditions. In contrast, some parties in the value chain network may have adopted a tariff avoidance strategy by forming the triangular trade framework. Furthermore, China's aggressive COVID policy helped its exporters and made some of them more formidable in the global economic arena, though at a substantial cost and with long-term uncertainty. There is no doubt, however, that the ongoing trade war results in higher input costs for U.S. importers and, subsequently, higher prices for U.S. consumers, especially in the early stage of the pandemic outbreak. Even though the trade war's long-term consequences are largely unknown, we hope this research keeps policymakers and the general public abreast of these measurable economic effects associated with the trade war.
Even though our study contributes valuable insights, we acknowledge certain limitations and suggest further research directions. Future research can expand on our work by including U.S. exports to China, thereby providing a better understanding of the impacts of retaliatory tariffs on the downstream segments of the global supply chain, especially in the high-tech sector. In addition, future research might examine how the consequential effects of the trade war change in the aftermath of the pandemic, especially how various entities in global supply networks beyond the two conflicting countries respond to these dual shocks. Also, the lack of publicly accessible data prevents us from exploring the long-term impact of Chinese COVID-19 policies on the global supply chain beyond 2022.
## Description of supplemental information
Supplemental information includes two tables (Table S1 - Table S2) and seven Figures (Figure S1-S7).
## Declaration of Interests
### Competing Interest Statement:
The authors declare no competing interests.
## Acknowledgments
**Funding:** This work was supported in part by National University of Singapore FY2020 START-UP GRANT under WBS A-0003623-00-00. |
2302.00714 | A functional approach to the Van der Waals interaction | Based on a microscopic model, we use a functional integral approach to
evaluate the quantum interaction energy between two neutral atoms. Each atom is
coupled to the electromagnetic (EM) field via a dipole term, generated by an
electron bound to the nucleus via a harmonic potential. We show that the
resulting expression for the energy becomes the Van der Waals interaction
energy at the first non-trivial order in an expansion in powers of the fine
structure constant, encompassing both the long and short distance behaviours.
We also explore the opposite, strong-coupling limit, which yields a result for
the interaction energy as well as a threshold for the existence of a vacuum
decay probability, manifested here as an imaginary part for the effective
action.
In the weak-coupling limit, we also study the effect of using a general
central potential for the internal structure of the atoms. | C. D. Fosco, G. Hansen | 2023-02-01T19:14:28Z | http://arxiv.org/abs/2302.00714v1 | # A functional approach to the Van der Waals interaction
###### Abstract
Based on a microscopic model, we use a functional integral approach to evaluate the quantum interaction energy between two neutral atoms. Each atom is coupled to the electromagnetic (EM) field via a dipole term, generated by an electron bound to the nucleus via a harmonic potential. We show that the resulting expression for the energy becomes the Van der Waals interaction energy at the first non-trivial order in an expansion in powers of the fine structure constant, encompassing both the long and short distance behaviours. We also explore the opposite, strong-coupling limit, which yields a result for the interaction energy as well as a threshold for the existence of a vacuum decay probability, manifested here as an imaginary part for the effective action.
In the weak-coupling limit, we also study the effect of using a general central potential for the internal structure of the atoms.
## 1 Introduction
A celebrated manifestation of the existence of vacuum fluctuations are the Casimir, Van der Waals, and related interactions [1, 2, 3]. The second is a well-known example of an attractive force, between two neutral atoms, which results from the correlation between their dipole-moment fluctuations. That correlation, on the other hand, is mediated by the (vacuum) electromagnetic (EM) field.
In this paper, we use a microscopic model to derive the interaction energy between two neutral atoms, each one described by a static nucleus, to which an electron is bound by a harmonic potential. The coupling of each atom to the EM field is, on the other hand, implemented by a dipole term. In the approach that we follow, we evaluate the interaction energy by calculating the Euclidean (imaginary time) effective action resulting from the integration of the quantum fluctuations of the electrons and the EM field. The vacuum energy obtained thusly may be thought of as the result of taking the zero-temperature limit of the thermal free energy.
This paper is organized as follows: in Section 2 we introduce the model we use to describe the system, and the tools used to evaluate the interaction energy, in particular, its imaginary-time effective action. Then, in Sect. 3, we evaluate the static interaction energy between the two atoms, discussing different limits. We also derive an expression for the imaginary part of the energy, and interpret it in terms of a vacuum decay probability. We conclude the section by studying the case of a general central potential for the atoms, in the weak-coupling regime.
Finally, in Sect. 4, we present our conclusions.
## 2 The system and its effective action
### The model
The model that we consider in this work deals with two atoms, labeled by 1 and 2, having their centres of mass at \({\bf r}^{(1)}\) and \({\bf r}^{(2)}\), while the electrons are located at the positions \({\bf x}^{(1)}\) and \({\bf x}^{(2)}\), relative to \({\bf r}^{(1)}\) and \({\bf r}^{(2)}\), respectively. The action \({\cal S}\), a functional of the gauge field \(A\) also, is given by:
\[{\cal S}({\bf x}^{(1)},{\bf x}^{(2)},\,A\;;{\bf r}^{(1)},{\bf r}^ {(2)})= \,{\cal S}^{a}_{0}({\bf x}^{(1)})+{\cal S}^{a}_{0}({\bf x}^{(2)}) \,+\,{\cal S}^{a}_{I}({\bf x}^{(1)},\,A\;;{\bf r}^{(1)}) \tag{1}\] \[+ \,{\cal S}^{a}_{I}({\bf x}^{(2)},\,A\;;{\bf r}^{(2)})\,+\,{\cal S }^{\rm EM}_{0}(A)\;,\]
where \({\cal S}^{a}_{0}({\bf x})\) is the action for an electron in the presence of the bounding potential, \({\cal S}^{\rm EM}_{0}(A)\) the one for the free EM field, and \({\cal S}^{a}_{I}\) contains the coupling of an electron to the EM field. The \({\bf r}^{(1)}\) and \({\bf r}^{(2)}\) vectors, which appear in the action, are to be regarded as external parameters: the ones upon which the effective action will depend.
For the sake of simplicity, the action for each orbiting electron, having mass \(m\) and position \({\bf x}(t)\) relative to the nucleus, is taken to be of the form:
\[{\cal S}^{a}_{0}({\bf x})\;=\;\frac{m}{2}\,\int dt\left(\dot{\bf x}^{2}-\Omega ^{2}{\bf x}^{2}\right)\,, \tag{2}\]
since it will allow for the exact evaluation of the interaction energy. Note, however, that it may be applied to some real physical systems, like heavy muonic atoms [4, 5].
The interaction with the EM field is assumed to be given by the dipolar term:
\[{\cal S}^{a}_{I}(A;{\bf r},{\bf x})=\int dt\,q\,x_{i}\,E_{i}({\bf r})\;, \tag{3}\]
where \(q\) is the charge of the "electron" 1, and \(E_{i}\) denotes the \(i^{th}\) component of the electric field. In our conventions, \(E_{i}=F_{0i}\) with \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). Indices from the middle of the Latin alphabet: \(i,\,j,\,k,\ldots\) run from 1 to 3, while Greek ones are assumed to run from 0 to 3. Besides, we shall later on use \(\alpha,\beta,\ldots\), taking values 1 and 2, corresponding to the two atoms.
Footnote 1: We use this terminology, although the actual value of \(q\) will be assumed to be a variable which measures the strength of the EM coupling. In the same vein, the binding potential is not Coulombian but harmonic
We follow Einstein's convention: throughout this paper, a sum over repeated indices is assumed unless explicitly stated otherwise.
Here, \(A_{\mu}\) denotes the 4-potential, which has the action:
\[{\cal S}^{\rm EM}_{0}(A)=\int d^{4}x\,\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}- \frac{\lambda}{2}\,(\partial_{\mu}A^{\mu})^{2}\right]\;\;, \tag{4}\]
consisting of the standard (vacuum) Maxwell term, plus a covariant gauge-fixing term (\(\lambda\neq 0\)). We use natural units (\(c=1\), \(\hbar=1\)) and the metric signature \((+,-,-,-)\).
### Effective action
A rather convenient way to obtain the quantum interaction energy for a system composed of two or more objects is by means of its imaginary-time effective action, \(\Gamma_{\rm eff}\). Indeed, by considering a static configuration, one can extract the vacuum energy by taking the limit:
\[E_{I}\;=\;\lim_{T\to\infty}\left(\frac{\Gamma_{\rm eff}}{T}\right)\;, \tag{5}\]
where \(T\) denotes the extent of the (imaginary) time interval [6]. \(\Gamma_{\rm eff}\) results from the integration of the quantum fluctuations, yielding as a result a function of the remaining, classical degrees of freedom. Since we are interested in the _interaction_ part of the energy, we shall subtract the self-energy contributions, which are the ones that survive when the objects are infinitely far apart.
We will use for \(\Gamma_{\rm eff}\) a convenient representation in terms of a functional integral:
\[e^{-\Gamma_{\rm eff}({\bf r}^{(1)},{\bf r}^{(2)})}\,\equiv\,\frac{1} {{\cal N}}\;{\cal Z}({\bf r}^{(1)},{\bf r}^{(2)})\] \[{\cal Z}({\bf r}^{(1)},{\bf r}^{(2)})=\,\int{\cal D}{\bf x}^{(1)} \,{\cal D}{\bf x}^{(2)}\,{\cal D}A\ e^{-{\cal S}_{E}({\bf x}^{(1)},\,{\bf x}^{(2 )},\,A\,;\,{\bf r}^{(1)},\,{\bf r}^{(2)})}\;, \tag{6}\]
where \({\cal N}\) is a constant, and \({\cal S}_{E}\) is the Euclidean (Wick rotated) version of the action:
\[{\cal S}_{E}\,=\,\frac{m}{2}\,\int d\tau\left[(\dot{\bf x}^{(1)})^ {2}+(\dot{\bf x}^{(2)})^{2}+\,\Omega^{2}\Big{(}({\bf x}^{(1)})^{2}+({\bf x}^{(2 )})^{2}\Big{)}\right]\] \[+\,q\,\int d\tau\left({\bf x}^{(1)}\cdot\,{\bf E}(\tau,{\bf r}^{( 1)})\,+\,{\bf x}^{(2)}\cdot\,{\bf E}(\tau,{\bf r}^{(2)})\right)\,+\,\int d^{4} x\,\frac{1}{2}A_{\mu}(-\partial^{2})A_{\mu}\;. \tag{7}\]
Here, \(\tau\equiv x_{0}\) is the imaginary time, the metric becomes \((g_{\mu\nu})={\rm diag}(1,1,1,1)\), \({\bf E}\equiv\partial_{\tau}{\bf A}-\nabla A_{0}\), and we have adopted the Feynman (\(\lambda=1\)) gauge. The normalization constant \({\cal N}\), is chosen in such a way that the energy vanishes when the distance between atoms tends to infinity:
\[{\cal N}\,=\,\Big{[}{\cal Z}({\bf r}^{(1)},{\bf r}^{(2)})\Big{]}\,\Big{|}_{|{ \bf r}^{(1)}-{\bf r}^{(2)}|\to\infty}\;. \tag{8}\]
Note that this implies that any factor independent of \({\bf r}^{(1)}\) or \({\bf r}^{(2)}\) in \({\cal Z}\), may be discarded.
As a first step towards obtaining \(\Gamma_{\rm eff}\), we introduce an intermediate object, \({\cal S}_{\rm eff}\): the result of performing the functional integral just over \(A_{\mu}\):
\[e^{-{\cal S}_{\rm eff}({\bf x}^{(1)},\,{\bf x}^{(2)}\,;\,{\bf r}^ {(1)},\,{\bf r}^{(2)})}=\,e^{-\frac{m}{2}\int_{\tau}\,\Big{[}(\dot{\bf x}^{(1 )})^{2}+(\dot{\bf x}^{(2)})^{2}\,+\,\Omega^{2}\Big{(}({\bf x}^{(1)})^{2}+({ \bf x}^{(2)})^{2}\Big{)}\Big{]}}\] \[\times\,e^{-\frac{1}{2}\int_{x,y}\,J_{\mu}(x)\Delta_{\mu\nu}(x-y)J _{\nu}(y)}\;, \tag{9}\]
where we used a shorthand notation for the integrations, and \(J_{\mu}=J_{\mu}^{(1)}+J_{\mu}^{(2)}\), \(J_{\mu}^{(\alpha)}\) (\(\alpha=1,\,2\)) being a dipole current concentrated on each atom, given explicitly by:
\[J_{0}^{(\alpha)}(y)=\,-q\,x_{j}^{(\alpha)}(\tau)\,\delta(y_{0}- \tau)\,\frac{\partial}{\partial y_{j}}\,\delta^{3}({\bf y}-{\bf r}^{(\alpha)})\] \[{\bf J}^{(\alpha)}(y)=\,q\,\dot{\bf x}^{(\alpha)}(\tau)\,\delta( y_{0}-\tau)\,\delta^{3}({\bf y}-{\bf r}^{(\alpha)})\;\;, \tag{10}\]
and
\[\Delta_{\mu\nu}(x-y)=\delta_{\mu\nu}\,\Delta(x-y) \tag{11}\]
where \(\Delta(x-y)\) is the scalar propagator:
\[\Delta(x-y)=\int\frac{d^{4}k}{(2\pi)^{4}}\,\frac{e^{-ik(x-y)}}{k^{2}}. \tag{12}\]
Note that each current is conserved (\(\partial_{\mu}J^{(\alpha)}_{\mu}=0\)), so the result (9) is independent of the value of the constant \(\lambda\). Taking into account the form of the gauge field propagator one sees that, due to the coupling to the EM field, \(\Omega\) in the harmonic term of each atom's action gets renormalized. Keeping the same notation, \(\Omega\), now for the _renormalized_ frequency, we see that
\[{\cal S}_{\rm eff}({\bf x}^{(1)},{\bf x}^{(2)},A\,;{\bf r}^{(1)},{\bf r}^{(2) })\ =\ \frac{1}{2}\,\int_{-\infty}^{+\infty}\frac{d\nu}{2\pi}\,[\tilde{x}^{(\alpha)} _{i}(\nu)]^{*}\,K^{(\alpha\beta)}_{ij}(\nu)\,\tilde{x}^{(\beta)}_{j}(\nu)\, \tag{13}\]
where we have introduced:
\[\tilde{x}^{(\alpha)}_{j}(\nu)\equiv\int_{-\infty}^{+\infty}dt\,e^{i\nu t}\,x^ {(\alpha)}_{j}(t)\ \, \tag{14}\]
and \({}^{*}\) denotes complex conjugation.
On the other hand,
\[K^{(ab)}_{ij}(\nu)\ \equiv\ m(\nu^{2}\,+\,\Omega^{2})\delta^{(ab)}\delta_{ij} \ +\ \sigma^{(ab)}\,M_{ij}(\nu) \tag{15}\]
where \(\sigma^{(ab)}\ \equiv\ \delta^{(a1)}\delta^{(b2)}+\delta^{(a2)}\delta^{(b1)}\), while the matrix elements \(M_{ij}(\nu)\) are given by:
\[M_{ij}(\nu)\equiv q^{2}\int\frac{d^{3}{\bf k}}{(2\pi)^{3}}\,e^{i{\bf k}\cdot{ \bf r}}\,\frac{\nu^{2}\delta_{ij}+k_{i}k_{j}}{\nu^{2}+{\bf k}^{2}}\ \,\ \ \ \ {\bf r}\equiv{\bf r}^{(1)}-{\bf r}^{(2)}. \tag{16}\]
With the appropriate choice of the system of coordinates \((\hat{e}_{k_{3}}\,||\,{\bf r})\), we cast this Hermitian matrix in diagonal form: \({\rm diag}(M_{1}(\nu),M_{2}(\nu),M_{3}(\nu))\), with:
\[M_{1}(\nu) = M_{2}(\nu)=-\frac{q^{2}}{4\pi r^{3}}\,(1+r|\nu|+r^{2}|\nu|^{2}) \,e^{-r|\nu|},\] \[M_{3}(\nu) = -\frac{q^{2}}{2\pi r^{3}}\,(1+r|\nu|)\,e^{-r|\nu|}\, \tag{17}\]
where, as expected, the dependence on the relative positions of the atoms is only through their distance: \(r\equiv|{\bf r}|\).
Finally, we integrate out the electrons' coordinates relative to each atom. This is still a Gaussian functional integral which, by converting to Fourier space also the integration measure, becomes an infinite product of decoupled ordinary integrals. This means that the integral
\[e^{-\Gamma_{\rm eff}(r)}\,=\,\frac{1}{{\cal N}}\int{\cal D}{\bf x}^{(1)}\,{ \cal D}{\bf x}^{(2)}\,e^{-{\cal S}_{\rm eff}({\bf x}^{(1)},{\bf x}^{(2)},A\,; r)}\, \tag{18}\]
yields an expression for \(\Gamma_{\rm eff}\) which is proportional to the total evolution time, \(T\), as it should be for a static configuration. The interaction energy, however, is given by the ratio \(\frac{\Gamma_{\rm eff}}{T}\) in the \(T\to\infty\) limit, which is well-defined:
\[E_{I}(r)\;=\;\left[\frac{\Gamma_{\rm eff}({\bf r})}{T}\right]_{T\to\infty}\,=\, \frac{1}{2}\,\int_{-\infty}^{\infty}\frac{d\nu}{2\pi}\,\log\det(\mathds{1}-{ \rm T}(\nu)), \tag{19}\]
where \({\mathds{T}}(\nu)\) is the \(3\times 3\) matrix:
\[{\mathds{T}}(\nu)\,=\,e^{-2|\nu|r}\,\left[\begin{matrix}[\xi_{\mbox{\tiny i}} (\nu,r)]^{2}&0&0\\ 0&[\xi_{\mbox{\tiny i}}(\nu,r)]^{2}&0\\ 0&0&[\xi_{\mbox{\tiny$\perp$}}(\nu,r)]^{2}\end{matrix}\right]\;, \tag{20}\]
where
\[\xi_{\mbox{\tiny i}}(\nu,r)\,=\,\frac{q^{2}(1+r|\nu|+r^{2}|\nu|^{2})}{4\pi r^ {3}m(\nu^{2}+\Omega^{2})}\;\;,\;\;\;\xi_{\mbox{\tiny$\perp$}}(\nu,r)\,=\, \frac{q^{2}(1+r|\nu|)}{2\pi r^{3}m(\nu^{2}+\Omega^{2})}\;. \tag{21}\]
## 3 Interaction energy
Computing the determinant, we have the resulting expression for the interaction energy
\[E_{I}(r)=\frac{1}{2}\int_{-\infty}^{\infty}\frac{d\nu}{2\pi}\left[2\log\left( 1-\xi_{\mbox{\tiny i}}^{2}e^{-2|\nu|r}\right)+\,\log\left(1-\xi_{\mbox{\tiny$ \perp$}}^{2}e^{-2|\nu|r}\right)\right] \tag{22}\]
which is a realization, in a particular context, of the \(TGTG\) formula [7].
By a redefinition of the integration variable: \(\nu=\frac{u}{r}\), we see that:
\[E_{I}(r)=\frac{1}{2r}\int_{-\infty}^{\infty}\frac{du}{2\pi}\, \bigg{\{} 2\log\left[1-\Big{(}\frac{q^{2}}{4\pi m\,r}\frac{1+|u|+u^{2}}{u^{2}+( \Omega r)^{2}}\Big{)}^{2}e^{-2|u|}\right] \tag{23}\] \[+\,\log\left[1-\Big{(}\frac{q^{2}}{2\pi m\,r}\frac{1+|u|}{u^{2}+ (\Omega r)^{2}}\Big{)}^{2}e^{-2|u|}\right]\bigg{\}}\;.\]
Introducing the dimensionless variable \(x\equiv\Omega r\), which measures the distance between atoms in terms of a length scale \(\sim\Omega^{-1}\), and using \(\Omega\) to measure energies: \(E_{I}(r)\equiv\Omega{\cal E}_{I}(\Omega r)\), where:
\[{\cal E}_{I}(x)=\frac{1}{x}\int_{0}^{\infty}\frac{du}{2\pi}\, \bigg{\{} 2\log\left[1-\Big{(}\frac{q^{2}}{4\pi}\frac{\Omega}{m}\Big{)}^{2}\, \frac{(1+u+u^{2})^{2}}{x^{2}(u^{2}+x^{2})^{2}}\,e^{-2u}\right] \tag{24}\] \[+\,\log\left[1-\Big{(}\frac{q^{2}}{2\pi}\frac{\Omega}{m}\Big{)}^ {2}\,\frac{(1+u)^{2}}{x^{2}(u^{2}+x^{2})^{2}}\,e^{-2u}\right]\bigg{\}}\;,\]
a result that we shall analyze below under different assumptions regarding the parameters of the system.
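Expression (24) is also straightforward to evaluate numerically. The following sketch (ours, with an illustrative value of the coupling combination \(g_{0}\equiv\frac{q^{2}}{2\pi}\frac{\Omega}{m}\)) evaluates \({\cal E}_{I}(x)\) by direct quadrature:

```python
import numpy as np
from scipy.integrate import quad

def energy(x, g0=0.1):
    """Dimensionless interaction energy of Eq. (24); g0 stands for (q^2/2pi)(Omega/m)."""
    def integrand(u):
        common = np.exp(-2.0 * u) / (x**2 * (u**2 + x**2) ** 2)
        term_a = (g0 / 2.0) ** 2 * (1 + u + u**2) ** 2 * common  # doubly degenerate mode
        term_b = g0**2 * (1 + u) ** 2 * common                   # remaining mode
        return 2.0 * np.log(1.0 - term_a) + np.log(1.0 - term_b)
    val, _ = quad(integrand, 0.0, np.inf)
    return val / (2.0 * np.pi * x)

print(energy(2.0))  # weak coupling, x = Omega * r = 2
```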
### Weak coupling and Van der Waals interaction
This corresponds to situations where keeping a few terms in the power series expansion \(\log(1-x)=-\sum_{n=1}^{\infty}\frac{x^{n}}{n}\) for each one of the logs above is a reliable approximation.
Moreover, we also assume here that that is achieved by means of the two (independent) conditions: \(\frac{q^{2}}{2\pi}\ll m/\Omega\), and \(r\gg 1/\Omega\). The first one is essentially a constraint on the maximum value of the coupling constant or, equivalently, on the size of the electric dipole fluctuations on each atom.
Under those two assumptions, the leading term in the expansion is:
\[{\cal E}_{I}(x) \sim {\cal E}_{w}(x)\] \[{\cal E}_{w}(x) \equiv -\frac{1}{16\pi^{3}}\Big{(}\frac{q^{2}\Omega}{m}\Big{)}^{2}\, \frac{1}{x^{3}}\,\int_{0}^{\infty}du\,e^{-2u}\,\frac{u^{4}+2u^{3}+5u^{2}+6u+3}{ (u^{2}+x^{2})^{2}}. \tag{25}\]
The integral in (25) can be computed exactly; to that end, we introduce the two auxiliary functions:
\[f(x) \equiv {\rm Ci}(x)\sin(x)-{\rm si}(x)\cos(x), \tag{26}\] \[g(x) \equiv -({\rm Ci}(x)\cos(x)+{\rm si}(x)\sin(x)), \tag{27}\]
where \({\rm si}(x)\equiv{\rm Si}(x)-\frac{\pi}{2}\), and \({\rm Ci}(x)\) and \({\rm Si}(x)\) are the cosine and sine integrals, respectively [8]. In terms of those functions, we have:
\[{\cal E}_{w}(x)=-\frac{1}{32\pi^{3}}\left(\frac{q^{2}\Omega}{m} \right)^{2}\frac{1}{x^{6}} \Big{(}x\,(6-x^{2})+(3-7x^{2}+x^{4})\,f(2x)\] \[+2x\,(3-3x^{2}+x^{4})\,g(2x)\Big{)}. \tag{28}\]
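Both (25) and (28) can be checked numerically with standard special-function routines; in the sketch below (function names are ours) the common prefactor \(\frac{1}{16\pi^{3}}\left(\frac{q^{2}\Omega}{m}\right)^{2}\) is set to one, and the two evaluations should coincide:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def f_aux(x):
    # f(x) = Ci(x) sin(x) - si(x) cos(x), with si(x) = Si(x) - pi/2
    si, ci = sici(x)
    return ci * np.sin(x) - (si - np.pi / 2) * np.cos(x)

def g_aux(x):
    # g(x) = -(Ci(x) cos(x) + si(x) sin(x))
    si, ci = sici(x)
    return -(ci * np.cos(x) + (si - np.pi / 2) * np.sin(x))

def E_w_integral(x):
    """Eq. (25) with the prefactor (q^2 Omega/m)^2 / (16 pi^3) set to one."""
    val, _ = quad(lambda u: np.exp(-2 * u) * (u**4 + 2 * u**3 + 5 * u**2 + 6 * u + 3)
                  / (u**2 + x**2) ** 2, 0.0, np.inf)
    return -val / x**3

def E_w_closed(x):
    """Eq. (28) with the same normalization."""
    bracket = (x * (6 - x**2) + (3 - 7 * x**2 + x**4) * f_aux(2 * x)
               + 2 * x * (3 - 3 * x**2 + x**4) * g_aux(2 * x))
    return -bracket / (2 * x**6)

x = 1.5
print(E_w_integral(x), E_w_closed(x))  # the two evaluations should agree
```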
We plot \({\cal E}_{w}(x)\) in Fig. 1. For \(x\gg 1\), result (28) reproduces the asymptotic behaviour of Van der Waals forces at long distances:
\[E_{I}(r)\sim-\frac{23}{(4\pi)^{3}}\,\Big{(}\frac{q^{2}}{m\Omega^{2}}\Big{)}^{ 2}\,\frac{1}{r^{7}}\, \tag{29}\]
in agreement with [9] (see also [10]). From (29), we identify the static electric susceptibility of the microscopic model: \(\alpha_{E}=\frac{q^{2}}{m\Omega^{2}}\), which has volume dimensions.
In terms of the original variables, and written in a way that makes the comparison with the next-to-leading term more straightforward,
\[E_{I}(r)\sim-\frac{23}{4\pi}\,\Big{(}\frac{q^{2}}{4\pi m\Omega}\Big{)}^{2}\, \frac{1}{\Omega r}\ \frac{1}{\Omega r^{6}}. \tag{30}\]
The next-to-leading term at long distances corresponds to the London limit [11, 1]. This can be picked up by extracting the next negative power of the distance or, equivalently, by evaluating the frequency integral in (22) using the approximation \(|\nu|r\simeq 0\). In this situation,
\[E_{I}(r) \simeq -\frac{1}{2}\,\int_{-\infty}^{+\infty}\frac{d\nu}{2\pi}\,\frac{1}{ (\nu^{2}+\Omega^{2})^{2}}\left[2\left(\frac{q^{2}}{4\pi r^{3}m}\right)^{2}+ \left(\frac{q^{2}}{2\pi r^{3}m}\right)^{2}\right] \tag{31}\] \[= -\frac{3}{4}\left(\frac{q^{2}}{4\pi m\Omega}\right)^{2}\frac{1}{ \Omega\,r^{6}}\]
The ratio between this term and the asymptotic one at long distances is \({\cal O}(\Omega r)\), as it should be.
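For completeness, the frequency integral behind (31) is elementary:

\[\int_{-\infty}^{+\infty}\frac{d\nu}{2\pi}\,\frac{1}{(\nu^{2}+\Omega^{2})^{2}}\,=\,\frac{1}{4\Omega^{3}}\;,\qquad 2\left(\frac{q^{2}}{4\pi r^{3}m}\right)^{2}+\left(\frac{q^{2}}{2\pi r^{3}m}\right)^{2}\,=\,6\left(\frac{q^{2}}{4\pi r^{3}m}\right)^{2}\;,\]

so that \(E_{I}(r)\simeq-\frac{1}{2}\cdot\frac{1}{4\Omega^{3}}\cdot 6\left(\frac{q^{2}}{4\pi r^{3}m}\right)^{2}=-\frac{3}{4}\left(\frac{q^{2}}{4\pi m\Omega}\right)^{2}\frac{1}{\Omega\,r^{6}}\), in agreement with (31).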
### Strong coupling: short distances and imaginary part of the energy
One should expect the interaction energy to be real. Note, however, that due to the presence of the logarithms, the interaction will have both real and imaginary parts. The existence of imaginary parts for the logarithms may be seen to depend on the value of the dimensionless ratio:
\[g\;\equiv\;\frac{q^{2}}{2\pi}\frac{1}{m\Omega^{2}r^{3}}\;. \tag{32}\]
The existence of an instability at short distances, and therefore of an imaginary part in the effective action, may be understood by an argument based on the form of the intermediate effective action \({\cal S}_{\rm eff}\), and its short-distance behaviour: the London limit. We recall that it is a quadratic form in the electrons' coordinates, with a frequency-dependent kernel \(K_{ij}^{(ab)}(\nu)\). At sufficiently small distances \(r\), we may use the instantaneous approximation for the gauge field propagator, namely, assume that \(r\ll\lambda\), and therefore use \(r\nu\simeq 0\) (see [1] and references therein).
This is in fact equivalent to replacing the gauge-field propagator by its Coulomb form. Therefore,
\[K_{ij}^{(ab)}(\nu)\;\simeq\;m(\nu^{2}\,+\,\Omega^{2})\delta^{(ab)}\delta_{ij} \;+\;\sigma^{(ab)}\,M_{ij}(0) \tag{33}\]
where \(M_{ij}(0)\) is diagonal if the orthogonal coordinate system is chosen so that \(x_{3}\) points along the direction of the line connecting the two atoms. With this choice,
\[\Big{[}M_{ij}(0)\Big{]}\,\equiv\,-\frac{q^{2}}{4\pi r^{3}}\,\left(\begin{array} []{ccc}1&0&0\\ 0&1&0\\ 0&0&2\end{array}\right)\;. \tag{34}\]
In this limit, transforming the electrons' fluctuations back from frequency to time, \({\cal S}_{\rm eff}\) becomes local in time, and may thus be interpreted as an action.
That action involves the original six harmonic modes (one for each \(x_{i}^{(\alpha)}\)) with identical oscillation frequency \(\Omega\), plus an \(r\)-dependent term which couples them. Altogether, they produce a potential which we denote by \(V_{\rm eff}\), and is still quadratic:
\[{\cal S}_{\rm eff} \simeq \int d\tau\,\Big{[}\frac{m}{2}\dot{x}_{i}^{(\alpha)}\dot{x}_{i}^{ (\alpha)}\,+\,V_{\rm eff}(\{x_{i}^{(\alpha)}\})\Big{]}\] \[V_{\rm eff}(\{x_{i}^{(\alpha)}\}) = \frac{m}{2}\Omega^{2}x_{i}^{(\alpha)}x_{i}^{(\alpha)}-\frac{q^{2} }{4\pi r^{3}}\Big{(}x_{1}^{(1)}x_{1}^{(2)}+x_{2}^{(1)}x_{2}^{(2)}+2x_{3}^{(1) }x_{3}^{(2)}\Big{)}\;. \tag{35}\]
The diagonalization of this potential is straightforward; the normal coordinates
\[x_{i}^{(\pm)}\;\equiv\;\frac{x_{i}^{(1)}\pm x_{i}^{(2)}}{\sqrt{2}}\;\;,\;\;\; i=1,\,2,\,3\;, \tag{36}\]
lead to the potential:
\[V_{\rm eff}(\{x_{i}^{(\pm)}\}) = \frac{m}{2}\sum_{i=1}^{3}\,\Big{(}(\Omega_{i}^{+})^{2}\,x_{i}^{(+)}x_{i}^{(+)}+(\Omega_{i}^{-})^{2}\,x_{i}^{(-)}x_{i}^{(-)}\Big{)}\] \[(\Omega_{1}^{\pm})^{2} = (\Omega_{2}^{\pm})^{2}\,=\,\Omega^{2}\,\mp\,\frac{q^{2}}{4\pi m r^{3}}\;\;,\;\;\;(\Omega_{3}^{\pm})^{2}\,=\,\Omega^{2}\,\mp\,\frac{q^{2}}{2\pi m r^{3}}\;. \tag{37}\]
By diagonalizing the action (35), the system becomes a set of uncoupled harmonic oscillators, whose partition function factorizes as the product of the partition functions of each individual oscillator, with frequencies \(\sqrt{(\Omega_{i}^{\pm})^{2}}\), \(i=1,2,3\) [6, 12]. The energy of the system is:
\[E_{0}=\frac{1}{2}\,\sum_{i=1}^{3}\Big{(}\sqrt{\left(\Omega_{i}^{+}\right)^{2}}+ \sqrt{\left(\Omega_{i}^{-}\right)^{2}}\,\Big{)}. \tag{38}\]
From the expressions in (37), we see that complex frequencies will appear, depending on the value of \(r\). Defining the two distances:
\[r_{1}\,\equiv\,\Big{(}\frac{q^{2}}{2\pi m\Omega^{2}}\Big{)}^{1/3}\,\,\,,\,\,\, \,r_{2}\,\equiv\,\Big{(}\frac{q^{2}}{4\pi m\Omega^{2}}\Big{)}^{1/3}\,\,\,, \tag{39}\]
we note that there is a first threshold at \(r=r_{1}\) for the existence of a complex frequency, and then another one at \(r=r_{2}<r_{1}\). Note that this explains the behaviour observed in the plot of the imaginary part of the energy as a function of \(r\). Indeed, besides the clear existence of the first threshold at \(r=r_{1}\), we also see the emergence of the second one at \(r=r_{2}\). Note also that the two modes associated with the latter are reflected in the steeper rise for \(r<r_{2}\). The real and imaginary parts of the interaction energy are shown in Fig. 2.
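The threshold structure of (37) and (39) can be made explicit with a short numerical sketch (ours; the parameter values are purely illustrative) that counts the modes with negative squared frequency at a given separation:

```python
import numpy as np

def mode_frequencies_squared(r, q2=1.0, m=1.0, omega=1.0):
    """The six squared normal-mode frequencies of Eq. (37) at separation r."""
    c = q2 / (4.0 * np.pi * m * r**3)
    return np.array([omega**2 - c, omega**2 + c,            # i = 1
                     omega**2 - c, omega**2 + c,            # i = 2
                     omega**2 - 2 * c, omega**2 + 2 * c])   # i = 3

q2, m, omega = 1.0, 1.0, 1.0
r1 = (q2 / (2.0 * np.pi * m * omega**2)) ** (1.0 / 3.0)  # first threshold, Eq. (39)
r2 = (q2 / (4.0 * np.pi * m * omega**2)) ** (1.0 / 3.0)  # second threshold, Eq. (39)

for r in (1.2 * r1, 0.5 * (r1 + r2), 0.9 * r2):
    unstable = int(np.sum(mode_frequencies_squared(r, q2, m, omega) < 0))
    print(f"r = {r:.3f}: {unstable} mode(s) with negative squared frequency")
```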
The physical interpretation of the imaginary part of the energy is that of a non-vanishing probability of vacuum decay, where the vacuum is understood to be the one used in the calculation of the effective action. Regarding the atoms, that vacuum is the tensor product of the two respective ground states. On the other hand, when the two atoms are sufficiently close to each other, it is clear that the true vacuum should be closer to that of two electrons in a molecule. That is not a tensor product of the two: rather, it should be closer to a linear combination of two atomic orbitals. That is indeed what may be seen from the form of the normal coordinates obtained in (36): the modes that destabilize the vacuum correspond to \({\bf x}^{(+)}=\frac{x^{(1)}+x^{(2)}}{\sqrt{2}}\).
### General central potential
Let us consider here the case of a more general central potential \(V\), in such a way that the action for each atom, rather than having the specific form (2), is now assumed to take the more general form:
\[{\cal S}_{0}^{a}({\bf x})\,\,=\,\,\int dt\,\Big{[}\frac{m}{2}\,\dot{\bf x}^{2} -V(|{\bf x}|)\Big{]}\,\,. \tag{40}\]
The most immediate way to compute the effect of using this potential rather than the original, harmonic one, is to evaluate the Euclidean effective
action for the redefined action in the weak-coupling regime. The lowest non-trivial contribution to \(\Gamma_{\rm eff}(r)\) is again of order \(q^{4}\),
\[\Gamma_{\rm eff}(r)\;\simeq\;\Gamma_{\rm eff}^{(4)}(r)\;, \tag{41}\]
and it may be obtained by using the properly redefined \({\cal S}_{\rm eff}\) in (18), after integrating out the EM field fluctuations, and discarding self-energy terms, the result being
\[\Gamma_{\rm eff}^{(4)}(r)\;=\;-\frac{1}{2}\,\int d^{4}x\int d^{4}y \int d^{4}z\int d^{4}w\left[\Delta(x-y)\Delta(z-w)\right.\] \[\times\left.\langle J_{\mu}^{(1)}(x)J_{\nu}^{(1)}(z)\rangle\, \langle J_{\mu}^{(2)}(y)J_{\nu}^{(2)}(w)\rangle\right]. \tag{42}\]
Here, the functional averaging is understood with the (40) action determining the respective weight; namely:
\[\langle J_{\mu}^{(\alpha)}(x)J_{\nu}^{(\alpha)}(y)\rangle\,\equiv\,\frac{ \int{\cal D}{\bf x}^{(\alpha)}\,J_{\mu}^{(\alpha)}(x)J_{\nu}^{(\alpha)}(y)\,e ^{-{\cal S}_{0}^{a}({\bf x}^{(\alpha)})}}{\int{\cal D}{\bf x}^{(\alpha)}e^{-{ \cal S}_{0}^{a}({\bf x}^{(\alpha)})}} \tag{43}\]
(no sum over \(\alpha\)). The model we are using is such that each electron is concentrated on one of the atoms, and as a consequence \(\langle J_{\mu}^{(1)}J_{\nu}^{(2)}\rangle=0\).
Figure 2: Real and imaginary parts of the interaction energy (\({\cal E}_{I}\)), as a function of \(x=\Omega r\), with \(\frac{q^{2}}{2\pi}\,\frac{\Omega}{m}=0.5\). \(x_{1}=\Omega r_{1}\) and \(x_{2}=\Omega r_{2}\) are the first and second thresholds for which \({\cal E}_{I}\) develops an imaginary part.
Recalling the definition of the currents in (10), it is clear that the above averages are going to depend on the correlators involving the coordinates and velocities of the electrons:
\[\langle x_{i}^{(\alpha)}(\tau)x_{j}^{(\alpha)}(\tau^{\prime})\rangle \;,\;\langle\dot{x}_{i}^{(\alpha)}(\tau)x_{j}^{(\alpha)}(\tau^{\prime}) \rangle\;\;,\] \[\langle\dot{x}_{i}^{(\alpha)}(\tau)\dot{x}_{j}^{(\alpha)}(\tau^{ \prime})\rangle \;,\;\langle\dot{x}_{i}^{(\alpha)}(\tau)x_{j}^{(\alpha)}(\tau^{ \prime})\rangle \tag{44}\]
(no sum over \(\alpha\)).
Using all the ingredients above, a lengthy but otherwise straightforward calculation allows us to evaluate (42). We find that it is proportional to the total time and, besides, it may be written as a single integral over a frequency:
\[\Bigl{[}\frac{\Gamma_{\rm eff}^{(4)}(r)}{T}\Bigr{]}_{T\to \infty}\;=\;-\frac{q^{4}}{2}\,\int\frac{d\nu}{2\pi}\,\widetilde{G}_{ij}(\nu) \,\widetilde{G}_{kl}(-\nu)\Bigl{[}\partial_{i}\partial_{k}\widetilde{\Delta}( \nu,r)\,\partial_{j}\partial_{l}\widetilde{\Delta}(\nu,r)\] \[-\nu^{2}\,\delta_{jl}\,\partial_{i}\partial_{k}\widetilde{\Delta }(\nu,r)\widetilde{\Delta}(\nu,r)-\nu^{2}\,\delta_{ik}\,\widetilde{\Delta}( \nu,r)\,\partial_{j}\partial_{l}\widetilde{\Delta}(\nu,r)\] \[+\nu^{4}\,\delta_{ik}\delta_{jl}\,\widetilde{\Delta}(\nu,r) \widetilde{\Delta}(\nu,r)\Bigr{]}\;, \tag{45}\]
where \(\widetilde{G}_{ij}(\nu)\) is the Fourier transform of \(\langle x_{i}^{(\alpha)}(\tau)x_{j}^{(\alpha)}(\tau^{\prime})\rangle\):
\[\langle x_{i}^{(\alpha)}(\tau)x_{j}^{(\alpha)}(\tau^{\prime})\rangle\,=\, \int\frac{d\nu}{2\pi}\,e^{i\nu(\tau-\tau^{\prime})}\,\widetilde{G}_{ij}(\nu)\;, \tag{46}\]
and
\[\widetilde{\Delta}(\nu,r)\;=\;\frac{e^{-|\nu|r}}{4\pi r}\;. \tag{47}\]
Under the assumption that the potential is central, we may, for each \(\alpha\), write:
\[\langle x_{i}^{(\alpha)}(\tau)x_{j}^{(\alpha)}(\tau^{\prime})\rangle = \delta_{ij}G(\tau-\tau^{\prime})\;,\;\;\langle\dot{x}_{i}^{( \alpha)}(\tau)x_{j}^{(\alpha)}(\tau^{\prime})\rangle\,=\,\delta_{ij}\partial_{ \tau}G(\tau-\tau^{\prime})\] \[\langle x_{i}^{(\alpha)}(\tau)\dot{x}_{j}^{(\alpha)}(\tau^{ \prime})\rangle = -\delta_{ij}\partial_{\tau}G(\tau-\tau^{\prime})\;,\;\langle \dot{x}_{i}^{(\alpha)}(\tau)\dot{x}_{j}^{(\alpha)}(\tau^{\prime})\rangle\,=\,- \delta_{ij}\partial_{\tau}^{2}G(\tau-\tau^{\prime})\;, \tag{48}\]
in terms of a single scalar function \(G\) (we recall that the atoms are assumed to be identical).
Therefore, the energy of interaction becomes:
\[E_{I}\;=\;-\frac{q^{4}}{16\pi^{3}r^{2}}\,\int_{0}^{\infty}\,d\nu\,\Bigl{|} \widetilde{G}(\nu)\Bigr{|}^{2}\,e^{-2\nu r}\,\Bigl{(}\nu^{4}+\frac{2\nu^{3}}{ r}+\frac{5\nu^{2}}{r^{2}}+\frac{6\nu}{r^{3}}+\frac{3}{r^{4}}\Bigr{)}\;. \tag{49}\]
Introducing yet again the variable \(u\equiv\nu r\),
\[E_{I}\;=\;-\frac{q^{4}}{16\pi^{3}r^{7}}\,\int_{0}^{\infty}du\,e^{-2u}\,\Bigl{|} \widetilde{G}(\frac{u}{r})\Bigr{|}^{2}\,\Bigl{(}u^{4}+2u^{3}+5u^{2}+6u+3\Bigr{)}\;. \tag{50}\]
When \(\widetilde{G}(\nu)\) has a finite zero-frequency limit, we can extract the long-distance behaviour of the interaction energy in terms of that limit. Besides,
\[\widetilde{G}(0)\:=\:\frac{1}{3}\:\int_{-\infty}^{+\infty}\frac{d\nu}{2\pi}\: \langle\left|\tilde{x}_{i}(\nu)\right|^{2}\rangle\;. \tag{51}\]
Thus, the asymptotic form of the energy is:
\[E_{I}\:=\:-\frac{q^{4}}{16\pi^{3}r^{7}}\:\Big{|}\widetilde{G}(0)\Big{|}^{2}\: \int_{0}^{\infty}du\,e^{-2u}\left(u^{4}+2u^{3}+5u^{2}+6u+3\right)\,. \tag{52}\]
By evaluating the integral, we get:
\[E_{I}\:=\:-\frac{23}{(4\pi)^{3}}\:\frac{q^{4}}{r^{7}}\:\Big{|}\widetilde{G}(0) \Big{|}^{2}. \tag{53}\]
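The numerical coefficient in (53) follows from the elementary moments \(\int_{0}^{\infty}du\,u^{n}\,e^{-2u}=n!/2^{n+1}\), which give

\[\int_{0}^{\infty}du\,e^{-2u}\left(u^{4}+2u^{3}+5u^{2}+6u+3\right)=\frac{3}{4}+\frac{3}{4}+\frac{5}{4}+\frac{3}{2}+\frac{3}{2}=\frac{23}{4}\;,\]

and \(\frac{q^{4}}{16\pi^{3}}\cdot\frac{23}{4}=\frac{23\,q^{4}}{(4\pi)^{3}}\).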
In the special case of the three-dimensional harmonic potential we just considered before, we have, \(\widetilde{G}(0)=\frac{1}{m\Omega^{2}}\), which reproduces our previous result.
Finally, note that we can also find the result for the London limit in the case of a general central potential, since in that approximation we get:
\[E_{I}\:=\:-\frac{3}{4}\left(\frac{q^{2}}{4\pi}\right)^{2}\frac{\pi}{r^{6}}\: \int_{0}^{\infty}d\nu\left|\widetilde{G}(\nu)\right|^{2}\,, \tag{54}\]
which again produces the right result for the harmonic potential case.
## 4 Conclusions
We have presented a derivation of some known expressions for the Van der Waals interaction energy between two atoms, based on a microscopic description of the system, and applying functional methods: the energy is obtained by functional integration of the degrees of freedom in the Euclidean formalism, in order to obtain the vacuum energy from the resulting effective action.
We have analyzed the region where the description begins to fail, namely, when the atoms are too close, and the dipole interaction may overcome the binding energy of the electrons to their respective nuclei. This phenomenon shows up as the emergence of an imaginary part in the energy, and the consequent vacuum decay probability per unit time.
We suggest that beyond such a limit a molecular description should be the proper framework to describe the physics of the system.
Our results may be interpreted as providing a lower bound for the distances to which one can apply the usual Van der Waals description, in terms of parameters related to the structure of the atoms (in our case, \(m\) and \(\Omega\)), and the electromagnetic coupling (\(q\)). For distances larger than what we denoted by \(r_{1}\), an effective description for the interaction energy like the one we have used should be reliable.
## Acknowledgements
The authors thank ANPCyT, CONICET and UNCuyo for financial support.
|
2306.03932 | Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA
Tasks? A: Self-Train on Unlabeled Images! | Finetuning a large vision language model (VLM) on a target dataset after
large scale pretraining is a dominant paradigm in visual question answering
(VQA). Datasets for specialized tasks such as knowledge-based VQA or VQA in non
natural-image domains are orders of magnitude smaller than those for
general-purpose VQA. While collecting additional labels for specialized tasks
or domains can be challenging, unlabeled images are often available. We
introduce SelTDA (Self-Taught Data Augmentation), a strategy for finetuning
large VLMs on small-scale VQA datasets. SelTDA uses the VLM and target dataset
to build a teacher model that can generate question-answer pseudolabels
directly conditioned on an image alone, allowing us to pseudolabel unlabeled
images. SelTDA then finetunes the initial VLM on the original dataset augmented
with freshly pseudolabeled images. We describe a series of experiments showing
that our self-taught data augmentation increases robustness to adversarially
searched questions, counterfactual examples and rephrasings, improves domain
generalization, and results in greater retention of numerical reasoning skills.
The proposed strategy requires no additional annotations or architectural
modifications, and is compatible with any modern encoder-decoder multimodal
transformer. Code available at https://github.com/codezakh/SelTDA. | Zaid Khan, Vijay Kumar BG, Samuel Schulter, Xiang Yu, Yun Fu, Manmohan Chandraker | 2023-06-06T18:00:47Z | http://arxiv.org/abs/2306.03932v1 | # Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks?
###### Abstract
Finetuning a large vision language model (VLM) on a target dataset after large scale pretraining is a dominant paradigm in visual question answering (VQA). Datasets for specialized tasks such as knowledge-based VQA or VQA in non natural-image domains are orders of magnitude smaller than those for general-purpose VQA. While collecting additional labels for specialized tasks or domains can be challenging, unlabeled images are often available. We introduce **SelTDA** (**Sel**f-**T**aught **D**ata **A**ugmentation), a strategy for finetuning large VLMs on small-scale VQA datasets. SelTDA uses the VLM and target dataset to build a teacher model that can generate question-answer pseudolabels directly conditioned on an image alone, allowing us to pseudolabel unlabeled images. SelTDA then finetunes the initial VLM on the original dataset augmented with freshly pseudolabeled images. We describe a series of experiments showing that our self-taught data augmentation increases robustness to adversarially searched questions, counterfactual examples and rephrasings, improves domain generalization, and results in greater retention of numerical reasoning skills. The proposed strategy requires no additional annotations or architectural modifications, and is compatible with any modern encoder-decoder multimodal transformer. Code available at [https://github.com/codezakh/SelTDA](https://github.com/codezakh/SelTDA).
## 1 Introduction
Large, pretrained vision language foundation models [3, 20, 25, 26, 35, 49] are approaching human-level performance on visual question answering (VQA) [26, 50, 51, 52, 54, 62], as measured by the standard VQAv2 [13] benchmark. Yet on more complex VQA tasks [37, 43] there is a larger gap between humans and machines. One difficulty is the small scale of datasets for complex VQA tasks or those in domains beyond natural images. The most direct way to deal with this data scarcity is to employ transfer learning from a larger VQA dataset (e.g. VQAv2) to the smaller, specialized VQA dataset. However, weaknesses of VQA models such as lack of consistency [44], vulnerability to adversarially searched questions [27], and a tendency to cheat by learning shortcuts [8] can be exacerbated when fine-tuning on small datasets.
Collecting annotations to expand a dataset for knowledge-intensive tasks or specialized domains is often prohibitively expensive. However, _unlabeled images_ are cheap and often available. How can we exploit unlabeled images for specific visual question answering tasks? One possibility is to generate new question+answer pairs for the unlabeled images, and use them during training. However, existing methods for visual question _generation_ require images with annotations -- either ground truth captions [2, 4], or bounding boxes [21, 48]. Even if these annotations were to be acquired, they induce a limited set of possible questions; they are limited to objects and concepts included in the acquired annotation, which are in turn limited by the finite label space of pretrained object detectors and the information disparity between a caption and an image (an image usually contains much more content
Figure 1: _SelTDA_ expands the self-training paradigm to VQA. By self-generating supervision (orange line) for an image \(I\) without needing extra annotations, we can augment a target dataset with new images and their pseudo-questions and answers \((Q,A)\).
than a short caption can describe).
**Motivating Experiment**: In Fig 2, we show that a large vision-language model (VLM) pretrained on web-scale data contains knowledge that can be drawn out with image-conditional text generation, but which the model cannot verify when posed as a visual question-answering task. We prompt the BLIP [26] VLM (pretrained on 129M image-text pairs) to caption 1000 images from the CC3M [45] dataset starting with the phrase "this is a". We convert each caption into a boolean question where the correct answer is "yes" by inserting the caption into the template is this a <caption>? Next, we ask a BLIP VLM finetuned on the VQAv2 dataset [13] to choose between "yes" and "no" for each caption turned into a question. Surprisingly, the VQA-finetuned BLIP answers "no" to _at least_\(5\%\) of the questions, increasing to \(15\)% as the diversity of captions increases (adjusted by top-\(p\) parameter in nucleus sampling). This suggests the possibility that the VLM has knowledge it cannot exploit when answering questions, but is accessible when directly generating text conditioned on an image.
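A minimal sketch of this probing procedure is given below. It assumes generic `caption(image, prompt, top_p)` and `answer(image, question)` wrappers around the captioning and VQA heads of the VLM (hypothetical names, not the actual BLIP API); the only substantive content is the template conversion and the self-agreement count.

```python
def probe_self_agreement(images, caption, answer, top_p=0.9):
    """Ask the VLM to verify its own captions, following the motivating experiment.

    `caption(image, prompt, top_p)` and `answer(image, question)` are assumed
    wrappers around the captioning and VQA models; they are placeholders here.
    """
    agree = 0
    for image in images:
        # Generate a caption starting with "this is a" via nucleus sampling.
        cap = caption(image, prompt="this is a", top_p=top_p)
        # Convert the caption into a boolean question whose correct answer is "yes".
        body = cap.removeprefix("this is a").strip()
        question = f"is this a {body}?"
        if answer(image, question).strip().lower() == "yes":
            agree += 1
    return agree / len(images)
```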
**Approach**: To exploit unlabeled images for VQA, we propose _SelTDA_, a three-stage framework for **Self**-**T**aught **D**ata **A**ugmentation (Fig 1 bottom panel). We adapt the paradigm of self-training used in object detection [65, 29] and image classification [60, 41] for VQA. In classification / detection, the task of labeling an image is identical to prediction, and the teacher and student optimize identically structured objectives. In VQA self-training, the student and teacher tasks are different. A teacher must pose and answer a question given an image, while the student provides an answer given a question and image. To handle this, we first cast the task of the teacher as a direct image-to-text generation task, and introduce a teacher model by updating the weights of the VLM to learn an _image-conditional_ visual question generation model VQG\({}_{IC}\). Next, we use VQG\({}_{IC}\) as a teacher to pseudolabel unlabeled images by sampling questions and answers from VQG\({}_{IC}\) with stochastic decoding. Finally, we augment the original VQA dataset with the newly labeled image-question-answer pairs, and finetune the VLM for visual question answering on the augmented VQA dataset.
**Benefits**: _SelTDA_ allows us to generate synthetic training data by approximating the distribution \(P(Q,A|I)\) of the target VQA task, where \(Q,A,I\) represent a question, answer, and image respectively. One benefit is that the synthetic data increases the number of training pairs available for finetuning, which yields an increase in raw performance. A second benefit is an increase in the diversity of questions and answers due to the introduction of new images and the stochastic nature of the text decoding, which results in increased robustness and domain generalization. A third benefit is the distillation of knowledge from pretraining and transfer learning into the synthetic training data, which can teach new skills (e.g. domain generalization) or prevent the forgetting of specific skills (e.g. numerical reasoning). Finally, _SelTDA_ is architecture-agnostic given a vision-language model capable of image-conditional text-generation. Our contributions can be summarized as follows:
1. We introduce _SelTDA_, a variant of the self-training paradigm that is designed for VQA and large generative pretrained VLMs.
2. We propose treating visual question generation as a direct image-to-text task by leveraging the autoregressive decoder of a large, pretrained VLM, enabling us to generate questions and answers from an unlabeled image with no auxiliary annotations needed.
3. We show that a large VLM trained with the proposed _SelTDA_ gains increased robustness, domain generalization, numerical reasoning, and performance when finetuning on small-scale VQA datasets.
Figure 2: Motivating experiment. We sample increasingly diverse captions from BLIP [26], convert them to questions, and pose the questions to BLIP after finetuning on VQAv2. As caption diversity increases, self-agreement decreases (right panel). Despite the diversity, many captions remain correct (middle panel), suggesting that the VLM has knowledge that is not exhausted by task-specific finetuning.
## 2 Related Work
**Augmentation for VQA** The method of [53] augments images by using an MLP to classify possible answers in the image and using an LSTM to generate questions matching the answer. While this works with unlabeled images, it is not used for self-training, has a limited label space, and does not leverage large VLMs. KDDAug [6] augments existing question answer pairs by generating pseudoanswers and achieves increases in robustness. ConCat [19] similarly trains more robust models by augmenting the _existing_ QA pairs in a dataset. In contrast to this line of work, we seek to exploit _unlabeled images_ by generating _new_ questions and answers, and using a large VLM to generate augmentation.
**Few/Zero-shot Generalization** Large VLMs have shown impressive generalization to unseen tasks after large-scale pretraining [1], echoing similar achievements in natural language processing [55, 7]. We explore zero-shot generalization to similar tasks in new domains. Domain _adaptation_ in VQA has been explored, first by [58, 5] and most recently by [63]. These fall into the general line of _feature adaptation_ methods for domain adaptation, as they align domain features. Our method is more similar to pseudolabeling based methods for domain adaptation [24, 31] with the difference being that our pseudolabels are natural language rather than distributions. Moreover, we do not focus on _adaptation_, but zero-shot generalization.
**Visual Question Generation** is a well-explored topic with a long history of prior work [23, 28, 64, 38]. In contrast to prior work, our VQG teacher model _does not_ rely on or need paired ground truth annotations for an unlabeled image to generate questions. SimpleAug [21] and GuidedVQG [48] rely on annotations such as bounding boxes to generate new questions, and require pretrained object detectors, which have a limited label space. WeaQ [2] requires captions to already be present, as does [4], which additionally uses a large language model (T5-XXL with 11B parameters) to generate questions. One similarity of our approach to [4] is that we both seek to use knowledge in a large model to generate questions, with the main differences being that we do not require ground-truth captions for unlabeled images, and we use a large vision-language model rather than a large language model. VQAPG [61] is similar to our approach in not requiring any ground-truth annotations, but focuses on creating a joint question-generation and question answering model that is consistent, rather than self-training a model with unlabeled data. The authors of [17] propose a VQG method that does not rely on ground-truth annotations, but their method is LSTM-based, rather than based on self-training with a large vision-language model.
**Self-Training** uses labeled data to train a teacher model. The teacher model provides labels for auxiliary unlabeled data. Finally, a student model is trained on the labeled data augmented with newly-labeled data. Previous work in self-training for computer vision focuses on image-classification [59, 57] or object detection [65, 60, 29, 41]. A significant difference between classical self-training and our setting is that in the more traditional settings, the teacher and student have the same task. In our setting, the task of the teacher (ask a question) is different than the task of the student (answer a question). More similar to us, [42] uses self-training for question-answering. However, the teacher model of [42] has a fundamentally different task, since it is a reading comprehension task, where the ground-truth answer is mentioned within the passage itself. In our task, the teacher model must generate the ground-truth answer from its own internal knowledge and by inspecting an image.
## 3 Method
Our goal is to pseudolabel an unlabeled image \(I\) with a generated question-answer pair \((Q,A)\) using a teacher (initialized from the VLM), and then train a student model (the initial VLM) on the real VQA pairs augmented with the generated VQA pairs. To generate the pseudolabels, we first learn a visual question generation model on the real question-answer pairs and images as the teacher. We denote this model VQG\({}_{\mathrm{IC}}\) to highlight the _image-conditional_ nature of the model, because the model generates both a question and answer conditional on an image alone. This approach is end-to-end, requires _no ground truth annotations, bounding boxes, or handcrafted guidance_, and provides a generative model approximating \(P(Q,A|I)\) that we can sample from. We then feed the teacher model unlabeled images and stochastically decode from the teacher model to generate pseudolabels, which we parse into question answer pairs. After the real samples in the dataset have been augmented with the self-generated samples, VQA training can proceed as normal. Our approach is compatible with any modern encoder-decoder multimodal architecture. This is because our approach relies entirely on direct image-to-text generation, which is possible in modern large vision language models since their autoregressive decoders are designed to produce text conditioned on an image.
### The Teacher: Direct Image-Conditional VQG
Self-training requires a teacher model to produce pseudolabels that the student model then learns to mimic. In order to use unlabeled data for VQA, the teacher model must be able to pose a question and provide an answer given an unlabeled image, which is a different task from VQA. Given an image \(I\), a question \(Q\) and answer \(A\), the VQA student must approximate \(P(A\mid Q,I)\), while the teacher model must approximate \(P(Q,A\mid I)\). Previous approaches to visual question generation (VQG) cannot work with unlabeled data because they approximate \(P(Q\mid I,A)\), that is, they generate a question conditional on the image and a potential answer. In contrast to these previous, answer-conditional VQG
approaches, we develop an _image-conditional_ approach (VQG\({}_{IC}\)) that we use as a teacher model. Our approach also contrasts with self-training in image classification or object detection, which benefit from having the teacher and student _both_ approximating and predicting identically structured distributions \(P(Y|I)\), where \(Y\) is often a distribution over a (finite) label space.
To create the VQG\({}_{IC}\) teacher that approximates \(P(Q,A|I)\), we treat the problem of learning such a model as a text-generation problem, and wish to train the autoregressive decoder of the vision-language model to approximate \(P(T|I)\), where \(T=(Q,A)\). Let \(\mathcal{D}_{QA}\) be a question-answer dataset we wish to create a teacher from. For a sample \((Q,A,I)\in\mathcal{D}_{QA}\), we transform it into a target sequence of tokens \(y_{1:N}=(y_{1},y_{2},\dots y_{N})\) by entering \((Q,A)\) into a structured template of the form "**Question**: <question>? **Answer**: <answer>." where <question> and <answer> are replaced by the content of \(Q\) and \(A\) respectively. Once \(y_{1:N}=(y_{1},y_{2},\dots y_{N})\) is obtained, we train the model by optimizing
\[\mathcal{L}_{\mathrm{VQG}}=-\sum_{n=1}^{N}\log P_{\theta}\left(y_{n}\mid y_{< n},x\right) \tag{1}\]
over all question-image-answer pairs in \(\mathcal{D}_{QA}\), where \(x\) denotes the latent encoded features in the standard encoder-decoder architecture and \(\theta\) represents the VLM parameters. The VQG\({}_{\mathrm{IC}}\) thus learns to maximize the conditional likelihood of a question-answer _pair_ represented as a unified string, given an image. Recall that VQG\({}_{IC}\) is initialized from the parameters of an autoregressive VLM. The VLM is a quality approximator of \(P(T|I)\), having been exposed to a large and diverse set of images and paired text. The VQG\({}_{IC}\) teacher can tap into this reservoir of knowledge, because a pseudo question-answer pair \((Q^{\prime},A^{\prime})\) is generated jointly as a text \(T^{\prime}\), allowing us to sample from \(P(T|I)\).
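A sketch of how a \((Q,A,I)\) triple is turned into a single target string and scored with the autoregressive loss of Eq. (1) is shown below; the `tokenizer`, the `vlm` callable, and the exact prompt format are assumptions made for illustration rather than the released implementation.

```python
import torch
import torch.nn.functional as F

TEMPLATE = "Question: {q}? Answer: {a}."

def vqg_loss(vlm, tokenizer, image, question, answer):
    """Language-modeling loss of Eq. (1) on a templated (Q, A) pair.

    `vlm(image, input_ids)` is assumed to return per-token logits over the
    vocabulary, conditioned on the encoded image (teacher forcing).
    """
    target = TEMPLATE.format(q=question.rstrip("?"), a=answer)
    ids = tokenizer(target, return_tensors="pt").input_ids      # shape (1, N)
    logits = vlm(image, input_ids=ids[:, :-1])                  # predict y_n from y_<n and image
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           ids[:, 1:].reshape(-1))
```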
### Training the Student with Unlabeled Data
Once the VQG\({}_{\mathrm{IC}}\) teacher model has been obtained, self-training with unlabeled data can proceed. To produce a
Figure 4: Example questions and answers generated by the teacher on unlabeled images. The questions include unusual pairings (cat wearing necktie) or require broad knowledge (identifying a baby shower or London landmarks) and inferences about scenes (the baby is learning).
Figure 3: Overview of the proposed framework. We first create the teacher VQG\({}_{IC}\) (§3.1), use VQG\({}_{IC}\) to pseudolabel unlabeled images (§3.2), and finetune student on the original training pairs augmented with the pseudolabeled images. The pseudolabels are natural language.
pseudolabel \((Q^{\prime},A^{\prime})\) for an unlabeled image \(I_{u}\), we first obtain \(\mathbf{L}_{1:N}=\text{VQG}_{\text{IC}}(I_{u})\), where \(\mathbf{L}_{1:N}\) are the logits of the decoder. The logits \(\mathbf{L}_{1:N}\) define a distribution \(P\left(L_{N}\mid L_{1:N-1}\right)\) over the tokens of the model's natural language vocabulary. We then apply nucleus sampling [15] to stochastically decode a text \(T^{\prime}\) from \(P\left(L_{N}\mid L_{1:N-1}\right)\). The structured format of the generation template can then be easily parsed by a regular expression to recover a pseudo-question-answer pair \((Q^{\prime},A^{\prime})\) from the decoded text \(T^{\prime}\). This pair \((Q^{\prime},A^{\prime})=T^{\prime}\) is a sample from \(P(T|I)\), and reflects textual knowledge about the content of an image known to the VLM.
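The parsing step can be as simple as a regular expression over the decoded string; the sketch below assumes a `generate` wrapper that performs nucleus sampling and returns raw text (the function name and decoding parameters are ours, not the released code).

```python
import re

# Matches the structured template used by the teacher, e.g.
# "Question: what sport is being played? Answer: tennis."
QA_PATTERN = re.compile(r"question:\s*(.+?)\?\s*answer:\s*(.+?)\.?\s*$", re.IGNORECASE)

def pseudolabel(image, generate, top_p=0.92):
    """Return a (question, answer) pair parsed from one stochastic decoding, or None."""
    text = generate(image, top_p=top_p)      # assumed nucleus-sampling wrapper
    match = QA_PATTERN.search(text.strip())
    if match is None:
        return None                          # malformed generations are simply discarded
    return match.group(1).strip() + "?", match.group(2).strip()
```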
We then proceed to pseudolabel the desired number of images and obtain any number of triplets of the form \((Q^{\prime},A^{\prime},I_{u})\), representing self-generated training data \(\mathcal{D}^{\prime}_{QA}\)in the style of a target dataset \(\mathcal{D}_{QA}\). We then augment the real dataset \(\mathcal{D}_{QA}\) with the self-generated question-answer pairs on unlabeled images \(\mathcal{D}^{\prime}_{QA}\) to create a self-augmented training dataset \(\mathcal{D}_{\text{AugQA}}=\mathcal{D}^{\prime}_{QA}\cup\mathcal{D}_{QA}\). The teacher model is no longer needed, and the student can be initialized from the checkpoint obtained after large-scale pretraining that the teacher model was initialized from. At this point, VQA training can proceed as normal. In our setting, we use the training procedure of BLIP [26] in which VQA is treated as an open-ended generation task, and the VQA objective can be expressed as the standard language modeling loss
\[\mathcal{L}_{\text{VQA}}=-\sum_{n=1}^{N}\log P_{\theta}\left(y_{n}\mid y_{<n}, x_{n}\right) \tag{2}\]
where \(x_{n}\) is the \(n\)-th element of the multimodal sequence embeddings \(\mathbf{X}_{1:N}\) produced by \(\mathrm{VLM}(Q,I;\theta)\), \(Q,I\) are the question and image, \(y_{1:N}\) is the sequence of answer tokens, and \(\theta\) represents the VLM parameters, which we initialize from the _pretrained_ weights rather than the teacher. Why can high-quality pseudolabels \((Q^{\prime},A^{\prime})\) be generated even when \(\mathcal{D}_{QA}\) is small, and few pairs are available for adapting the teacher VQG\({}_{IC}\)? Knowledge about the _content_ of the image in a textual form \(P(T|I)\) is already well-learned by the VLM from which we initialize VQG\({}_{IC}\). Thus, \(\mathcal{D}_{QA}\) only needs sufficient pairs to teach VQG\({}_{IC}\) how to construct annotations matching the style of \(\mathcal{D}_{QA}\).
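To make the assembly of \(\mathcal{D}_{\text{AugQA}}\) concrete, a minimal PyTorch-style sketch is given below; the `TripletDataset` wrapper and the list-of-triplets format are illustrative assumptions, not the released code.

```python
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class TripletDataset(Dataset):
    """Thin wrapper over a list of (image_path, question, answer) triplets."""
    def __init__(self, triplets):
        self.triplets = triplets
    def __len__(self):
        return len(self.triplets)
    def __getitem__(self, idx):
        return self.triplets[idx]

def build_augmented_loader(d_qa, d_qa_synth, batch_size=64):
    # D_AugQA = D_QA (real pairs) union D'_QA (pseudolabeled pairs on unlabeled images).
    augmented = ConcatDataset([TripletDataset(d_qa), TripletDataset(d_qa_synth)])
    return DataLoader(augmented, batch_size=batch_size, shuffle=True)
```

The student is then finetuned on the resulting loader with the usual open-ended VQA objective of Eq. (2).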
## 4 Experiments
**Experimental Setup** We implement our framework in PyTorch [39] and use the same hyperparameter settings for all experiments. Our settings are taken from [26]. We train each VQA model for 10 epochs, using the AdamW [33] optimizer with a weight decay of 0.05 and a linear LR decay to 0 from an initial LR 2e-5. Each VQG model is trained for 10 epochs with the same weight decay and an initial LR of 2e-5. For VQA, we use a global batch size of 64 on 4 GPUs, with a per device batch size of 16. For VQG, we use a global batch size of 128, with a per device batch size of 32. All models are initialized from pretrained BLIP [26] checkpoints. For VQA, we use an image size of \(480\times 480\) and an image size of \(384\times 384\) for VQG. For all datasets, we use the official training, validation, and test splits.
**Baseline** As a strong baseline model, we use the ViT-B/16 version of the BLIP [26] model pretrained on 129M image-text pairs. BLIP [26] has an autoregressive decoder and is trained for text-generation, making it easy to adapt to text-generation tasks. When decoding, we use nucleus sampling with a top-\(p\) of 0.92. Additional experiments and visualizations can be found in the supplemental material.
### Self-Training: A-OKVQA & ArtVQA
We evaluate _SeITDA_ in two domains: outside knowledge VQA on natural images with A-OKVQA [43] and outside knowledge VQA on fine-art images with AQUA [12]. We use the COCO 2017 unlabeled set [30] as a source of addi
\begin{table}
\begin{tabular}{l l l l} \hline \hline & & \multicolumn{3}{c}{A-OKVQA} \\ \cline{3-4} & Model & Validation & Test \\ \hline (a) & ViLBERT [34] & 49.1 & 41.5 \\ (b) & LXMERT [46] & 51.4 & 41.6 \\ (c) & KRISP [36] & 51.9 & 42.2 \\ (d) & GPV-2 [18] & 60.3 & 53.7 \\ (e) & BLIP [26] & 57.1 & \\ (f) & BLIP\({}_{\text{VQA}\text{v2}}\)[26] & 67.8 & **59.5** \\ \hline (g) & BLIP + _SelTDA_ & 62.1 & 54.5 \\ & \% gain w.r.t baseline & +5.0 & \\ & \% gain w.r.t best prior work & +1.8 & +0.8 \\ (h) & BLIP\({}_{\text{VQA}\text{v2}}\) + _SelTDA_ & **68.9** & **59.5** \\ & \% gain w.r.t baseline & +1.1 & +0.0 \\ & \% gain w.r.t best prior work & +8.6 & +5.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: _SelTDA_ improves performance on knowledge-based VQA, even on a strong baseline pretrained on 129M pairs.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & & \multicolumn{3}{c}{ArtVQA Accuracy} \\ \cline{3-4} & Model & Overall & Grounded \\ \hline (a) & BAN [22] & 22.4 & - \\ (b) & BLIP [26] & 21.36 & 81.71 \\ (c) & VIKING [12] & 55.5 & 78.74 \\ (d) & VIKING\({}_{\text{VLM}}\) & 55.9 & 81.9 \\ \hline (e) & BLIP + _SelTDA_ & 21.68 & **83.86** \\ & \% gain w.r.t baseline & +0.32 & +2.15 \\ (f) & VIKING\({}_{\text{VLM}}\) + _SelTDA_ & **56.86** & **83.86** \\ & \% gain w.r.t baseline & +0.92 & +1.96 \\ \hline \hline \end{tabular}
\end{table}
Table 2: _SelTDA_ improves VQA on fine art images [12] for VIKING and BLIP models. Grounded denotes visually grounded questions.
tional images for A-OKVQA, and SemArt [11] as a source of fine art images for ArtVQA. On A-OKVQA, we perform model selection over students trained with varying amounts of _SelTDA_ with the training set, and on ArtVQA, we use the validation set. On A-OKVQA (Table 1), we show that self-taught data augmentation improves overall performance, especially in the setting where no extra data (VQAv2) is available. BLIP with _SelTDA_ achieves SOTA performance on A-OKVQA without transfer learning (row g in Table 1), even relative to competitors using transfer learning. This performance improvement holds even when \(447k\) _real_ pairs from VQAv2 are used for transfer learning, suggesting that self-taught data augmentation offers real improvements over manual annotations. On fine art VQA (Table 2), we show that self-taught data augmentation achieves state-of-the-art results and improves overall performance, with a large increase for visually grounded questions.
### Ablations & Analysis of Pseudolabels
We manually evaluate 100 randomly sampled questions generated by the teacher model on A-OKVQA (Table 3). The generated questions and answers are noisier than the real questions and answers, but their quality is not substantially below the level of human agreement on A-OKVQA. Questions which require visual reasoning or external knowledge are harder to generate correctly compared to those that require simpler visual identification (e.g. "what is this object?"). Next, we show using t-SNE [47] that the teacher model learns to copy the "style" of questions in a particular dataset (Fig 5). Synthetic questions generated by a teacher finetuned for a specific dataset (ArtVQA) are more similar to the style of the questions found in the target dataset compared to real questions from a different dataset (VQAv2), while being more diverse. We show that the performance
Figure 5: A T-SNE embedding shows that questions generated by a teacher finetuned on ArtVQA (orange) differ from real VQAv2 questions (blue) and are more similar to the real ArtVQA questions (green), yet more diverse, covering a larger area. We use SimCSE [10] to obtain a dense vector representation of each sentence. All the sets of questions are embedded together with T-SNE.
\begin{table}
\begin{tabular}{l c c c|c} \hline \hline Question Type & Well-Posed Question & Answers Correct & Answerable & \% of Total (95\% CI) \\ \hline External Knowledge & 73\% & 62\% & 70\% & 29.6\% - 50.00\% \\ Visual Identification & 94\% & 88\% & 94\% & 11.18\% - 27.65 \% \\ Visual Reasoning & 83\% & 70\% & 80\% & 32.54\% - 53.17\% \\ \hline Overall (95\% CI) & 71.16\% - 87.96\% & 59.77\% - 78.98\% & 68.83\% - 86.22\% & \\ \hline \hline \end{tabular}
\end{table}
Table 3: We manually inspect 100 questions and answers generated by the teacher model finetuned on A-OKVQA. We show the 95% confidence interval obtained by a proportion test. Annotator agreement on A-OKVQA is about 79.5% on the validation set.
Figure 6: Sunburst chart of questions generated by a teacher model finetuned on A-OKVQA.
gains of _SelTDA_ are due to novel question-answer pairs (first half of Table 4) that add information not present in the ground-truth QA pairs, not only due to the additional images. However, the student model benefits from _both_ the novel question-answer pairs and unlabeled images (second half of Table 4).
**Optimal Amount of Augmentation** We explore how the amount of augmentation affects performance. The highest performance on the A-OKVQA validation and test sets is reached when the number of synthetic pairs is double that of the real pairs (Table 4). When transfer learning from VQAv2, the ratio is different, and peak performance is reached when the number of synthetic pairs is \(50\%\) of the number of real pairs (Tables 4 and 5). Performance and robustness improvements (Table 5) saturate as increasing amounts of synthetic pairs are added, which may be the result of task-irrelevant information seeping into the dataset due to stochastic sampling.
### Robustness
We investigate whether the self-taught data augmentation improves robustness of VQA models. We consider three known weaknesses. The first is adversarially searched questions, collected in the AdVQA [27] dataset through human-in-the-loop attacks against state-of-the-art VQA models. In Table 5, we show that models trained with self-taught data augmentation perform significantly better (\(20\%\) relative improvement and \(6\%\) absolute improvement) on AdVQA. The second form of robustness we consider is resistance to multimodal shortcut learning, which the VQA-CE (Counterexamples) [8] test set measures. The test set is constructed so that models which have learned to answer questions using shortcuts based on correlations in the VQAv2 training set (ex: tennis racket detected + question about sport \(\rightarrow\) always answer tennis) will display reduced performance on the VQA-CE test set. We construct our A-OKVQA models by transfer learning from the VQAv2 training set, so VQA-CE can be used to test multimodal shortcut learning in our models. In Table 5, we show that models trained with self-taught data augmentation are more resistant to shortcut learning (\(1.9\%\) absolute improvement on VQA-CE) compared to the baseline model trained without self-taught data augmentation. Finally, we consider robustness to rephrasings. VQA
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline \multicolumn{2}{c}{Images} & \multicolumn{3}{c}{Questions} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-7} Labeled & Unlabeled & Real & Synthetic & Total & Multiplier & Accuracy & \% Gain & Questions/Image \\ \hline
17,000 & 0 & 17,000 & 0 & 17,000 & 1x (baseline) & 57.11 & & N/A \\
17,000 & 0 & 17,000 & 17,000 & 34,000 & 2x & 57.85 & +0.74 & 1 / 1 \\
17,000 & 0 & 17,000 & 34,000 & 51,000 & 3x & **60.01** & **+2.90** & 2 / 1 \\
17,000 & 0 & 17,000 & 51,000 & 68,000 & 4x & 59.73 & +2.62 & 3 / 1 \\ \hline
17,000 & 0 & 17,000 & 0 & 17k & 1x (baseline) & 57.11 & & N/A \\
17,000 & 8,500 & 17,000 & 17,000 & 34,000 & 2x & 60.69 & +3.57 & 2 / 1 \\
17,000 & 17,000 & 17,000 & 34,000 & 51,000 & 3x & **62.09** & **+4.98** & 2 / 1 \\
17,000 & 25,500 & 17,000 & 51,000 & 68,000 & 4x & 61.31 & +4.20 & 2 / 1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: _SelTDA_ can improve performance even without additional unlabeled images, by generating more QA pairs for already labeled images. However, using previously unlabeled and unseen images results in further improvements. A-OKVQA is used.
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline \multicolumn{2}{c}{} & \multicolumn{1}{c}{\# of Real + Synthetic QA Pairs} & \multicolumn{5}{c}{Robustness Test Sets} & \multicolumn{1}{c}{} \\ \cline{2-9} & Real & Synthetic & Multiplier & AdVQA & VQA-CE & VQA-Rephrasings & Avg. \% Increase & Robustness Total \\ \hline (a) & 17,000 & 0 & \(\times 1\) & 31.06 & 51.43 & 65.88 & 0 & 148.37 \\ (b) & 17,000 & 2,000 & \(\times 1.1\) & 37.09 & 52.96 & 67.94 & +3.21 & 157.99 \\ (c) & 17,000 & 4,500 & \(\times 1.3\) & 36.99 & 53.15 & **67.98** & +3.25 & 158.12 \\ (d) & 17,000 & 8,000 & \(\times 1.5\) & 37.34 & **53.33** & 67.57 & **+3.29** & **158.24** \\ (e) & 17,000 & 12,000 & \(\times 1.7\) & **37.43** & 52.62 & 67.35 & +3.01 & 157.4 \\ (f) & 17,000 & 17,000 & \(\times 2\) & 36.95 & 52.05 & 66.95 & +2.53 & 155.95 \\ (g) & 17,000 & 34,000 & \(\times 3\) & 36.89 & 51.00 & 65.64 & +1.72 & 153.53 \\ (h) & 17,000 & 51,000 & \(\times 4\) & 36.06 & 50.25 & 64.78 & +0.91 & 151.09 \\ \hline \multicolumn{2}{c}{Max \% increase on each dataset} & +6.03 & +1.9 & +2.1 & & +9.87 \\ \hline \hline \end{tabular}
\end{table}
Table 5: _SelTDA_ improves robustness of VQA models on AdVQA (adversarially searched questions), VQA-CE (multimodal shortcut learning) and VQA-Rephrasings test sets. The baseline (a) is trained on VQAv2 after pretraining, then finetuned on A-OKVQA.
models have been shown to be inconsistent when evaluated on rephrasings [44]. The VQA-Rephrasings test set consists of 3 human-provided rephrasings of the questions in the VQAv2 test set, intended to test the robustness of the model to rephrasings. On VQA-Rephrasings, self-taught data augmentation induces a \(2.1\%\) performance improvement relative to the baseline model, though both the baseline model and augmented models were initialized from the same weights learned on the VQAv2 training set prior to finetuning on A-OKVQA.
### Domain Generalization
We hypothesize that self-taught data augmentation may improve domain generalization, because the student model has been exposed to a greater diversity of questions and answers. To test this, we compare the generalization of the baseline model and models trained with self-taught data augmentation on unseen test sets from three different domains. Concretely, we treat the natural-image based A-OKVQA task as the source task, and evaluate on VQA datasets from three target domains: medical, fine art, and remote sensing. For medical VQA, we use the PathVQA [14] dataset containing question and answers on pathology images. For fine art, we used the previously described AQUA [12] dataset for visual question answering on art images. For remote sensing, we use the RSVQA dataset [32], containing question and answers on satellite images. We display the results in Table 6. Across all three domains, self-taught data augmentation improves domain generalization over the baseline model. The improvement is greatest on fine art images, as the fine art domain is closest to the natural image domain with respect to the images, questions, and answers.
### Numerical Reasoning
Numerical reasoning is required to answer questions such as "how many sheep are looking at the camera". Naive transfer learning from VQAv2 to A-OKVQA results in catastrophic forgetting of numerical reasoning, and naive finetuning on A-OKVQA results in models with poor numerical reasoning. In Table 7, we show that _SelTDA_ significantly aids numerical reasoning when finetuning on a small-scale VQA dataset such as A-OKVQA. We measure numerical reasoning using questions labeled as requiring numerical answers on VQAv2 and the VQA-Rephrasings datasets. When transfer learning from VQAv2 (first half of Table 7), self-taught data augmentation results in an absolute increase of \(29.81\)% and \(24.71\)% on numerical questions on VQAv2 and VQA-Rephrasings. When finetuning directly on A-OKVQA (2nd half of Table 7), self-taught data augmentation results in an absolute increase of \(3.63\)% and \(10.57\)%. These results suggest that self-taught data augmentation can prevent catastrophic forgetting of numerical reasoning when transfer learning, and improve numerical reasoning significantly, even when the dataset used to train the teacher model has few numerical reasoning questions. One reason for this is that the word "how" is a high-probability word to start a question with, and is naturally followed by "many" (Fig 6), resulting in numerical questions being generated.
## 5 Conclusion & Future Work
We present _SelTDA_, a framework for self-improving large VLMs on small-scale visual question answering tasks with unlabeled data. The limitations of _SelTDA_ suggest several opportunities for further work. First, the pseudo-QA pairs can be noisy. Combining _SelTDA_ with methods for fact-checking based on external knowledge [40], logically consistent self-reasoning [16], or chain-of-thought prompting [56] to rationalize answers may result in higher-quality pairs for self-training. Second, learning the teacher model may fail for specialized domains (e.g. medical), because the vocabulary is too specialized. Third, biases in the VLM or pretraining data may be amplified by self-training, and addressing these biases may reduce multimodal shortcut learning. Finally, self-training is yet to be explored with recently developed billion-parameter VLMs [9, 25].
\begin{table}
\begin{tabular}{l l l l} \hline \hline & \multicolumn{3}{c}{Target (0-shot)} \\ \cline{2-4} Model & ArtVQA & PathVQA & RSVQA \\ \hline Baseline (BLIP) & 31.65 & 25.09 & 37.78 \\ BLIP + _SelTDA_ & 38.03 & 26.76 & 38.99 \\ \hline \% gain w.r.t baseline & +6.38 & +1.67 & +1.1 \\ \hline \hline \end{tabular}
\end{table}
Table 6: _SelTDA_ improves domain generalization from natural images (A-OKVQA) to art QA, medical QA, and remote sensing QA.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & \multicolumn{2}{c}{\# Training Pairs} & \multicolumn{3}{c}{Numerical Reasoning} \\ \cline{2-5} Initialization & Real & Synth & VQAv2 & VQA-Rephrasings \\ \hline BLIP\({}_{VQAv2}\) & 17000 & 0 & 13.49 & 13.06 \\ BLIP\({}_{VQAv2}\) & 17000 & 2000 & 38.73 & 33.74 \\ BLIP\({}_{VQAv2}\) & 17000 & 4500 & 40.4 & 35.91 \\ BLIP\({}_{VQAv2}\) & 17000 & 8000 & 42.9 & 36.5 \\ BLIP\({}_{VQAv2}\) & 17000 & 12000 & **43.3** & **37.77** \\ \hline \multicolumn{5}{c}{max \% gain w.r t baseline} & +29.81 & +24.71 \\ \hline BLIP & 17000 & 0 & 1.42 & 1.29 \\ BLIP & 17000 & 17000 & 4.53 & 11.44 \\ BLIP & 17000 & 34000 & **5.05** & 11.77 \\ BLIP & 17000 & 51000 & 4.26 & **11.86** \\ \hline \multicolumn{5}{c}{max \% gain w.r.t baseline} & +3.63 & +10.57 \\ \hline \hline \end{tabular}
\end{table}
Table 7: _SelTDA_ improves numerical reasoning when finetuning on a small-scale dataset (A-OKVQA). BLIP\({}_{VQAv2}\) indicates transfer learning from VQAv2, and BLIP indicates direct finetuning.
2304.13380 | Coordinate Descent Full Configuration Interaction for Excited States | An efficient excited state method, named xCDFCI, in the configuration
interaction framework, is proposed. xCDFCI extends the unconstrained nonconvex
optimization problem in coordinate descent full configuration
interaction~(CDFCI) to a multicolumn version, for low-lying excited states
computation. The optimization problem is addressed via a tailored coordinate
descent method. In each iteration, a determinant is selected based on an
approximated gradient, and coefficients of all states associated with the
selected determinant are updated. A deterministic compression is applied to
limit memory usage. We test xCDFCI applied to H2O and N2 molecules under the
cc-pVDZ basis set. For both systems, five low-lying excited states in the same
symmetry sector are calculated together with the ground state. xCDFCI also
produces accurate binding curves of carbon dimer in the cc-pVDZ basis with
chemical accuracy, where the ground state and four excited states in the same
symmetry sector are benchmarked. | Zhe Wang, Zhiyuan Zhang, Jianfeng Lu, Yingzhou Li | 2023-04-26T08:46:17Z | http://arxiv.org/abs/2304.13380v2 | # Coordinate Descent Full Configuration Interaction for Excited States
###### Abstract
An efficient excited state method, named xCDFCI, in the configuration interaction framework, is proposed. xCDFCI extends the unconstrained nonconvex optimization problem in CDFCI to a multicolumn version, for low-lying excited states computation. The optimization problem is addressed via a tailored coordinate descent method. In each iteration, a determinant is selected based on an approximated gradient, and coefficients of all states associated with the selected determinant are updated. A deterministic compression is applied to limit memory usage. We test xCDFCI applied to H\({}_{2}\)O and N\({}_{2}\) molecules under the cc-pVDZ basis set. For both systems, five low-lying excited states in the same symmetry sector are calculated together with the ground state. xCDFCI also produces accurate binding curves of carbon dimer in the cc-pVDZ basis with 10\({}^{-2}\) mHa accuracy, where the ground state and four excited states in the same symmetry sector are benchmarked.
## 1 Introduction
Excited state computations are of great importance in understanding and predicting many phenomena in photochemistry, spectroscopy, and other fields. Compared to the ground state computation, excited state computations are more challenging for wavefunction-ansatz-based methods, including Hartree-Fock methods,[1, 2] configuration interaction methods,[3] and coupled cluster methods,[4, 5, 6] etc. The excited states in general have multi-reference character, and the wavefunction ansatze in these methods limit the representation of dynamic correlations. Similarly, density functional theory (DFT) methods[7, 8, 9, 10, 11] and time-dependent DFT methods[12, 13] find it more challenging to calculate the excited states than the ground state. Under the full configuration interaction (FCI) framework, calculating excited states is also considered more challenging, but the difficulty is not as severe as in the aforementioned methods. In general, there are two types of challenges for excited state computations under FCI. First, due to the natural multi-reference features of excited states, the discretization basis set needs to be larger than that in the ground state computation, and the corresponding FCI matrix size would be larger. Second, the energy gaps between excited states are in general smaller than that between the ground state and the first excited state, which would lead to more iterations in iterative eigensolvers before convergence. In this paper, we propose xCDFCI for excited state computation under the FCI framework. The method is closely related to coordinate descent FCI (CDFCI), an efficient FCI solver recently developed by three of the authors.[14]
Many modern FCI solvers have been devel
oped for ground state computation in the past two decades, together with their extensions to excited state computations. Density matrix renormalization group (DMRG) [15, 16, 17, 18] uses matrix product state as the underlying ansatz for the ground state wavefunction and solves the FCI problem via the iterative sweeping procedure. Various strategies [19, 20] are proposed to address excited states one by one. FCI quantum Monte Carlo (FCIQMC) and its variants [21, 22, 23] use many walkers to represent quantized configuration coefficients and randomly move them along the sparse graph generated by the Hamiltonian matrix. In its extension to excited state computations, [24] several groups of walkers are used to represent excited states, and an orthogonal projection is introduced between iterations to prevent groups from collapsing into the ground state. Selected-CI is a group of FCI solvers based on sequential configuration selections, including adaptive configuration interaction (ACI), [25] heat-bath configuration interaction (HCI), [26, 27] and adaptive sampling configuration interaction (ASCI). [28] For these methods, various perturbation-inspired expressions are used to sequentially select important configurations in the full CI space. Traditional iterative eigensolver, like Davidson, is applied to the reduced Hamiltonian matrix, i.e., Hamiltonian matrix restricted to selected configurations. Extending selected-CI methods to excited state computations is straightforward. After a small modification of the selection criteria, [28, 29, 30] the excited states are computed by solving the low-lying eigenstates of the reduced Hamiltonian matrix. A post-perturbation process is almost always applied to improve the accuracy of the variational energies. [26, 27, 31] FCI fast random iteration (FCI-FRI) [32] adopts a bias-free and small-variance sampling procedure to compress the wavefunction under the power method framework. In the excited state version of FCI-FRI, [33] the iterative method is a multicolumn version power method, where the normalization is carried out every iteration and the orthogonalization is carried out every few iterations. Coordinate descent FCI [14] reformulates the eigenvalue problem as an unconstrained optimization problem and applies a coordinate-descent method with a tailored compression strategy to solve it.
Moreover, FCI problems have also attracted attention from the numerical linear algebra community in recent years. Many algorithms and analyses [34, 35, 36, 37, 38, 39, 40] have influenced the developments above. Other works attempt to incorporate machine learning and reinforcement learning techniques to accelerate the FCI calculation. [41, 42]
In this paper, we extend CDFCI to excited state computations and name the method xCDFCI. The unconstrained optimization problem in CDFCI is extended to a multicolumn version to accommodate low-lying excited states. The coordinate-descent method used to optimize the objective function is replaced by a row-block descent scheme in xCDFCI, and the compression is still carried out in an entrywise way. The multi-column vector in xCDFCI does not converge to the ground state and low-lying excited states directly. Instead, it converges to a subspace formed by the ground and low-lying excited states. The eigenvectors can be recovered by a post-processing procedure. Most importantly, all desired features of the original CDFCI are preserved. Symmetries, including time-reversal symmetry and angular momentum symmetry, are implemented to reduce both computational and memory costs when the computation is restricted to a symmetry sector. Finally, numerical results on H\({}_{2}\)O and N\({}_{2}\) are included to demonstrate the efficiency of xCDFCI for excited state computations. We also report the binding curve of C\({}_{2}\) obtained using xCDFCI for both singlet and triplet states.
The rest of the paper is organized as follows. Section 2 introduces xCDFCI for excited state computations and other related discussions. Section 3 provides numerical examples of xCDFCI. The paper is concluded in Section 4.
## 2 xCDFCI
We introduce xCDFCI in this section and discuss some implementation details. Notations are kept the same as that in Wang et al. [14] as much as possible. In the following, we first propose the unconstrained optimization prob
lem for excited state computations, then explain the xCDFCI algorithm step-by-step, and finally discuss its implementation details: initialization, stopping criteria, and symmetry.
### Optimization formula for excited state computations
Given a spin-orbital set \(\{\chi_{p}\}\), we denote the creation and annihilation operator as \(\hat{a}_{p}^{\dagger}\) and \(\hat{a}_{q}\) respectively. The Hamiltonian operator, under the second quantization, is given by
\[\widehat{H}=\sum_{p,q}t_{pq}\hat{a}_{p}^{\dagger}\hat{a}_{q}+\sum_{p,r,q,s}v_{ prqs}\hat{a}_{p}^{\dagger}\hat{a}_{r}^{\dagger}\hat{a}_{s}\hat{a}_{q},\]
where \(t_{pq}\) and \(v_{prqs}\) are one-body and two-body integrals respectively. The \(K\) low-lying states of the time-independent Schrodinger equation can be obtained by solving,
\[\widehat{H}\ket{\Phi_{k}}=E_{k}\ket{\Phi_{k}} \tag{1}\]
for \(k=0,1,\ldots,K-1\), where \(E_{0}\) is the smallest eigenvalue associated with the ground state \(\ket{\Phi_{0}}\), \(E_{1}\) is the second smallest eigenvalue associated with the first excited state \(\ket{\Phi_{1}}\), and so on, \(\{\ket{\Phi_{k}}\}_{k=0}^{K-1}\) are orthogonal to each other.1 Throughout this paper, we assume that all \(E_{0},E_{1},\ldots,E_{K-1}\) are negative. This assumption can be made without loss of generality, as otherwise, we can shift the Hamiltonian by a constant. We further denote the Slater determinants as \(\{\ket{D_{i}}\}_{i=1}^{N}\) for \(N=N_{\text{FCI}}\) being the size of the entire electron-preserving configuration space. Using \(\{\ket{D_{i}}\}_{i=1}^{N}\) as the basis, the ground state and excited states are discretized as,
Footnote 1: With some abuse of terminology, we will also refer the ground state as the 0-th excited state when it is convenient to do so.
\[\ket{\Phi_{k}}=\sum_{i}V_{i,k}\ket{D_{i}}, \tag{2}\]
and coefficients \(V_{i,k}\) forms a matrix \(V\) of size \(N\times K\) satisfying the orthonormality constraint, \(V^{\top}V=I\) for \(I\) being an identity matrix of size \(K\times K\). The Hamiltonian operator is discretized as the Hamiltonian matrix \(H\) with its \((i,j)\)-th entry being \(H_{ij}=\left\langle D_{i}\middle|\widehat{H}\middle|D_{j}\right\rangle\). After the discretization, solving (1) is to solve the low-lying \(K\) eigenpairs of \(H\), where the major computational difficulty comes from the factorial scaling of \(N_{\text{FCI}}\) with respect to the number of spin-orbitals and electrons.
Now we extend the unconstrained optimization problem in CDFCI [14] to excited states. The optimization problem is extended as,
\[\min_{C\in\mathbb{R}^{N\times K}}f(C)=\left\|H+CC^{\top}\right\|_{\text{F}}^ {2}, \tag{3}\]
where \(N\) is the problem size and \(K\) is the number of desired eigenstates. When \(K=1\), (3) is the same as the optimization problem in Wang et al. [14] The gradient of \(f(C)\) admits,
\[G=\nabla f=4HC+4C\big{(}C^{\top}C\big{)}. \tag{4}\]
As has been analyzed in Gao et al. [38], the unconstrained optimization problem (3) has a large number of stationary points, but no spurious local minima. All local minima are global minima of the form,
\[V\sqrt{-\Lambda}Q, \tag{5}\]
where \(\Lambda\in\mathbb{R}^{K\times K}\) is a diagonal matrix with its diagonal entries being \(E_{0},E_{1},\ldots,E_{K-1}\), \(V\in\mathbb{R}^{N\times K}\) is the corresponding eigenvector matrix as defined in (2), and \(Q\in\mathbb{R}^{K\times K}\) is an arbitrary orthogonal matrix such that \(Q^{\top}Q=QQ^{\top}=I\).
Generally, gradient-based first-order methods, including the coordinate descent method, avoid saddle points and converge to a global minimum almost surely [43]. We remark that the minimizers of (3) only give the eigenspace, due to the arbitrary \(Q\) in (5). Individual eigenvectors, when needed, are retrieved by a post-processing step. The post-processing part is computationally cheap and costs no additional memory.
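For a small dense Hamiltonian, the objective (3), its gradient (4), and the structure of the minimizers (5) can be checked directly; the following numpy toy verification (not the production code) confirms that the gradient vanishes at \(C=V\sqrt{-\Lambda}\,Q\) with \(Q=I\).

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 3
H = rng.standard_normal((N, N))
H = 0.5 * (H + H.T) - 5.0 * np.eye(N)   # shift so the lowest K eigenvalues are negative

def objective(C):
    return np.linalg.norm(H + C @ C.T, "fro") ** 2

def gradient(C):
    return 4.0 * H @ C + 4.0 * C @ (C.T @ C)

# Build a global minimizer C* = V sqrt(-Lambda) Q from the K lowest eigenpairs (Q = I here).
w, V = np.linalg.eigh(H)
C_star = V[:, :K] @ np.diag(np.sqrt(-w[:K]))
print(np.linalg.norm(gradient(C_star)))   # ~0: C* is a stationary point
print(objective(C_star) <= objective(rng.standard_normal((N, K))))   # True
```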
### Algorithm
The algorithm we propose for excited state computations is a coordinate descent method applying to (3), where some specifics are designed to fully incorporate the properties of FCI problems. We introduce our algorithm step by
step. Throughout the algorithm, two matrices \(C\) and \(B\) are kept: \(C\) is the iterator targeting (5) and \(B\) is used to track \(HC\), i.e., \(B\approx HC\). Further, we use superscript in the parenthesis to denote iteration index, e.g., \(C^{(\ell)}\) denotes the iterator at the \(\ell\)-th iteration. Colon notation is used to denote the entire row or column, e.g., \(C^{(\ell)}_{i,\cdot}\) denotes the \(i\)-th row of \(C^{(\ell)}\).
The xCDFCI algorithm is composed of an iterative part with 5 steps and a post-processing step. At each iteration, the first step selects the determinant carrying the entry of maximum absolute value in an approximated gradient of (3). The second step then conducts a linesearch and updates \(C\), where a fourth-order polynomial is minimized to determine the optimal stepsize. In the third and fourth steps, the corresponding updates to \(B\) are calculated with compression, and a row of \(B\) is recalculated to improve accuracy with minimal additional cost. In the last step of the iterative part, energies are estimated via a generalized Rayleigh quotient procedure. When the iteration converges according to some stopping criteria, a post-processing step can be carried out to obtain the ground state vector and excited state vectors. In the following, we will explain each step of xCDFCI in detail.
#### Step 1: Determinant selection
This step aims to select a determinant for the update that potentially leads to the greatest decrease in \(f(C)\). The determinant selection strategy is as follows,
\[i^{(\ell+1)}=\operatorname*{arg\,max}_{\begin{subarray}{c}j\in\mathcal{I}_{H} (i^{(\ell)})\\ 0\leq k<K\end{subarray}}\biggl{|}4B^{(\ell)}_{j,k}+4C^{(\ell)}_{j,\cdot}\Bigl{[} \bigl{(}C^{(\ell)}\bigr{)}^{\top}C^{(\ell)}\Bigr{]}_{:,k}\biggr{|}, \tag{6}\]
where \(i^{(\ell+1)}\) is the argument \(j\) achieving the maximum value. Here \(\mathcal{I}_{H}(i^{(\ell)})\) denotes the set of determinants connected to \(i^{(\ell)}\) via \(H\), i.e., for any \(j\in\mathcal{I}_{H}(i^{(\ell)})\), \(H_{i^{(\ell)}j}\) is nonzero and for any \(j\not\in\mathcal{I}_{H}(i^{(\ell)})\), \(H_{i^{(\ell)}j}\) is zero. Due to the existence of zeros in one- and two-body integrals, \(\mathcal{I}_{H}(i^{(\ell)})\) is a subset of the single and double excitations from the \(i^{(\ell)}\)-th determinant. The intuition behind (6) is related to the gradient of \(f(C)\) (4). Comparing (6) and (4), we notice that the determinant is selected to be the row containing the gradient entry of largest absolute value, so that it potentially leads to the greatest reduction of the objective function.
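In a toy dense setting, the selection rule (6) amounts to scanning the rows connected to the current determinant and picking the one holding the largest-magnitude entry of the approximate gradient; a sketch (using plain numpy arrays instead of the hash-table storage of the actual code) is:

```python
import numpy as np

def select_determinant(i_curr, H, B, C, CtC):
    """Return the connected determinant j maximizing |4 B[j, k] + 4 (C[j, :] @ CtC)[k]| over k.

    B ~ H @ C (possibly compressed) and CtC ~ C.T @ C; dense arrays stand in for
    the sparse hash-table representation used in the real implementation.
    """
    connected = np.nonzero(H[i_curr, :])[0]                     # I_H(i_curr)
    scores = np.abs(4.0 * B[connected, :] + 4.0 * C[connected, :] @ CtC)
    return connected[np.argmax(scores.max(axis=1))]
```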
#### Step 2: Coefficient update
Given a selected determinant \(i^{(\ell+1)}\), we seek the best stepsize \(\tau\) and move the \(i^{(\ell+1)}\)-th row of the coefficient matrix \(C^{(\ell)}\) along the gradient direction with the stepsize. The best stepsize \(\tau\) is achieved via solving,
\[\tau=\operatorname*{arg\,min}_{\tilde{\tau}}f\bigl{(}C^{(\ell)}+\tilde{\tau}e _{i^{(\ell+1)},\cdot}\widetilde{G}_{i^{(\ell+1)},\cdot}\bigr{)}, \tag{7}\]
where \(e_{i^{(\ell+1)}}\) is a vector with \(i^{(\ell+1)}\)-th entry being one and zero otherwise, and
\[\widetilde{G}_{i^{(\ell+1)},\cdot}=4B^{(\ell)}_{i^{(\ell+1)},\cdot}+4C^{(\ell )}_{i^{(\ell+1)},\cdot}\bigl{(}C^{(\ell)}\bigr{)}^{\top}C^{(\ell)}\]
is the \(i^{(\ell+1)}\)-th row of the approximated gradient (4). Solving (7) is actually minimizing a fourth-order polynomial of \(\tilde{\tau}\) and all polynomial coefficients can be evaluated in \(O(K^{2})\) operations (details can be found in Appendix A). Once the stepsize \(\tau\) is determined, we update \(C^{(\ell)}\) as follows,
\[C^{(\ell+1)}_{i,\cdot}=\begin{cases}C^{(\ell)}_{i,\cdot}+\tau\widetilde{G}_{i,\cdot}&\text{if }i=i^{(\ell+1)};\\ C^{(\ell)}_{i,\cdot}&\text{otherwise}.\end{cases}\]
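Since \(f\) restricted to the search line is exactly a fourth-order polynomial in \(\tilde{\tau}\), its coefficients can be recovered from five evaluations and the minimizer located among the real roots of the cubic derivative; the dense toy sketch below illustrates only the line search (the actual code assembles the polynomial coefficients analytically in \(O(K^{2})\) operations, as noted above and detailed in Appendix A).

```python
import numpy as np

def line_search(H, C, i, g):
    """Optimal stepsize for the rank-one row update C[i, :] += tau * g (Eq. (7))."""
    def f(tau):
        Ct = C.copy()
        Ct[i, :] += tau * g
        return np.linalg.norm(H + Ct @ Ct.T, "fro") ** 2

    # f(tau) is a degree-4 polynomial, so 5 samples determine it exactly.
    taus = np.linspace(-1.0, 1.0, 5)
    poly = np.polynomial.Polynomial.fit(taus, [f(t) for t in taus], deg=4).convert()
    roots = poly.deriv().roots()
    real = roots[np.isreal(roots)].real          # stationary points of the quartic
    return min(real, key=f)                      # pick the one with the lowest objective
```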
#### Step 3: Coefficient compression
Throughout the algorithm, we keep all entries of \(C\). For \(B=HC\) without compression, however, the number of nonzeros in \(HC\) is much larger than that in \(C\), which we cannot afford to store in memory. Hence, we compress the representation of \(B\).
We use \(\operatorname*{supp}\bigl{(}B\bigr{)}\) to denote the set of determinants containing at least one nonzero coefficient, i.e., \(\operatorname*{supp}\bigl{(}B\bigr{)}=\{i:\max_{k}|B_{i,k}|>0\}\). Then we update and compress \(B^{(\ell)}\) as follows, for \(i=i^{(\ell+1)}\),
\[B^{(\ell+1)}_{j,\cdot}=\begin{cases}B^{(\ell)}_{j,\cdot}+\tau H_{j,i} \widetilde{G}_{i,\cdot}&\text{if }j\in\operatorname*{supp}\bigl{(}B^{(\ell)} \bigr{)}\\ \tau H_{j,i}\widetilde{G}_{i,\cdot}&\text{if }j\not\in\operatorname*{supp} \bigl{(}B^{(\ell)}\bigr{)}\text{ and }\\ \max_{k}|\tau H_{j,i}\widetilde{G}_{i,k}|>\varepsilon\end{cases}, \tag{8}\]
where \(\varepsilon\) is the pre-defined compression threshold. Equation (8) indicates that: for all pre-existing determinants in \(B\), the coefficients are updated accurately; while for new determinants, the coefficients are added only if they contain an important update. Obviously, the compression limits the growth of nonzeros in \(B\), and thus the data storage cost.
Now we explain the indirect connection to the compression of \(C\). According to (6), when a determinant is not in \(\,\mathrm{supp}\big{(}B^{(\ell)}\big{)}\), the corresponding gradient is zero, hence the determinant will not be selected, which in turn limits the growth of nonzeros in \(C\). Therefore, all compressions are explicitly applied to \(B\) only, and then indirectly limit the growth of nonzeros in \(C\).
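With \(B\) stored as a sparse map from determinants to coefficient rows, the update-and-compress rule (8) can be sketched as follows; the dictionary stands in for the hash table, and `H_column(i)`, returning the nonzero entries of the \(i\)-th column of \(H\), is an assumed helper.

```python
import numpy as np

def update_B(B, i, tau, g, H_column, eps):
    """Apply Eq. (8): exact updates on existing determinants, thresholded inserts on new ones.

    B: dict mapping determinant index -> numpy row of length K.
    H_column(i): iterable of (j, H[j, i]) over the nonzero entries of column i.
    """
    for j, h_ji in H_column(i):
        delta = tau * h_ji * g
        if j in B:
            B[j] = B[j] + delta                  # pre-existing determinant: always update
        elif np.max(np.abs(delta)) > eps:
            B[j] = delta.copy()                  # new determinant: keep only important updates
    return B
```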
#### Step 4: Coefficient recalculation
In (8), we already compute all nonzero entries in the \(i^{(\ell+1)}\)-th column of \(H\). Now we reuse these results to refine coefficients in \(B\). The \(i^{(\ell+1)}\)-th row in \(B\) is recalculated as follows,
\[B_{i^{(\ell+1)},:}^{(\ell+1)} =\sum_{j\in\mathcal{I}_{H}(i^{(\ell+1)})}H_{i^{(\ell+1)},j}C_{j,:} ^{(\ell+1)}\] \[=\sum_{j\in\mathcal{I}_{H}(i^{(\ell+1)})}H_{j,i^{(\ell+1)}}C_{j,: }^{(\ell+1)},\]
where the second equality is due to the symmetry property of the Hamiltonian. 2 This recalculation of \(B_{i^{(\ell+1)},:}^{(\ell+1)}\) is of essential importance when the \(i^{(\ell+1)}\)-th determinant is added to \(C^{(\ell)}\) for the first time. It removes potential errors made by compressions from earlier iterations and, together with (8), keeps \(B_{i^{(\ell+1)},:}\equiv H_{i^{(\ell+1)},:}C\) for all later iterations. From a numerical analysis viewpoint, the recalculation also preserves numerical accuracy. Since the number of iterations in xCDFCI could easily go beyond \(10^{8}\)-\(10^{10}\), the accumulation of numerical errors caused by finite-precision computations would destroy the accuracy of the energies. Regularly recalculating \(B_{i^{(\ell+1)},:}^{(\ell+1)}\) keeps \(B_{i^{(\ell+1)},:}\equiv H_{i^{(\ell+1)},:}C\) at a low level of numerical errors.
Footnote 2: If the Hamiltonian matrix is complex Hermitian, then a complex conjugate is needed in the equation.
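The recalculation reuses the nonzero entries of the \(i^{(\ell+1)}\)-th column of \(H\) already generated in Step 3; a small dictionary-based sketch (the `H_column` helper and the dict layout are illustrative assumptions, consistent with the compression sketch above) is:

```python
import numpy as np

def recalc_B_row(i, C, H_column, n_states):
    """Recompute B[i, :] = sum_j H[j, i] * C[j, :] from scratch (Step 4).

    C: dict mapping determinant index -> coefficient row of length n_states.
    H_column(i): iterable of (j, H[j, i]) over the nonzeros of column i, which
    Step 3 has already generated, so the extra cost is only the accumulation.
    """
    row = np.zeros(n_states)
    for j, h_ji in H_column(i):
        if j in C:                  # determinants absent from C carry zero coefficients
            row += h_ji * np.asarray(C[j])
    return row
```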
#### Step 5: Energy estimation
Given a coefficient matrix \(C^{(\ell+1)}\), the energy estimation is conducted through a generalized Rayleigh quotient of second-order accuracy, which solves a generalized eigenvalue problem of matrix pair \(\Big{(}\big{(}C^{(\ell+1)}\big{)}^{\top}HC^{(\ell+1)},\big{(}C^{(\ell+1)} \big{)}^{\top}C^{(\ell+1)}\Big{)}\), i.e.,
\[\Big{(}\big{(}C^{(\ell+1)}\big{)}^{\top}HC^{(\ell+1)}\Big{)}U=\Big{(}\big{(}C^ {(\ell+1)}\big{)}^{\top}C^{(\ell+1)}\Big{)}U\Gamma, \tag{9}\]
for \(U\) being eigenvectors and \(\Gamma\) being the eigenvalue matrix 3. Since only the coefficients of a determinant are updated, both matrices can be updated accordingly,
Footnote 3: We assume \(U\) is a \(\big{(}(C^{(\ell+1)})^{\top}C^{(\ell+1)}\big{)}\) orthonormalized eigenvector matrix, i.e., \(U^{\top}\big{(}(C^{(\ell+1)})^{\top}C^{(\ell+1)}\big{)}U=I\).
\[\big{(}C^{(\ell+1)}\big{)}^{\top}C^{(\ell+1)}=\big{(}C^{(\ell)} \big{)}^{\top}C^{(\ell)}\] \[+\tau\Big{(}\big{(}C^{(\ell)}_{i^{(\ell+1)},:}\big{)}^{\top} \widetilde{G}_{i^{(\ell+1)},:}+\widetilde{G}_{i^{(\ell+1)},:}^{\top}C^{(\ell +1)}_{i^{(\ell+1)},:}\Big{)}\] \[+\tau^{2}\widetilde{G}_{i^{(\ell+1)},:}^{\top}\widetilde{G}_{i^{( \ell+1)},:},\]
and,
\[\big{(}C^{(\ell+1)}\big{)}^{\top}HC^{(\ell+1)}=\big{(}C^{(\ell)} \big{)}^{\top}HC^{(\ell)}\] \[+\tau\Big{(}\big{(}B^{(\ell+1)}_{i^{(\ell+1)},:}\big{)}^{\top} \widetilde{G}_{i^{(\ell+1)},:}+\widetilde{G}_{i^{(\ell+1)},:}^{\top}B^{(\ell +1)}_{i^{(\ell+1)},:}\Big{)}\] \[-\tau^{2}H_{i^{(\ell+1)}i^{(\ell+1)}}\widetilde{G}_{i^{(\ell+1)},: }^{\top}\widetilde{G}_{i^{(\ell+1)},:}.\]
Since \(B^{(\ell+1)}_{i^{(\ell+1)},:}\) was recalculated in the previous step, both matrices are numerically accurate and not affected by our compression. The updated matrix \(\big{(}C^{(\ell+1)}\big{)}^{\top}C^{(\ell+1)}\) is also reused in the gradient computation of the next iteration. After the energy estimation, we check the stopping criteria. If the criteria are satisfied, we move on to post-processing; otherwise, we go back to the first step.
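Since the pair \(\big{(}C^{\top}HC,\,C^{\top}C\big{)}\) is only \(K\times K\), the generalized eigenproblem in (9) can be solved densely; a sketch using SciPy (our choice here for illustration, not necessarily the solver in the actual code) is given below.

```python
from scipy.linalg import eigh

def rayleigh_ritz(CtHC, CtC):
    """Solve the small generalized eigenproblem (C^T H C) U = (C^T C) U Gamma, cf. Eq. (9).

    Both inputs are K-by-K (K = number of targeted states), so a dense solver suffices.
    Returns the eigenvalue estimates in ascending order and the CtC-orthonormal eigenvectors U.
    """
    gamma, U = eigh(CtHC, CtC)
    return gamma, U
```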
#### Post-processing
When the algorithm converges, energies of low-lying excited states are already available in \(\Gamma\). If excited states are needed for downstream tasks, e.g., reduced density matrix computations, the coefficient matrix \(C\) needs to be transformed back to eigenvectors \(V\), and the transformation is as simple as,
\[V\approx CU, \tag{10}\]
where \(U\) is the eigenvector matrix in (9).
### Implementation
We now discuss some implementation details, including the data structures for \(C\) and \(B\), the stopping criteria, and the handling of molecular symmetry.
#### Data structure
In Wang et al. [14] several data structures were implemented and discussed, including the hash table, the red-black tree, etc. Among these, the hash table achieves the best computational performance for CDFCI. Thus, for xCDFCI, we also adopt hash tables as our overall data structure. For the single-threaded version of our implementation, a Robin Hood hash table is adopted [44], whereas for the multi-threaded version, a Cuckoo hash table is adopted [45, 46]. In both hash tables, the keys are the binary representations of the determinants. Given a key corresponding to a determinant with index \(i\), the bucket of the hash table holds two vectors, \(B_{i,:}\) and \(C_{i,:}\). Based on our tests of CDFCI, hash table accesses cost nearly half of the runtime. Hence, in designing the algorithm and data structure of xCDFCI, we balance the number of hash table accesses against the number of entry updates. For each iteration in xCDFCI, where the number of hash table accesses equals the number of nonzeros in the selected column of \(H\), we update the entire row of \(B\) and \(C\), i.e., both the ground state and excited states of the selected determinant. In xCDFCI, hash table accesses cost less than half of the runtime, and the per-iteration cost of xCDFCI is less than \(K\) times that of CDFCI. The drawback of our data structure implementation is that it ignores sparsity across states. For example, consider the scenario where, for a given determinant, the value of one excited state is non-compressible while the values of all other states are compressible. Our implementation would treat the values of all states as non-compressible and allocate memory for them. In the trade-off between hash table access cost and memory efficiency, we thus favor the former in the implementation of xCDFCI.
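As a toy illustration of the bucket layout (not the actual Robin Hood/Cuckoo implementation), each determinant key maps to its rows of \(C\) and \(B\):

```python
import numpy as np

class WaveFunctionTable:
    """Toy sketch of the xCDFCI storage: one bucket per determinant.

    Each bucket stores the K coefficients C[i, :] and the K entries B[i, :] ~= (H C)[i, :].
    Keys are the binary (occupation-number) representation of the determinant.
    """
    def __init__(self, num_states):
        self.K = num_states
        self.table = {}                       # det bitstring -> (C_row, B_row)

    def get(self, det):
        if det not in self.table:
            self.table[det] = (np.zeros(self.K), np.zeros(self.K))
        return self.table[det]

    def update(self, det, dC, dB):
        C_row, B_row = self.get(det)
        self.table[det] = (C_row + dC, B_row + dB)

wf = WaveFunctionTable(num_states=3)
wf.update(det=0b0011, dC=np.ones(3), dB=0.1 * np.ones(3))
```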
#### Stopping criteria
The stopping criteria for coordinate descent methods are usually more complicated than those for general gradient descent methods. In gradient descent methods, the norm of the gradient is often used as the stopping criterion: for non-stiff problems, when the norm is sufficiently small, we are confident that the iterate is close to a first-order stationary point. However, for coordinate descent methods, we often cannot afford to check the entire gradient vector, as is the case in xCDFCI. It is also risky to stop when a single entry update \(\tau\widetilde{G}_{i,:}\) is small. Hence, in our implementation, we adopt the accumulated entry updates as the stopping criterion, i.e.,
\[\mathrm{tol}_{n}=\sum_{\ell=1}^{n}\beta^{n-\ell}\Big{\|}\tau^{(\ell)}\widetilde{G}_{i^{(\ell)},:}\Big{\|},\]
where \(n\) is the current iteration index, \(\beta\) is a discounting factor strictly smaller than one, and \(\tau^{(\ell)}\) is the best stepsize at the \(\ell\)-th iteration. The accumulated entry updates can be evaluated iteratively,
\[\mathrm{tol}_{n}=\Big{\|}\tau^{(n)}\widetilde{G}_{i^{(n)},:}\Big{\|}+\beta\cdot\mathrm{tol}_{n-1},\]
and only a single tol needs to be kept in memory. Throughout, the discounting factor \(\beta\) is left as a hyperparameter. The suggested value for \(\beta\) would be in the range of \([0.99,0.999]\).
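A possible realization of this running criterion is sketched below; the concrete value of \(\beta\) is a hyperparameter, as stated above.

```python
import numpy as np

class DiscountedTol:
    """Exponentially discounted accumulation of entry updates: tol_n = ||tau * g|| + beta * tol_{n-1}."""
    def __init__(self, beta=0.995):
        self.beta = beta
        self.tol = 0.0

    def push(self, tau, g_row):
        self.tol = np.linalg.norm(tau * g_row) + self.beta * self.tol
        return self.tol

crit = DiscountedTol(beta=0.995)
converged = crit.push(tau=0.01, g_row=np.array([1e-4, 2e-4])) < 1e-6
```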
#### Symmetry
The symmetry of the basis set is mainly handled via the Hartree-Fock calculation. By default, we adopt Psi4 [47] as our Hartree-Fock solver. In the Psi4 calculation, we first determine the irreducible representations of the molecular point group of the target system. The symmetry is then configured
through the DOCC and SOCC options. Once the symmetry is embedded in the Hartree-Fock calculation, the one-body and two-body integrals dumped to file are computed under the defined symmetry; hence, the calculation carried out by xCDFCI also obeys the symmetry. More precisely, in our experiments, the singlet calculation is realized by setting the molecular multiplicity to singlet, after which Psi4 automatically determines the irreducible representations and performs the calculation. The triplet calculation is done in a similar fashion.
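For concreteness, a minimal Psi4 input in the spirit of the above could look as follows; the geometry reuses the H\({}_{2}\)O parameters used in our experiments, while the per-irrep DOCC/SOCC values are illustrative assumptions that must be adapted to the target system.

```python
import psi4

# Illustrative only: a C2v water molecule; occupations per irrep (A1, A2, B1, B2) are assumed.
mol = psi4.geometry("""
0 1
O
H 1 0.9751
H 1 0.9751 2 110.565
symmetry c2v
""")

psi4.set_options({
    "basis": "cc-pvdz",
    "reference": "rhf",
    "docc": [3, 0, 1, 1],   # doubly occupied orbitals per irrep (assumed values)
    "socc": [0, 0, 0, 0],   # singly occupied orbitals per irrep
})
scf_energy, wfn = psi4.energy("scf", return_wfn=True)
# the one- and two-body integrals are then dumped (e.g., as an FCIDUMP) for the xCDFCI run
```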
## 3 Numerical Results
In this section, we perform a sequence of numerical experiments for H\({}_{2}\)O, C\({}_{2}\), and N\({}_{2}\) under the cc-pVDZ basis set. In all experiments, the one-body and two-body integrals are calculated by Psi4.[47] The FCI excited states are calculated by our homebrewed package CDFCI.[48] All energies are reported in Hartree (Ha).
### H\({}_{2}\)O excited states
This section calculates the excited states of H\({}_{2}\)O at its equilibrium geometry. The OH bonds are of length 0.9751 Å, and the HOH bond angle is 110.565\({}^{\circ}\). The maximum memory for the calculation is 480 GB and the compression tolerance is 0 (no compression). With the cc-pVDZ basis set, there are 10 electrons and 24 orbitals involved in the calculation. Throughout, the reference energy of the ground state is \(-\)76.2418601 Ha, and reference energies of excited states are the numerical results at one hundred million iterations of xCDFCI.
From Table 1 and Figure 1, we can see that the energy error drops quickly at the beginning and then decays more slowly but steadily, eventually reaching the level of \(10^{-4}\) mHa. According to Figure 1, energies associated with lower excited states are generally of better accuracy. The only exception for H\({}_{2}\)O is the energy of the third excited state, which achieves better accuracy than the first and second excited-state energies. From Table 1, we find that each state quickly converges to chemical accuracy. After a burn-in stage (the first few thousand iterations), the runtime is linear with respect to the number of iterations. Hence, if Figure 1 were redrawn with energy errors plotted against runtime, the curves would behave similarly, and the decay would remain linear after the burn-in stage.
### N\({}_{2}\) excited states
This section calculates the excited states of N\({}_{2}\) at its equilibrium geometry. The nitrogen dimer N\({}_{2}\) is more challenging than H\({}_{2}\)O because its correlation is stronger, so we use a compression threshold of \(10^{-5}\). The N\({}_{2}\) bond length is 1.12079 Å. The maximum memory in this section is limited to 960 GB. With the cc-pVDZ basis set, there are 14 electrons and 28 orbitals. The results of N\({}_{2}\) are reported in Table 2 and Figure 2. Throughout, the reference energy of the ground state is \(-\)109.28210 Ha, and reference energies of excited states are the numerical results of xCDFCI at one hundred million iterations.
The convergence trend of N\({}_{2}\) is similar to that of H\({}_{2}\)O except that the convergence rate in N\({}_{2}\) is slower. Similarly, after the first million iterations, xCDFCI converges linearly and the convergence rates are quite stable for both the ground state and excited states. Therefore, we conclude that xCDFCI is stable and efficient for
Figure 1: Convergence of energies of six low-lying excited states of H\({}_{2}\)O against the number of iterations.
various chemical systems with different correlation strengths. For N\({}_{2}\), xCDFCI takes about ten thousand seconds to achieve chemical accuracy. Convergence rates of all states are approximately the same. Unlike H\({}_{2}\)O, where the runtime scales linearly with respect to the number of iterations, for N\({}_{2}\) the runtime scales sublinearly. This is mainly due to the compression: when the compression criterion is activated, the computational cost for compressed determinants is far less than that for uncompressed ones. Comparing Table 1 and Table 2, we notice that the runtime of N\({}_{2}\) is smaller than that of H\({}_{2}\)O. Although the N\({}_{2}\) system is larger, the compression with tolerance \(10^{-5}\) eliminates much of the computation and thus reduces the runtime. Therefore, the compression technique is efficient and reliable.
\begin{table}
\begin{tabular}{c c c c c} \hline \multirow{2}{*}{Energy (Ha)} & \multicolumn{4}{c}{Number of Iterations} \\ \cline{2-5} & \(10^{4}\) & \(10^{7}\) & \(2\cdot 10^{7}\) & \(5\cdot 10^{7}\) \\ \hline Ground State & -76.2_312241_ & -76.24185_69_ & -76.24185_94_ & -76.241860\(0\) \\
1st Excited State & -75.8_803222_ & -75.89433_36_ & -75.89433_64_ & -75.89433_71_ \\
2nd Excited State & -75.8_452881_ & -75.86048_22_ & -75.86048_51_ & -75.860485\(8\) \\
3rd Excited State & -75.6_550559_ & -75.67311_55_ & -75.67311_87_ & -75.67311_95_ \\
4th Excited State & -75.5_669476_ & -75.58467_40_ & -75.58467_75_ & -75.584678\(3\) \\
5th Excited State & -75.3_466894_ & -75.4844_768_ & -75.48448_24_ & -75.484483\(6\) \\ \hline Wall time (sec) & 67.14 & 19414.28 & 38477.38 & 90759.04 \\ \hline \end{tabular}
\end{table}
Table 1: Convergence of energy of H\({}_{2}\)O. Italics indicate inaccurate digits.
\begin{table}
\begin{tabular}{c c c c c} \hline \multirow{2}{*}{Energy (Ha)} & \multicolumn{4}{c}{Number of Iterations} \\ \cline{2-5} & \(10^{5}\) & \(10^{6}\) & \(10^{7}\) & \(5\cdot 10^{7}\) \\ \hline Ground State & -109.2_6836_ & -109.28_077_ & -109.282_04_ & -109.2821\(5\) \\
1st Excited State & -108.7_1546_ & -108.7_3196_ & -108.733_90_ & -108.7340\(7\) \\
2nd Excited State & -108.6_4376_ & -108.66_050_ & -108.662_91_ & -108.6631\(1\) \\
3rd Excited State & -108.6_3613_ & -108.65_935_ & -108.660_83_ & -108.6609\(6\) \\
4th Excited State & -108.6_0848_ & -108.62_885_ & -108.631_10_ & -108.6313\(0\) \\
5th Excited State & -108.5_8040_ & -108.60_141_ & -108.603_70_ & -108.6039\(2\) \\ \hline Wall time (sec) & 315.75 & 2140.65 & 15129.69 & 57403.06 \\ \hline \end{tabular}
\end{table}
Table 2: Convergence of energy of N\({}_{2}\).
Figure 2: Convergence of energies of six low-lying excited states of N\({}_{2}\) against the number of iterations.
### Carbon dimer binding curves
In this section we test C\({}_{2}\) with bond lengths from 1 to 2.6 Å. We compute the five low-lying energies of C\({}_{2}\) in both singlet and triplet. The maximum memory in this section is 120 GB and the compression tolerance is 0. With the cc-pVDZ basis set, there are 12 electrons and 56 orbitals. We run xCDFCI for 1 million iterations. In all configurations, all states reach chemical accuracy.
The energies of the five low-lying states of C\({}_{2}\) in singlet and triplet are shown in Tables 3 and 4. Binding curves are depicted in Figure 3 and Figure 4 for singlet and triplet, respectively. We can see that the triplet energies lie above the singlet energies overall, which is consistent with our intuition for C\({}_{2}\). In general, the binding curves of lower-energy states are smoother in Figure 3 and Figure 4. In both figures, we find many crossover points; each crossover point corresponds to a geometry at which two states are degenerate. Lower-energy binding curves have fewer crossover points. The binding curve of the fourth excited state would have many crossover points with those of higher excited states, although the latter are not calculated.
## 4 Conclusion and Discussion
We proposed xCDFCI in this paper as an efficient solver for low-lying excited states under the FCI framework. xCDFCI adopts an extension of the objective function of the CDFCI method. More precisely, xCDFCI extends the single-column version (ground state) to a multi-column version (low-lying excited states), leading to (3). A tailored coordinate descent method is then applied to address (3). xCDFCI first selects the determinant with the largest entry in magnitude in the approximated gradient, and then the selected row of the iteration variable \(C\) is updated, i.e., the coefficients of that determinant for all states are updated. To avoid memory overflow, a hard-thresholding type compression is applied to \(B\approx HC\), with \(H\) being the Hamiltonian matrix, which in turn limits the growth of nonzeros in \(C\). Finally, we carefully maintain the double-precision accuracy of \(C^{\top}C\) and \(C^{\top}HC=C^{\top}B\), and estimate the eigenvalues through a generalized Rayleigh quotient procedure. Based on standard numerical linear algebra results,[49] the ground state and low-lying excited-state vectors are first-order accurate, whereas the corresponding energies are second-order accurate. In summary, xCDFCI extends CDFCI to the calculation of low-lying excited states and inherits almost all desired properties of CDFCI. Numerical results on various chemical systems demonstrate the
Figure 4: Low-lying potential energy surfaces of carbon dimer in triplet in the cc-pVDZ basis.
Figure 3: Low-lying potential energy surfaces of the carbon dimer in singlet in the cc-pVDZ basis.
efficiency of xCDFCI.
There are a few promising future directions. First of all, xCDFCI has not fully exploited the sparsity of the low-lying excited states. Due to the nature of (3), the objective function is rotation invariant, i.e., it remains the same for \(C\) and \(CQ\) with \(Q\) being an orthogonal matrix. Hence, xCDFCI converges to the eigenspace formed by the desired ground state and low-lying excited states, but it is not guaranteed to converge to the sparse eigenvectors directly. Some recent works [38, 39, 40] provide promising paths to address this sparsity issue. Second, the basis sets, so far, remain the Hartree-Fock molecular orbitals. Applying orbital optimization methods such as CASSCF [50] or OptOrbFCI [51] together with xCDFCI would be an interesting future direction. Lastly, we did not incorporate the compressed evaluation of the Hamiltonian matrix or other perturbative approximations as in other FCI excited-state work [29, 33], which could be combined with xCDFCI to further accelerate the proposed method.
**Acknowledgement** The work of ZW is supported by the US National Science Foundation under awards DMS-1454939 and OAC-1450280. This work is part of ZW's PhD thesis at Duke University. YL is supported in part by the
\begin{table}
\begin{tabular}{c c c c c c} \hline \multirow{2}{*}{R(Å)} & \multicolumn{5}{c}{Energy of five low-lying states (Ha)} \\ \cline{2-7} & 1st & 2nd & 3rd & 4th & 5th \\ \hline
1.0 & -74.97988 & -74.89885 & -74.84309 & -74.69436 & -74.65199 \\
1.1 & -75.10432 & -75.09015 & -75.0109 & -74.87853 & -74.835 \\
1.2 & -75.19382 & -75.18473 & -75.06591 & -74.97813 & -74.93362 \\
1.3 & -75.24464 & -75.23045 & -75.06598 & -75.02661 & -74.98179 \\
1.4 & -75.26426 & -75.24972 & -75.04558 & -75.045 & -75.00319 \\
1.5 & -75.26637 & -75.25299 & -75.0469 & -75.02119 & -75.01464 \\
1.6 & -75.25934 & -75.24766 & -75.04087 & -75.02075 & -75.0046 \\
1.7 & -75.24796 & -75.23809 & -75.03234 & -75.02249 & -75.00545 \\
1.8 & -75.23588 & -75.22774 & -75.02666 & -75.02315 & -75.00045 \\
1.9 & -75.22402 & -75.21741 & -75.02409 & -75.02374 & -74.99841 \\
2.2 & -75.19435 & -75.19093 & -75.03627 & -75.03534 & -75.03191 \\
2.5 & -75.17333 & -75.17156 & -75.0603 & -75.05961 & -75.04945 \\ \hline \end{tabular}
\end{table}
Table 4: Energy of five low-lying states of C\({}_{2}\) in triplet.
\begin{table}
\begin{tabular}{c c c c c c} \hline \multirow{2}{*}{R(Å)} & \multicolumn{5}{c}{Energy of five low-lying states (Ha)} \\ \cline{2-5} & 1st & 2nd & 3rd & 4th & 5th \\ \hline
1.0 & -75.55231 & -75.37074 & -75.34005 & -75.25824 & -75.24635 \\
1.1 & -75.67528 & -75.52584 & -75.52314 & -75.42099 & -75.40454 \\
1.2 & -75.7246 & -75.6188 & -75.61144 & -75.51174 & -75.46344 \\
1.3 & -75.73152 & -75.66195 & -75.65091 & -75.55151 & -75.4995 \\
1.4 & -75.71569 & -75.67459 & -75.66213 & -75.56135 & -75.5052 \\
1.5 & -75.68951 & -75.67034 & -75.65703 & -75.55471 & -75.49432 \\
1.6 & -75.66102 & -75.65712 & -75.64203 & -75.53995 & -75.47353 \\
1.7 & -75.64014 & -75.63676 & -75.62022 & -75.52375 & -75.4532 \\
1.8 & -75.62201 & -75.6169 & -75.59619 & -75.5117 & -75.45362 \\
1.9 & -75.60453 & -75.59944 & -75.57506 & -75.50693 & -75.44587 \\
2.2 & -75.56245 & -75.55941 & -75.53746 & -75.51137 & -75.46051 \\
2.5 & -75.53929 & -75.53814 & -75.52593 & -75.51654 & -75.49545 \\ \hline \end{tabular}
\end{table}
Table 3: Energy of five low-lying states of C\({}_{2}\) in singlet.
National Natural Science Foundation of China (12271109) and the Science and Technology Commission of Shanghai Municipality (22TQ017).
|
2301.09506 | OvarNet: Towards Open-vocabulary Object Attribute Recognition | In this paper, we consider the problem of simultaneously detecting objects
and inferring their visual attributes in an image, even for those with no
manual annotations provided at the training stage, resembling an
open-vocabulary scenario. To achieve this goal, we make the following
contributions: (i) we start with a naive two-stage approach for open-vocabulary
object detection and attribute classification, termed CLIP-Attr. The candidate
objects are first proposed with an offline RPN and later classified for
semantic category and attributes; (ii) we combine all available datasets and
train with a federated strategy to finetune the CLIP model, aligning the visual
representation with attributes, additionally, we investigate the efficacy of
leveraging freely available online image-caption pairs under weakly supervised
learning; (iii) in pursuit of efficiency, we train a Faster-RCNN type model
end-to-end with knowledge distillation, that performs class-agnostic object
proposals and classification on semantic categories and attributes with
classifiers generated from a text encoder; Finally, (iv) we conduct extensive
experiments on VAW, MS-COCO, LSA, and OVAD datasets, and show that recognition
of semantic category and attributes is complementary for visual scene
understanding, i.e., jointly training object detection and attributes
prediction largely outperform existing approaches that treat the two tasks
independently, demonstrating strong generalization ability to novel attributes
and categories. | Keyan Chen, Xiaolong Jiang, Yao Hu, Xu Tang, Yan Gao, Jianqi Chen, Weidi Xie | 2023-01-23T15:59:29Z | http://arxiv.org/abs/2301.09506v1 | # OvarNet: Towards Open-vocabulary Object Attribute Recognition
###### Abstract
In this paper, we consider the problem of simultaneously detecting objects and inferring their visual attributes in an image, even for those with no manual annotations provided at the training stage, resembling an open-vocabulary scenario. To achieve this goal, we make the following contributions: (i) we start with a naive two-stage approach for open-vocabulary object detection and attribute classification, termed CLIP-Attr. The candidate objects are first proposed with an offline RPN and later classified for semantic category and attributes; (ii) we combine all available datasets and train with a federated strategy to fine-tune the CLIP model, aligning the visual representation with attributes, additionally, we investigate the efficacy of leveraging freely available online image-caption pairs under weakly supervised learning; (iii) in pursuit of efficiency, we train a Faster-RCNN type model end-to-end with knowledge distillation, that performs class-agnostic object proposals and classification on semantic categories and attributes with classifiers generated from a text encoder; Finally, (iv) we conduct extensive experiments on VAW, MSCOCO, LSA, and OVAD datasets, and show that recognition of semantic category and attributes is complementary for visual scene understanding, i.e., jointly training object detection and attributes prediction largely outperform existing approaches that treat the two tasks independently, demonstrating strong generalization ability to novel attributes and categories.
## 1 Introduction
Understanding the visual scene in terms of objects has been the main driving force for development in computer vision, for example, in object detection, the goal is to localise objects in an image and assign one of the pre-defined semantic labels to them, such as a 'car', 'person' or 'bus', despite tremendous success has been made by the community, such task definition has largely over-simplified our understanding of the visual world, as a visual object can often be characterised from many aspects other than semantic category, for example, a bus can be 'yellow' or 'black', a shirt can be'striped' or 'unpatterned', learning attributes can thus complement category-level recognition, acquiring more comprehensive visual perception.
In the literature, numerous work has shown that understanding the objects' attributes can greatly facilitate object
recognition and detection, even with few or no examples of visual objects [6, 18, 25, 43, 53], for example, Farhadi _et al._ proposed to shift the goal of object recognition from 'naming' to 'description', which allows naming familiar objects with attributes, but also to say something about unfamiliar objects ("hairy and four-legged", not just "unknown") [6]; Lampert _et al._ considered the open-set object recognition, that aims to recognise objects by human-specified high-level description, _e.g._, arbitrary semantic attributes, like shape, color, or even geographic information, instead of training images [18]. However, the problem considered in these seminal work tends to be a simplification from today's standard, for example, attribute classification are often trained and evaluated on object-centric images under the close-set scenario, _i.e._, assuming the bounding boxes/segmentation masks are given [13, 29, 38], or sometimes even the object category are known as a prior [26, 29].
In this paper, we consider the task of simultaneously detecting objects and classifying the attributes in an open-vocabulary scenario, _i.e._, the model is only trained on a set of base object categories and attributes, while it is required to generalise towards ones that are unseen at training time, as shown in Fig. 1. Generally speaking, we observe three major challenges: _First_, in the existing foundation models, _e.g._, CLIP [33] and ALIGN [15], the representation learned from image-caption pairs tends to bias towards object category, rather than attributes, which makes it suffer from feature misalignment when used directly for attribute recognition. We experimentally validate this conjecture by showing a significant performance drop in attribute recognition, compared to category classification; _Second_, there is no ideal training dataset with three types of annotations, object bounding boxes, semantic categories, and attributes; as far as we know, only the COCO Attributes dataset [28] provides such a degree of annotations, but with a relatively limited vocabulary size (196 attributes, 29 categories); _Third_, training all three tasks under a unified framework is challenging and yet remains unexplored, _i.e._, simultaneously localising ('where'), classifying objects' semantic categories and attributes ('what') under the open-vocabulary scenario.
To address the aforementioned issues, we start with a naive architecture, termed as CLIP-Attr, which first proposes object candidates with an offline RPN [37], and then performs open-vocabulary object attribute recognition by comparing the similarity between the attribute word embedding and the visual embedding of the proposal. To better align the feature between attribute words and proposals, we introduce learnable prompt vectors with parent attributes on the textual encoder side and finetune the original CLIP model on a large corpus of the freely available image-caption datasets. To further improve the model efficiency, we present OvarNet, a unified framework that performs detection and attributes recognition at once, which is trained by leveraging datasets from both object detection and attribute prediction, as well as absorbing knowledge from CLIP-Attr to improve the performance and robustness of unseen attributes. As a result, our proposed OvarNet, being the first scalable pipeline, can simultaneously localize objects and infer their categories with visual attributes in an open-vocabulary scenario. Experimental results demonstrate that despite only employing weakly supervised image-caption pairs for distillation, OvarNet outperforms previous the state-of-the-art on VAW [29], MSCOCO [22], LSA [30] and OVAD [4] datasets, exhibiting strong generalization ability on novel attributes and categories.
## 2 Related Work
**Attribute Prediction.** Visual attributes describe an object/scene from various aspects, for example, color, texture, shape, material, state, etc., allowing object categories to be represented in a combinatorial manner. However, annotating attributes can be very time-consuming; early efforts only focused on specific domains such as fashion [47, 48], faces [10, 49], and animals [1, 41], posing severe limitations for real-world deployment. With the release of large-scale datasets including COCO Attributes [28], Visual Genome [17], and VAW [29], recent work considers building models for large-vocabulary attribute classification [29, 44]. Nonetheless, these methods only perform multi-class classification on pre-computed image patches, which not only fail to acquire object localization ability but also incur extra computation overhead due to redundant feature extraction passes. Additionally, other methods such as SCoNE [29] require the object category as input to perform attribute prediction, leading to extra complexity in practice. In this work, we aim to build a unified framework that can jointly address object localization, category prediction, and attribute prediction in an open-vocabulary scenario, relieving the aforementioned practical limitations.
**Open-vocabulary Object Detection.** Open-vocabulary object detection strives to detect all objects, including those that are unseen at the training stage. Existing approaches [2, 7, 9, 52] achieve the open-vocabulary capability by replacing the detector's classifier with object category word embeddings from a pre-trained visual-language model, _e.g._, CLIP, and performing category classification via embedding matching. Specifically, OVR-CNN [45] proposes an efficient training approach with image-caption pairs that can be easily obtained from the web. ViLD [9] adopts distillation to infuse open-vocabulary knowledge into a two-stage detector, and Detic [52] increases the size of the detector's vocabulary to twenty thousand by exploiting a large dataset (ImageNet-21K) with image-level annotations. PromptDet [7] leverages the pre-trained CLIP [33] and
aligns the detector's visual embeddings with text embeddings via learnable prompts. However, none of these models considers simultaneously inferring attributes for the detected objects.
**Zero-shot Learning.** Zero-shot learning aims to extend the model's capability towards recognising objects beyond the categories seen at training time [2, 34, 35, 9]. In the context of object detection, early zero-shot solutions rely on visual attributes to infer unseen categories [14, 19, 24, 43], aiming to represent categories by attributes so that the model can generalize from seen to unseen categories. Recent methods adopt vision-language feature alignment to achieve zero-shot learning, based on similarity computation between visual features and text concepts.
## 3 Methodology
In this section, we start by introducing the problem scenario (Sec. 3.1), followed by describing a naive architecture for open-vocabulary attribute classification by steering a pre-trained CLIP model, dubbed CLIP-Attr (Sec. 3.2), and finally, we further distill the knowledge from CLIP-Attr into a more efficient two-stage detection architecture called OvarNet, which can perform detection and attribute prediction in a unified framework (Sec. 3.3).
### Problem Scenario
Assuming we are given a training dataset, _i.e._, \(\mathcal{D}_{\text{train}}=\{(\mathcal{I}_{1},y_{1}),\dots,(\mathcal{I}_{N},y_ {N})\}\), where \(\mathcal{I}_{i}\in\mathbb{R}^{H\times W\times 3}\) refers to an image, and \(y_{i}=\{b_{i},c_{i},a_{i}\}\) denotes its corresponding ground-truth annotations, with the coordinates of \(n_{i}\) object bounding boxes (\(b_{i}\in\mathbb{R}^{n_{i}\times 4}\)), their corresponding semantic categories (\(c_{i}\in\mathbb{R}^{n_{i}\times\mathcal{C}_{\text{base}}}\)), and a set of binary attributes for each object (\(a_{i}\in\{0,1\}^{n_{i}\times\mathcal{A}_{\text{base}}}\)). Our goal is to train a model that can process any image from a test set (\(\mathcal{I}_{k}\sim\mathcal{D}_{\text{test}}\)), simultaneously localising the objects and inferring their semantic categories and visual attributes:
\[\{\hat{b}_{k},\hat{c}_{k},\hat{a}_{k}\}=\Phi_{\text{CLS}}\circ\Phi_{\text{LOC} }(\mathcal{I}_{k})\]
where the image is progressively processed by a class-agnostic object localization, and open-vocabulary attributes classification, to produce the \(\hat{b}_{k}\in\mathbb{R}^{n_{k}\times 4},\hat{c}_{k}\in\mathbb{R}^{n_{k} \times\mathcal{C}_{\text{test}}}\) and \(\hat{a}_{k}\in\{0,1\}^{n_{k}\times\mathcal{A}_{\text{test}}}\). Note that, at inference time, the objects may be of unseen/novel semantic categories or attributes, _i.e._, \(\mathcal{C}_{\text{test}}=\mathcal{C}_{\text{base}}\cup\mathcal{C}_{\text{ novel}}\), \(\mathcal{A}_{\text{test}}=\mathcal{A}_{\text{base}}\cup\mathcal{A}_{\text{ novel}}\), thus the considered problem falls into open-vocabulary object attributes recognition. For simplicity, we will omit the subscript \(k\) while describing the proposed models. To avoid redundancies, we treat the category as a super-attribute for modeling our pipeline unless otherwise specified.
Figure 2: An overview of the proposed method. **Left:** the two-step training procedure for finetuning the pre-trained CLIP to get CLIP-Attr that better aligns the regional visual feature to attributes. **Step-I:** naive federate training by base attribute annotations. **Step-II:** training by image-caption pairs. We first conduct RPN on the whole image to get box-level crops, parse the caption to get noun phrases, categories, and attributes, and then match these fine-grained concepts for weakly supervised training. **Right:** the proposed one-stage framework OvarNet. We inherit the CLIP-Attr for open-vocabulary object attribute recognition. Regional visual feature is learned from the attentional pooling of proposals; while attribute concept embedding is extracted from the text encoder. Solid lines declare the standard federated training regime. Dashed lines denote training by knowledge distillation with CLIP-Attr.
### Two-stage Object Attribute Recognition
In this section, we describe a two-stage open-vocabulary attribute classification method, termed CLIP-Attr, that first uses a class-agnostic region proposal network (RPN) to generate object candidates, then verifies the candidates with category and attributes using a finetuned CLIP:
\[\{\hat{b}_{k}\} =\Phi_{\text{LOC}}=\Phi_{\text{crpn}}(\mathcal{I})\] \[\{\hat{c}_{k},\hat{a}_{k}\} =\Phi_{\text{CLS}}=\Phi_{\text{cls}}\circ\Phi_{\text{clip-v}}\circ \Phi_{\text{crop}}(\mathcal{I},\{\hat{b}_{k}\})\]
where \(\Phi_{\text{crpn}}(\mathcal{I})\) is a class-agnostic RPN, \(\Phi_{\text{cls}}(\cdot)\) represents attribute classification, \(\Phi_{\text{clip-v}}(\cdot)\) denotes the CLIP visual encoder, and \(\Phi_{\text{crop}}(\cdot)\) is an operation that crops the box regions \(\hat{b}_{k}\) from the input image.
#### 3.2.1 Object-centric Visual Encoding
**Class-agnostic Region Proposal.** To propose the candidate regions that potentially have objects situated, we employ a Faster-RCNN [37] based region proposal network that parametrises the anchor classification and bounding box regression in a class-agnostic manner, _i.e.,_\(\Phi_{\text{crpn}}(\cdot)\) shares parameters for all categories. Inspired by the observation in [7, 52, 16], we train the proposal network only on base categories offline, and it shows sufficient generalization ability towards unseen categories.
**RoI Visual Pooling.** Given the pre-defined object boxes, we acquire the image crops (\(\Phi_{\text{crop}}(\cdot)\)) and feed them into the CLIP image encoder (\(\Phi_{\text{clip-v}}(\cdot)\)) to compute regional visual embeddings \(\hat{v}_{i}\in\mathbb{R}^{1\times D}\), where \(i\) denotes the \(i\)-th region.
#### 3.2.2 Open-vocabulary Attributes Classification
**Generating Attribute Embedding.** To compute attribute embeddings, we employ the pre-trained text encoder from CLIP (\(\Phi_{\text{clip-t}}(\cdot)\)), and use two variants of prompts for better aligning the attribute with the visual region features: (i) for each attribute, we employ prior knowledge of ontologies and encode its parent-class word along with the attribute; for example, the embedding for the 'wet' attribute can be expanded as \(\Phi_{\text{clip-t}}(\text{wet},\text{state})\) to better distinguish it from \(\Phi_{\text{clip-t}}(\text{water},\text{material})\) or \(\Phi_{\text{clip-t}}(\text{in water},\text{place})\); (ii) we augment the prompt with multiple learnable prompt vectors. As a consequence, the attribute embeddings can be computed as:
\[\begin{split}\hat{t}_{j}=\Phi_{\text{clip-t}}([p_{0},\cdots,p_{i},\text{g}(\text{attribute}),p_{i+1},\cdots,p_{j},\\ \text{g}(\text{parent-attribute}),p_{j+1},\cdots,p_{k}])\end{split} \tag{1}\]
where \(\text{g}(\cdot)\) denotes the tokenisation procedure, and \(p_{i}\) (\(i\in\{0,1,\cdots,k\}\)) are the learnable prompt vectors, which have the same dimension as the attribute word embeddings; they are shared across all attributes and can generalize towards unseen attributes at inference time.
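For clarity, a small sketch of how the prompt sequence in Eq. (1) could be assembled is shown below; the 10/10/10 split of learnable vectors and the embedding dimension are illustrative assumptions, and the resulting sequence is then passed through the CLIP text encoder.

```python
import numpy as np

def build_prompt_sequence(attr_tokens, parent_tokens, prefix, middle, suffix):
    """Assemble the token-embedding sequence of Eq. (1):
    [prefix vectors, attribute tokens, middle vectors, parent-attribute tokens, suffix vectors].

    All inputs are arrays of shape (n, D); the learnable vectors are shared across attributes.
    """
    return np.concatenate([prefix, attr_tokens, middle, parent_tokens, suffix], axis=0)

D = 512
prefix, middle, suffix = (np.random.randn(10, D) for _ in range(3))   # 3 groups of 10 learnable vectors
seq = build_prompt_sequence(np.random.randn(1, D), np.random.randn(1, D), prefix, middle, suffix)
# `seq` would then be fed through the text encoder to obtain the attribute embedding t_j
```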
**Attribute Classification.** Attribute prediction can be obtained by computing the similarity between visual region feature and attribute concept embedding as:
\[\hat{s}_{ij}=\Phi_{\text{cls}}(\hat{v}_{i},\hat{t}_{j})=\sigma(\langle\hat{v}_ {i}^{T},\hat{t}_{j}\rangle/\tau), \tag{2}\]
where both \(\hat{v}_{i}\) and \(\hat{t}_{j}\) are L2-normalised, and \(\hat{s}_{ij}\) denotes the likelihood that the \(i\)-th region exhibits the \(j\)-th attribute. \(\tau\) is a temperature parameter and \(\sigma\) denotes the sigmoid function.
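A minimal NumPy sketch of Eq. (2), computing per-proposal attribute probabilities from L2-normalised embeddings, could read as follows; the temperature value is an illustrative assumption.

```python
import numpy as np

def attribute_scores(region_feats, attr_embs, tau=0.01):
    """Eq. (2): sigmoid of the scaled cosine similarity between region and attribute embeddings.

    region_feats : (n, D) visual embeddings of the proposals
    attr_embs    : (m, D) text embeddings of the attribute vocabulary
    returns      : (n, m) matrix of per-attribute probabilities
    """
    v = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = attr_embs / np.linalg.norm(attr_embs, axis=1, keepdims=True)
    logits = v @ t.T / tau
    return 1.0 / (1.0 + np.exp(-logits))
```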
#### 3.2.3 Training Procedure
In this section, we describe the training procedure for open-vocabulary attributes classification, which strives to better align the regional visual feature to the attribute description.
**Step-I: Federated Training.** In order to align the regional visual feature to attributes, an ideal training dataset should contain three types of annotations, namely, object bounding boxes, semantic categories, and attributes, as far as we know, the COCO Attributes dataset [28] is the only one that provides such a level of annotations, but with a very limited vocabulary size (196 attributes, 29 categories).
To fully exploit the annotations in existing datasets, we combine a detection dataset, _e.g._, COCO [22], and an attribute prediction dataset, _e.g._, VAW [29]. Specifically, we follow the standard procedure for training the class-agnostic region proposal network with images from COCO, _i.e._, SmoothL1 and Binary Cross Entropy (BCE) losses are applied for box coordinate regression and objectness prediction. For training attribute/category classification, as illustrated in the top-left part of Fig. 2, we employ ground-truth bounding boxes to crop the objects and compute their visual embeddings with the pre-trained visual encoder from CLIP; we then finetune CLIP's **text encoder** by optimising a BCE loss for multi-label attribute classification, as follows,
\[\mathcal{L}_{\text{cls}}=\frac{1}{N}\sum\nolimits_{i=1}^{N}w_{i}\cdot\text{ BCE}(\hat{s}_{i},s_{i}) \tag{3}\]
where \(N=|\mathcal{C}_{\text{base}}|+|\mathcal{A}_{\text{base}}|\) denotes the total number of base categories and attributes, \(i\) indexes the \(i\)-th category/attribute, \(\hat{s}_{i}\) is the predicted probability, and \(s_{i}\in\{0,1,\text{unk}\}\) denotes an attribute label being negative, positive, or missing. By default, missing attributes are treated as negative with a re-weighting factor, _i.e._, \(s_{i}=0\) during training. \(w_{i}\propto 1/{f_{i}}^{\gamma}\) with \(\sum\nolimits_{i=1}^{N}w_{i}=N\), where \(f_{i}\) indicates the occurrence frequency of the \(i\)-th attribute in the training set, and \(\gamma=0.25\) is a smoothing factor. As a result, this step ends up with a finetuned CLIP text encoder that better aligns the regional visual features to attributes, referred to as \(\Phi_{\text{CLIP-Attr}}(\cdot)\).
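The re-weighted multi-label BCE of Eq. (3) can be sketched as follows; the function and variable names are illustrative, and missing labels are assumed to have been mapped to negatives beforehand.

```python
import numpy as np

def federated_bce(probs, labels, freqs, gamma=0.25, eps=1e-8):
    """Re-weighted multi-label BCE in the spirit of Eq. (3).

    probs  : (N,) predicted probabilities over base categories + attributes
    labels : (N,) binary targets (missing labels already set to 0)
    freqs  : (N,) per-class occurrence counts in the training set
    """
    w = 1.0 / np.power(freqs, gamma)
    w = w * len(w) / w.sum()                       # normalise so the weights sum to N
    bce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return np.mean(w * bce)
```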
**Step-II: Training with Image-caption Dataset.** To further improve the alignment, especially for novel attributes, we also consider using freely available image-caption
datasets, _e.g._, \(\mathcal{D}_{\text{img-cap}}=\{\{\mathcal{I}_{1},s_{1}\},\ldots,\{\mathcal{I}_{N}, s_{N}\}\}\), where \(\mathcal{I}_{i}\) and \(s_{i}\) refer to an image and its caption sentence, respectively. We detect all objects in each image with the class-agnostic object proposal network described in Sec. 3.2.1. We keep the largest box proposal (\(b^{*}\)) and those with top-K objectness scores (\(b^{k}\)), and crop the original images with the inferred bounding boxes. We pass these crops through \(\Phi_{\text{CLIP-Attr}}(\cdot)\) to get predictions for semantic categories and attributes, and keep those with confidence scores higher than 0.7 as pseudo-positive labels. In addition, for caption sentences, we use TextBlob [23] to parse all captions into 'semantic category', 'attribute', and 'noun phrase' entries based on the COCO and VAW dictionaries. For example, the sentence "A striped zebra is eating green grass" is processed and converted to {category: 'zebra'}, {attribute: 'green', 'striped'}, {noun phrase: 'striped zebra', 'green grass'}.
To this end, we continue finetuning the alignment model (\(\Phi_{\text{CLIP-Attr}}(\cdot)\)) with the pseudo ground truths obtained from the pre-processing stage. In detail, we compute the visual and textual embeddings as in **Step-I**; however, as the labels obtained from captions or from the model's predictions are not guaranteed to be correct, special care is required. We adopt multi-instance contrastive learning (MIL-NCE) [27], which maximizes the accumulated similarity score of positive matches between the visual and textual embeddings as follows:
\[\mathcal{L}_{\text{MIL-NCE}}=-\log\frac{\sum\limits_{(v,t)\in\mathcal{P}}\exp\big{(}v^{\top}t/\tau\big{)}}{\sum\limits_{(v,t)\in\mathcal{P}}\exp\big{(}v^{\top}t/\tau\big{)}+\sum\limits_{(v^{\prime},t^{\prime})\in\mathcal{N}}\exp\big{(}v^{\prime\top}t^{\prime}/\tau\big{)}} \tag{4}\]
where \(\mathcal{P}\) is a set of _positive_ pairs of image-crop features and textual concept embeddings, and \(\mathcal{N}\) refers to the associated set of _negative_ pairs. Here, we pair the largest box (\(b^{*}\)) with the concepts parsed from the given caption, _i.e._, noun phrases, attributes, and semantic categories, while for the other top-K boxes (\(b^{k}\)), we treat the **model-inferred** categories and attributes as positives. We then continue training both the **visual and text encoders** in \(\Phi_{\text{CLIP-Attr}}\) by optimising the following loss:
\[\mathcal{L}_{\text{cls}}=1/K\cdot\sum\limits_{k=0}^{K}\mathcal{L}_{\text{MIL- NCE}}^{k} \tag{5}\]
where \(\mathcal{L}_{\text{MIL-NCE}}^{k}\) denotes MIL-NCE loss over the \(k\)th box and the corresponding textual concepts (here, we treat the largest box \(b^{*}\) as the 0th). An overview is shown in the bottom-left of Fig. 2.
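For a single region embedding, the MIL-NCE objective of Eq. (4) can be sketched as below; grouping positives and negatives per box is a simplification of the batched formulation, and the temperature value is an assumption.

```python
import numpy as np

def mil_nce(v, pos_embs, neg_embs, tau=0.05):
    """MIL-NCE of Eq. (4) for one region feature v and its positive/negative concept sets.

    v        : (D,) L2-normalised region embedding
    pos_embs : (P, D) embeddings of matched noun phrases / attributes / categories
    neg_embs : (Q, D) embeddings of non-matching concepts
    """
    pos = np.exp(pos_embs @ v / tau).sum()
    neg = np.exp(neg_embs @ v / tau).sum()
    return -np.log(pos / (pos + neg))
```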
### Distilled Object Attribute Recognition
Although open-vocabulary object attribute prediction can be realised by the above-proposed \(\Phi_{\text{CLIP-Attr}}\) with pre-computed proposals, the inference procedure is time-consuming, because every cropped region is fed into the visual encoder. In this section, we aim to address the slow inference speed and train a Faster-RCNN type model end-to-end for object detection and attribute prediction, termed OvarNet (Open-vocabulary attribute recognition):
\[\{\hat{b}_{k},\hat{c}_{k},\hat{a}_{k}\}=\Phi_{\text{Ovar}}=\Phi_{\text{cls}} \circ\Phi_{\text{crpn}}\circ\Phi_{\text{v-enc}}(\mathcal{I})\]
where the image is sequentially processed by a visual encoder, class-agnostic region proposal, and open-vocabulary attributes classification, as illustrated in the right of Fig. 2.
**Visual Encoder.** To start with, the input image is fed into a visual backbone, obtaining multi-scale feature maps:
\[\mathcal{F}=\{f^{1},\ldots,f^{l}\}=\Phi_{\text{v-enc}}(\mathcal{I}) \tag{6}\]
where \(f^{i}\) refers to the feature map at the \(i\)-th level; we adopt the visual encoder from \(\Phi_{\text{CLIP-Attr}}\).
**Class-agnostic Region Encoding.** To extract regional visual embeddings for candidate objects, we perform class-agnostic region encoding as follows,
\[\{\hat{v}_{1},\ldots,\hat{v}_{n}\}=\Phi_{\text{crpn}}=\Phi_{\text{attn-pool}}\circ\Phi_{\text{roi-align}}\circ\Phi_{\text{rpn}}(\mathcal{F}) \tag{7}\]
specifically, the feature pyramid is used in the region proposal network to fuse multi-scale features. The RoI-Align output (\(\mathbb{R}^{14\times 14\times 256}\)) is first down-sampled with a convolutional layer (stride \(2\) and kernel size \(2\times 2\)), and then passed into a block of 4 Transformer encoder layers with a learnable token, acting as attentional pooling. As a result, \(\hat{v}_{i}\in\mathbb{R}^{1\times D}\) refers to the feature embedding of the \(i\)-th candidate object. We train the proposal network only on base categories, as described in Sec. 3.2.1.
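A sketch of this region-encoding head in PyTorch is given below; the number of attention heads, the feed-forward width, and the output projection dimension are assumptions, since only the layer counts and kernel sizes are specified above.

```python
import torch
import torch.nn as nn

class AttnPoolHead(nn.Module):
    """Sketch of the region encoder: 2x2/stride-2 conv on the 14x14x256 RoI-Align map,
    then 4 Transformer encoder layers with a prepended learnable token as attentional pooling."""
    def __init__(self, dim=256, out_dim=512, num_layers=4, num_heads=8):
        super().__init__()
        self.down = nn.Conv2d(dim, dim, kernel_size=2, stride=2)     # 14x14 -> 7x7
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.token = nn.Parameter(torch.zeros(1, 1, dim))            # learnable pooling token
        self.proj = nn.Linear(dim, out_dim)                          # map to the text embedding space

    def forward(self, roi_feats):                                    # (n, 256, 14, 14)
        x = self.down(roi_feats).flatten(2).transpose(1, 2)          # (n, 49, 256)
        x = torch.cat([self.token.expand(x.size(0), -1, -1), x], dim=1)
        x = self.encoder(x)
        return self.proj(x[:, 0])                                    # (n, out_dim) region embeddings

region_embs = AttnPoolHead()(torch.randn(2, 256, 14, 14))
```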
**Open-vocabulary Attributes Classification.** We extract attribute concept embeddings as in Sec. 3.2.2. After obtaining the embeddings for each of the proposed objects, we can classify them into arbitrary attributes or categories by measuring the similarity between the visual and attribute embeddings (Eq. 2).
**Federated Training.** We combine both COCO and VAW, and adopt a similar federated training strategy as in CLIP-Attr, with the key difference being that we jointly supervise localization for the class-agnostic region proposal and classification for attribute prediction. The overall loss function can be formulated as \(\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{cls}}+\lambda_{\text{RPN}}\cdot \mathcal{L}_{\text{RPN}}\), where \(\lambda_{\text{RPN}}\) is a re-weighting parameter.
Intuitively, if the visual and textual embedding spaces can be well-aligned by training on a limited number of base categories/attributes, the model should be capable of open-vocabulary object attribute recognition with the aforementioned training procedure; however, in practice, we observe
unsatisfactory performance on the novel categories and attributes. We therefore incorporate additional knowledge distillation from the CLIP-Attr model described in Sec. 3.2.3 to improve the model's ability to handle unseen categories and attributes.
**Training via Knowledge Distillation.** In addition to the federated training loss \(\mathcal{L}_{\text{total}}\), we introduce an extra distillation item \(\mathcal{L}_{\text{dist}}\), that encourages similar prediction between \(\Phi_{\text{CLIP-Attr}}(\cdot)\) and \(\Phi_{\text{Ovar}}(\cdot)\):
\[\mathcal{L}_{\text{dist}}(\hat{s},s)=\frac{1}{N}\sum\nolimits_{i=1}^{N}\text {KL}(\hat{s}_{i},s_{i}), \tag{8}\]
where \(\hat{s}\) denotes the prediction probabilities over all attributes from OvarNet, and \(s\) denotes the prediction obtained by feeding the corresponding image crops to the aligned \(\Phi_{\text{CLIP-Attr}}\). KL denotes the Kullback-Leibler divergence.
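Treating each of the \(N\) classes as a Bernoulli variable, the distillation term of Eq. (8) can be sketched as below; the averaging and the epsilon for numerical stability are our own simplifications.

```python
import numpy as np

def distill_kl(student_probs, teacher_probs, eps=1e-8):
    """Eq. (8): per-class KL divergence between teacher (CLIP-Attr) and student (OvarNet)
    sigmoid probabilities, averaged over the N attribute/category classes."""
    p, q = teacher_probs, student_probs
    kl = p * np.log((p + eps) / (q + eps)) + (1 - p) * np.log((1 - p + eps) / (1 - q + eps))
    return kl.mean()
```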
## 4 Experimental Setup
### Datasets
Here, we introduce the datasets for training and evaluation of our proposed models for open-vocabulary object attribute recognition. Note that, while training the model, we have to consider two aspects of the openness evaluation, one is on semantic category, and the other is on the attributes.
**MS-COCO**[22]. We follow the setup for generalized zero-shot detection as proposed in ZSD [2]: 48 classes are selected as base classes (\(\mathcal{C}_{\text{base}}\)), and 17 classes are used as unseen/novel classes (\(\mathcal{C}_{\text{novel}}\)). The train and minival sets are the same as standard MS-COCO 2017. At the training stage, only the images with base category objects are used.
**VAW**[29]. For attribute recognition, VAW is constructed from VGPhraseCut [42] and GQA [11], containing a large vocabulary of 620 attributes, for example, color, material, shape, size, texture, action, _etc_. Each instance is annotated with positive, negative, and missing attributes. In our experiments, we sample half of the 'tail' attributes and 15% of the 'medium' attributes as the novel set (\(\mathcal{A}_{\text{novel}}\), 79 attributes) and the remaining as the base set (\(\mathcal{A}_{\text{base}}\), 541 attributes). More details are included in the supplementary material.
**Image-Caption Datasets**. Conceptual Captions 3M (CC-3M) [39] contains 3 million image-text pairs harvested from the web with wide diversities, and COCO Caption (COCO-Cap) [5] comprises roughly 120k images and 5-way image-caption curated style annotations. We only keep images whose pairing captions have overlapped attributes or categories in the COCO and VAW dictionaries. We refer to the two subsets as CC-3M-sub and COCO-Cap-sub.
**LSA**[30]. A recent work by Pham _et al._ proposed the Large-Scale object Attribute dataset (LSA). LSA is constructed with all the images and their parsed objects and attributes of the Visual Genome (VG) [17], GQA [11], COCO-Attributes [28], Flickr30K-Entities [31], MS-COCO [22], and a portion of Localized Narratives (LNar) [32]. Here, we evaluate the effectiveness of our proposed method with the same settings proposed in the original paper: LSA common (4921 common attributes for the base set, 605 common attributes for the novel set); LSA common \(\rightarrow\) rare (5526 common attributes for the base set, 4012 rare attributes for the novel set).
**OVAD**[4]. OVAD introduces the open-vocabulary attribute detection task with a clean and densely annotated attribute evaluation benchmark (no training set is provided). The benchmark defines 117 attribute classes for over 14,300 object instances.
**Summary**. We have constructed the COCO-base and VAW-base datasets for training, and COCO-novel and VAW-novel for evaluation purposes, with the former pair for _object category classification_ and the latter for _object attribute classification_. To align regional visual features with attributes in CLIP-Attr, we use COCO-base and VAW-base for **Step-I** training, and then use CC-3M-sub and COCO-Cap-sub for **Step-II** finetuning. Later, COCO-base and VAW-base are employed in distilling knowledge from CLIP-Attr to OvarNet for efficiency. On the OVAD benchmark, since no training data is provided, we directly evaluate the OvarNet trained with COCO, VAW, and COCO-Cap-sub. On the LSA dataset, we train OvarNet with the base attribute annotations of LSA common and LSA common \(\rightarrow\) rare for evaluation purposes. We refer the reader to a more detailed table with dataset statistics in the supplementary material.
### Evaluation Protocol and Metrics
Our considered open-vocabulary object attribute recognition involves two sub-tasks: open-vocabulary object detection and classifying the attributes of all detected objects. We evaluate the two sub-tasks in both **box-given** and **box-free** settings, on COCO for category detection and on VAW, LSA, and OVAD for attribute prediction. Specifically, the box-given setting is widely used in the attribute prediction and object recognition communities [8, 29, 38, 40], where the ground-truth bounding box annotations are assumed to be available for all objects, and the protocol only evaluates object category classification and multi-label attribute classification with the mAP metric; in contrast, the box-free setting poses a more challenging problem, as the model is also required to simultaneously localise the objects and classify their semantic category and attributes.
Note that the annotations on existing attribute datasets, such as VAW and LSA, are **not exhaustive or object-centric**: (i) not all objects are labeled in an image, and (ii) some annotations are on stuff, i.e., uncountable amorphous regions such as sky and grass. We therefore strike a balance in the box-free setting for attributes by matching each predicted box to the ground-truth box with the largest IoU, and then evaluating the attribute predictions using mAP. We consider the aforementioned metrics over base set classes, novel set classes, and all classes.
### Implementation details
**CLIP-Attr Training.** We use the pre-trained R50-CLIP as the visual backbone to get object-centric visual features; all cropped regions are resized to \(224\times 224\) based on the short side with the original aspect ratio kept. Similar to Detic [52], we use sigmoid activation and a multi-label binary cross-entropy loss for classification. We adopt the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.001, a weight decay of 0.0001, and a momentum of 0.9. In **Step-I** training, we train the prompt vectors and text encoder for 50 epochs. In **Step-II** training with the image-caption dataset, we further finetune the entire model (both visual and textual encoders) for another 40 epochs. We select the top-30 proposals while pre-processing COCO-Cap-sub, and the top-15 for CC-3M-sub, as the images are often object-centric in the latter case.
**OvarNet Training.** In OvarNet, we initialise the visual backbone with the trained CLIP-Attr (ResNet50 without the AttentionPool2d layer) and keep its text encoder frozen for efficiency. We adopt the AdamW optimizer with a learning rate of 0.0001. The models are trained for 30 epochs with the distillation term, and 60 epochs without distillation. We employ 640-800 scale jittering and horizontal flipping, and the temperature parameter \(\tau\) is configured to be trainable. Following prior observations, we empirically set \(\gamma=0.25\) and \(\lambda_{\text{RPN}}=1\). All experiments are conducted on 8 NVIDIA A100 GPUs.
**Prompt Engineering.** We have experimented with different numbers of prompt vectors; empirically, we take 30 vectors and divide them into three groups of 10, inserted before, between, and after the attribute and parent-class attribute words. For the prompts used to encode noun phrases, we use 16 learnable vectors, _i.e._, 8 before and 8 after the phrase embedding.
### Ablation Study
We conduct ablation studies on VAW and COCO datasets, to thoroughly validate the effectiveness of proposed components, including prompt learning with the parent-class attribute, different losses for training CLIP-Attr, and the effect of **Step-I** and **Step-II** training. Finally, we validate the effectiveness of knowledge distillation.
**Prompt Learning with Parent-class Attribute.** In attribute embedding, we employ two variants of prompts for better aligning the attribute with the visual region features. We compare the learned prompt to the manual prompt while training \(\Phi_{\text{CLIP-Attr}}\) with the pre-annotated object boxes. As shown in Tab. 1, compared to the results from only using plain attribute words, using carefully designed prompts [9, 51], for example, "It is a photo of [category]" and "The attribute of the object is [attribute]" for the category and attribute words, indeed delivers improvements; adding the parent-class word to the prompt template, _i.e._, "The attribute of the object is [attribute], and it is a [parent-attribute]", alleviates lexical ambiguity and leads to a considerable improvement on novel categories and attributes. Finally, our proposed prompt learning with parent-class attribute words further brings an improvement of 3.61/3.85 AP on novel attributes/categories, compared to the manual prompt with parent-class words.
**Effect of Step-I and Step-II Training.** We first compare the performance of attribute classification on regional visual features (assuming ground-truth object boxes are given) before and after **Step-I** training. As illustrated in Tab. 2, the original plain CLIP model already exhibits attribute classification ability at an elementary level. We can see a substantial improvement by further training on COCO-base and VAW-base, for example, from 46.15 to 57.39 AP for novel attribute classification, and 41.13 to 45.82 AP
\begin{table}
\begin{tabular}{c c c|c c|c c} \hline \hline \multirow{2}{*}{**Attribute**} & \multirow{2}{*}{**Parent**} & \multirow{2}{*}{**M/L**} & \multicolumn{2}{c|}{**VAW**} & \multicolumn{2}{c}{**COCO**} \\ & & & **AP\({}_{\text{novel}}\)** & **AP\({}_{\text{all}}\)** & **AP\({}_{\text{novel}}\)** & **AP\({}_{\text{all}}\)** \\ \hline ✓ & ✗ & none & 52.15 & 59.16 & 40.53 & 49.84 \\ ✓ & ✗ & M & 53.64 & 62.22 & 41.65 & 52.35 \\ ✓ & ✓ & M & 53.78 & 62.76 & 41.97 & 52.81 \\ ✓ & ✗ & L & 55.73 & 64.54 & 42.77 & 53.80 \\ ✓ & ✓ & L & 57.39 & 66.92 & 45.82 & 55.21 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation study on prompt engineering with CLIP-Attr model. **M/L** denotes whether manually designed prompts or learnable prompts are used.
\begin{table}
\begin{tabular}{c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Training Data**} & \multicolumn{3}{c|}{**VAW**} & \multicolumn{3}{c}{**COCO**} \\ & & **AP\({}_{\text{base}}\)** & **AP\({}_{\text{novel}}\)** & **AP\({}_{\text{all}}\)** & **AP\({}_{\text{base}}\)** & **AP\({}_{\text{novel}}\)** & **AP\({}_{\text{all}}\)** \\ \hline Plain CLIP & none & 47.69 & 46.15 & 47.53 & 38.56 & 41.13 & 39.45 \\ \hline \(\Phi_{\text{CLIP-Attr}}\) & COCO-base & 49.03 & 47.07 & 48.75 & 99.33 & 42.49 & 53.93 \\ \(\Phi_{\text{CLIP-Attr}}\) & VAW-base & 67.71 & 57.68 & 65.32 & 38.90 & 42.54 & 39.98 \\ \(\Phi_{\text{CLIP-Attr}}\) & COCO-base \(\rightarrow\) VAW-base & 67.90 & 57.39 & 66.92 & 58.26 & 45.82 & 55.21 \\ \hline \(\Phi_{\text{CLIP-Attr}}\) & \(+\) CC-3M-sub & 69.79 & 59.16 & 68.87 & 65.79 & 48.90 & 61.36 \\ \(\Phi_{\text{CLIP-Attr}}\) & \(+\) COCO-Cap-sub & 70.24 & 57.73 & 69.03 & 69.62 & 52.61 & 65.17 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Oracle test for Step-I and Step-II training with **objects’ boxes given**. ‘Plain CLIP’ directly classifies cropped images with a manual prompt.
\begin{table}
\begin{tabular}{c c c c|c c|c c} \hline \hline \multicolumn{4}{c|}{**MIL-NCE**} & \multicolumn{2}{c|}{**VAW**} & \multicolumn{2}{c}{**COCO**} \\ \(b^{*}\)**-cap.** & \(b^{*}\)**-phr.** & \(b^{*}\)**-attr.** & \(b^{k}\)**-attr.** & **AP\({}_{\text{novel}}\)** & **AP\({}_{\text{all}}\)** & **AP\({}_{\text{novel}}\)** & **AP\({}_{\text{all}}\)** \\ \hline ✓ & & & & 57.39 & 66.92 & 45.82 & 55.21 \\ ✓ & ✓ & & & 57.45 & 66.94 & 45.87 & 55.36 \\ ✓ & ✓ & & & 57.42 & 67.87 & 48.29 & 57.92 \\ ✓ & ✓ & ✓ & & 57.61 & 69.33 & 51.83 & 63.80 \\ ✓ & ✓ & ✓ & & 57.73 & 69.03 & 52.61 & 65.17 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The effect of different weakly supervised loss terms in Step-II training. We conduct ablation studies with COCO-Cap-sub dataset. \(b^{*}\) and \(b^{k}\) refer to the largest object proposal and top-K objectness proposals of an image respectively.
for novel category classification. Furthermore, by incorporating image-caption datasets in **Step-II** training, the performance is further improved to 69.03/65.17 AP on all attributes/categories. In the following experiments, we employ the model \(\Phi_{\text{CLIP-Attr}}\) that exploits COCO-Cap-sub for Step-II training.
**Effect of Different Losses for Step-II Training.** We investigate the performance variation by adjusting the supervision signals in Step-II training (Eq. 5). As illustrated in Tab. 3, performance tends to grow monotonically as supervision terms are added, from 66.92/55.21 to 69.03/65.17 AP on all attributes/categories, indicating that all supervision signals contribute.
**Knowledge Distillation.** We validate the necessity of knowledge distillation while training OvarNet. Specifically, we experiment by training the model with the federated loss only, as well as with two knowledge distillation approaches, _i.e._, distilling the regional visual features (Feat.) from the visual encoder of \(\Phi_{\text{CLIP-Attr}}\), like ViLD [9], or the prediction probabilities (Prob.) over all attributes from the matching scores of \(\Phi_{\text{CLIP-Attr}}\). We achieve the distillation by constraining the regional visual feature or prediction probability of OvarNet to be the same as that of \(\Phi_{\text{CLIP-Attr}}\), employing an L2/L1 loss on features and a KL loss on probabilities. As shown in Tab. 4, we make two observations: _first_, knowledge distillation is essential; _second_, knowledge gained from attribute prediction probabilities is more beneficial for improving performance on the novel sets, _e.g._, from 51.87/33.17 to 56.43/54.10 AP when compared to the L2 loss on visual features, in particular for semantic classification on COCO.
**Different Architectures in \(\Phi_{\text{CLIP-Attr}}\).** We have evaluated the performance on different pre-trained CLIP architectures for attribute classification, such as R50 and ViT-B/16, and then conducted knowledge distillation from the different \(\Phi_{\text{CLIP-Attr}}\) models. As seen in Tab. 5, both architectures perform competitively, with transformer-based architectures consistently outperforming the ConvNet ones.
**Updating OvarNet's Visual Backbone.** We experiment by updating or freezing the OvarNet's visual backbone from different initializations at training. As shown in Tab. 6, initialising the visual backbone from aligned \(\Phi_{\text{CLIP-Attr}}\) is advantageous, whereas finetuning or freezing it makes little difference. For efficiency, we opt to freeze the visual backbone in other experiments.
In comparison with RCNN [45], ViLD [9], Region CLIP [50], PromptDet [7], and Detic [52], our best model obtains 54.10/35.17 AP for novel categories, surpassing the recent state-of-the-art ViLD-ens [9] and Detic [52] by a large margin, showing that attribute understanding is beneficial for open-vocabulary object recognition. Fig. 3 shows some prediction results of OvarNet.
**Cross-dataset Transfer on OVAD Benchmark.** We compare with other state-of-the-art methods on the OVAD benchmark [4]. Following the same evaluation protocol, we conduct zero-shot cross-dataset transfer evaluation with CLIP-Attr and OvarNet trained on the COCO Caption dataset. The metric is average precision (AP) over different attribute frequency groups, 'head', 'medium', and 'tail'. As shown in Tab. 8, our proposed models outperform the other competitors by a noticeable margin.
**Evaluation on LSA Benchmark.** We evaluate the proposed OvarNet on the same benchmark proposed by Pham _et al._[30]. As OpenTAP employs a Transformer-based architecture with object category and object bounding box as the additional prior inputs, we have evaluated two settings. One is the original OvarNet without any additional input information; the other integrates the object category embedding as an extra token into the transformer encoder layer in Sec. 3.3. As shown in Tab. 9, OvarNet outperforms prompt-based CLIP by a large margin and surpasses OpenTAP (proposed in the benchmark paper) under the same scenario, _i.e._, with additional category embedding introduced. 'Attribute prompt' means the prompt designed with formats similar to "A photo of something that is [attribute]", while 'object-attribute prompt' denotes "A photo of [category] [attribute]". For the 'combined prompt', the outputs of the 'attribute prompt' and the 'object-attribute prompt' are weighted average.
## 5 Conclusion
In this paper, we consider the problem of open-vocabulary object detection and attribute recognition, _i.e._, simultaneously localising objects and inferring their semantic categories and visual attributes. We start with a naive two-stage framework (CLIP-Attr) that uses a pre-trained CLIP to classify the object proposals; to better align the object-centric visual features with attribute concepts, we use learnable prompt vectors on the textual encoder side. On the training side, we adopt a federated training strategy to exploit both object detection and attribute prediction datasets, and explore a weakly supervised training regime with external image-text pairs to increase robustness when recognising novel attributes. Finally, for computational efficiency, we distill the knowledge of CLIP-Attr into a Faster-RCNN-type model (termed OvarNet). Evaluating on four different benchmarks, _i.e._, VAW, MS-COCO, LSA, and OVAD, we show that jointly training object detection and attribute prediction is beneficial for visual scene understanding: it largely outperforms existing approaches that treat the two tasks independently and demonstrates strong generalisation to novel attributes and categories.
|
2302.08348 | A robust statistical framework for cyber-vulnerability prioritisation
under partial information in threat intelligence | Proactive cyber-risk assessment is gaining momentum due to the wide range of
sectors that can benefit from the prevention of cyber-incidents by preserving
integrity, confidentiality, and the availability of data. The rising attention
to cybersecurity also results from the increasing connectivity of
cyber-physical systems, which generates multiple sources of uncertainty about
emerging cyber-vulnerabilities. This work introduces a robust statistical
framework for quantitative and qualitative reasoning under uncertainty about
cyber-vulnerabilities and their prioritisation. Specifically, we take advantage
of mid-quantile regression to deal with ordinal risk assessments, and we
compare it to current alternatives for cyber-risk ranking and graded responses.
For this purpose, we identify a novel accuracy measure suited for rank
invariance under partial knowledge of the whole set of existing
vulnerabilities. The model is tested on both simulated and real data from
selected databases that support the evaluation, exploitation, or response to
cyber-vulnerabilities in realistic contexts. Such datasets allow us to compare
multiple models and accuracy measures, discussing the implications of partial
knowledge about cyber-vulnerabilities on threat intelligence and
decision-making in operational scenarios. | Mario Angelelli, Serena Arima, Christian Catalano, Enrico Ciavolino | 2023-02-16T15:05:43Z | http://arxiv.org/abs/2302.08348v4 | # Cyber-risk Perception and Prioritization for Decision-Making and Threat Intelligence
###### Abstract
Cyber-risk assessment is gaining momentum due to the wide range of research and innovation sectors that can benefit from the prevention of cyber-incidents. The increasing connectivity of digital and (cyber-)physical systems requires more attention to cyber-security to enhance the integrity, confidentiality, and availability of data.
We introduce a general framework supporting the prioritization of cyber-vulnerabilities, using flexible regression models that enhance the interpretability of the analysis for decision-making. We take advantage of Mid-Quantile regression as a robust method to deal with ordinal severity assessment, and we compare it to the state-of-the-art models for cyber-risk ranking and graded responses, identifying a novel accuracy measure suited for the decision-maker's prioritization.
Our model is grounded on real data from selected databases that support the exploitation of cyber-vulnerabilities in real contexts. The variety of information arising from such datasets allows us to compare multiple models based on their predictive performance, showing how accessible information can influence perception and, hence, decision-making in operational scenarios. Applications for threat intelligence functionalities are discussed too.
Keywords: Cyber-risk; Ranking; Risk perception; Mid-Quantile regression;
## 1 Introduction
Cyber-vulnerabilities of devices, networks, or Information and Communication Technologies (ICTs) can generate system failures or pave the way to different types of cyber-attacks, including Denial-of-Service, Malware injection, and data exfiltration. These incidents can also be enhanced by social engineering, with secondary cascading effects in complex ICTs (system-of-systems, see, e.g., [10]) that may compromise or interrupt service supply, undermining the operational continuity of critical infrastructures.
New vulnerabilities are emerging from the increasing number of connections among digital systems, now including personal devices, sensors, and (computational or storage) cloud services,
which represent an access point to other information systems through privilege escalation [3, 26]; the latter amplifies the severity of cyber-vulnerabilities and represents a weakness when local access points may lead to violations of classified information at the national level, as is the case of public administration [4].
Cyber-incidents lead to economic losses, risks to safety, reputational damage, and violation of personal rights such as privacy, right-to-be-anonymous, and proper use of personal or sensitive data. The effect of these damages is not always measurable, due to the intangible nature of reputational and social effects and the lack of high-quality data, which are often kept secret to prevent additional reputational issues [12].
Cyber-risk assessment refers to a set of methods, standards, approaches, and good practices aimed at informed decision-making in the management of cyber domains, in particular, cyber-vulnerabilities. Currently, the standards of cyber-risk assessment are based on severity levels assessed by institutions, such as the National Institute of Standards and Technology (NIST) and national Computer Security Incident Response Teams (CSIRTs). While NIST provides a harmonized approach to evaluate the general impact of a cyber-vulnerability, contextual factors (e.g., exposure of a vulnerable technology and its identifiability) may influence attackers' perception of exploitability and, hence, affect the actual risk. These factors often arise in reserved reports, data collections, or expert evaluations that are not disclosed. In addition to this limited knowledge, multiple cyber-vulnerabilities can be relevant to individuals and organizations, which have to prioritize them in order to better allocate their cyber-security (economic, temporal, professional) resources based on accessible information and personal criteria.
We propose an alternative statistical framework to address the need for flexible and interpretable models relating to cyber-vulnerability assessment and their prioritization, supporting in this way adaptive decision-making. Flexibility is required to allow different users to adapt the framework based on the information they have, e.g., adding explanatory variables or considering different response variables based on their own ranking. Interpretability is needed to prompt appropriate interventions, i.e., counteractions to fix vulnerabilities or prevent their exploitation.
The paper is organized as follows: the notions on cyber-security and cyber-vulnerabilities that are relevant for this work are described in the following Section 2, where we also describe the main databases (Subsections 2.1-2.2) that are used for the specification and the validation of the proposed model. Section 3 introduces the statistical models used in the paper, with special reference to rank transform and mid-quantile regression models. In particular, the real data are used in Subsection 3.2 to briefly discuss continuous quantile regression as a means to prioritize explanatory variables, interpreted as lines of interventions, rather than cyber-vulnerabilities themselves. Our main proposal is presented and motivated in Section 4, also discussing the appropriate index to assess performance and model comparison suited for our research questions in the cyber-risk domain. In Section 5, following a descriptive analysis of the data, we summarize and comment the results of simulations and the exploration of the real dataset in terms of prioritization of cyber-vulnerabilities. After the discussion of the outcomes in Section 6, conclusions are drawn in Section 7, where we point out future work and applications of the present proposal.
Cyber-risk and Data sources
Cyber-risk assessment is a well-recognized problem that plays a key role in different domains, e.g., safety and security in cyber-physical systems [24], industry [7], and the management of critical infrastructures (e.g., energy supply systems [23]). Different approaches have been proposed in the literature [12, 13] to foster proper cyber-risk assessment and cyber-security analyses. On the other hand, peculiar aspects of operational scenarios may limit the efficacy of cyber-risk modelling [23] and, furthermore, require an appropriate trade-off between the validity of the assessment model and its usability for decision-making.
In a cyber-risk context, one should distinguish between cyber-vulnerability and cyber-incident: a vulnerability is an access point, but this does not necessarily entail a cyber-incident, that is, an actual (intentional or not) damage to a digital system. An _exploit_ is defined as software that can be directly executed to perform a cyber-attack. We talk about a \(0\)_-day_ when the vulnerability has not been disclosed before and there are no available solutions to patch it.
Cyber-risk assessment aims at an informed use of digital resources, as proactive defense is subject to bounded resources: time constraints, verification costs, specific effort for proprietary software, limits to automation, contextual security analysis in highly connected systems.
Proactive defense aims at increasing resilience at the individual and network level (preventing criticalities), preserving individuals and community rights in the cyber-space (privacy, GDPR compliance, right-to-be-anonymous), and supporting efficient management of resources and ICT maintenance. There exist several techniques to enhance cybersecurity: Vulnerability assessment (VA); Penetration testing (PT); Static analysis of applications; Dynamic analysis of applications; (semi-)automatic tools and Deep Learning applications. Regarding the latter, we point out that Deep Learning techniques are gaining increasing attention [8]. However, they do not provide complete protection against malware attacks: in a recent work [5], we showed that CNN classification can be deceived by masking malware with a goodware component to bypass automatic controls, also suggesting feasible counteractions. This approach is called _polymorphism_ and is a software property often used in cyber guerrilla attacks [27].
Each known cyber-vulnerability is uniquely identified by a Common Vulnerability Exposure (CVE) code: within datasets provided by the NIST, the CVE acts as a primary key to retrieve both the impacts (in terms of CIA dimensions) and the severity assessment of relevant intrinsic characteristics of the vulnerability. These features will be described in detail in the next subsection. Focusing on cyber-risk in relation to cyber-vulnerabilities, the current approach is driven by appropriate scoring systems, in line with NIST's methodology [25, 15].
_Exposure_ refers to the number of exposed hosts (devices or systems) for a given CVE, i.e., devices or systems where a given vulnerability has been recognized. Exposure, together with exploit availability and cost, contributes to defining targets and feasible attacks.
### Data Sources for cyber-risk analysis
Several databases can be used to assess the cyber-security of a digital system. Among the most widely used by practitioners, there are:
* The NIST assesses vulnerabilities' severity in terms of data impact dimensions (Confidentiality, Integrity, Availability) and 3 additional technical features describing the accessibility prompted by the cyber-vulnerability, namely, Access Vector (AV), Access Complexity (AC), and Authentication (Au). The severity assessments of these six components compose the _attack vector_1. Footnote 1: [https://nvd.nist.gov/vuln/search](https://nvd.nist.gov/vuln/search)
* The Shodan database2 reports exposed hosts or IP addresses affected by known vulnerabilities, which may represent a relevant driver for attackers' intervention. The Shodan database can be queried by specifying a CVE and the Country of exposed hosts. Data are collected by the Shodan monitor platform combining different techniques such as crawling, IP lookups, and metadata analysis. Footnote 2: [https://exposure.shodan.io](https://exposure.shodan.io)
* Reported exploits for CVEs can be extracted from ExploitDB3. Footnote 3: [https://www.exploit-db.com/](https://www.exploit-db.com/)
* Information about exploits can be further refined from VulnDB4, a database that collects information on the price range of exploits associated with a CVE. The fields extracted from VulnDB include the price range of 0-day, the price at the time of querying, and the exploitability Footnote 4: [https://vuldb.com/](https://vuldb.com/)
* Tenable5 _interprets_ CVSS scores and assigns an ordinal risk factor through threat/vulnerability analyses. Footnote 5: [https://www.tenable.com/cve/search](https://www.tenable.com/cve/search)
Python scripts were used to automatically extract information from all the aforementioned databases through APIs. Specifically, starting from CVEs acquired from Shodan, we obtained NIST's attack vectors, exploits from ExploitDB and VulnDB, and Tenable's risk factors.
Running these Python scripts, the final dataset for model validation consists of \(n=714\) units.
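As an illustration of this extraction step, the following sketch shows how such queries could be issued in Python; the endpoints, parameters, and returned fields are indicative placeholders only (actual APIs, rate limits, and authentication must be checked against each provider's documentation), and the API key and CVE list are assumed to be available.

```python
import requests

# Illustrative endpoint placeholders -- not necessarily the exact URLs and
# parameters used in this work; consult each provider's API documentation.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
SHODAN_COUNT_URL = "https://api.shodan.io/shodan/host/count"

def attack_vector(cve_id):
    """Retrieve the record holding NIST's CVSS components (C, I, A, AV, AC, Au) for one CVE."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()          # the CVSS v2 vector is then parsed from this record

def exposure(cve_id, api_key, country="IT"):
    """Count the hosts exposed to a CVE in a given country (Shodan-style query)."""
    query = f"vuln:{cve_id} country:{country}"
    resp = requests.get(SHODAN_COUNT_URL, params={"key": api_key, "query": query}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("total", 0)

# dataset = [{"cve": c, **attack_vector(c), "N_exp": exposure(c, API_KEY)} for c in cve_list]
```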
### Data Description
The above data manipulation procedure leads to a dataset with the following variables:
1. categorical (ordinal or numeric) regressors extracted from vulnerability characteristic assessment (attack vector);
2. Exposure provides count data, but their variety lets us consider a continuous approximation of this variable.
3. For each CVE, the existence or the absence of an exploit is encoded in a Boolean variable;
4. risk factors are ordinal response conditioned by different types of explanatory variables.
For the present investigation, we select \(p=7\) explanatory variables returned by the procedure described above, namely, the variables \(X_{\text{C}}\), \(X_{\text{I}}\), \(X_{\text{A}}\), \(X_{\text{AV}}\), \(X_{\text{AC}}\), \(\log N_{\text{exp}}\), and \(q_{\text{expl}}\) whose
interpretation is summarized in Table 2.1. They will be related to an ordinal variable assessing the severity of a cyber-vulnerability, here represented by Tenable's risk factor.
We stress that, beyond Tenable's evaluation, the present model can involve different risk factors by experts of individual organizations, which makes the model suited to the specific objectives of decision-makers and fosters the scalability of the statistical framework.
As a final remark, exposure is one of the factors that contribute to the evaluation of severity in this model and therefore appears as a regressor in this work; however, exposure could also be considered as a response variable in order to highlight non-trivial grouping effects and relate them to the intrinsic features of vulnerabilities. We will briefly address this aspect at the end of this work, devoting a separate work to its detailed investigation.
## 3 Methodologies
In order to provide an answer to the research questions mentioned in the Introduction, we specify the methodological bases of our proposal and the models that are used for its comparison.
Before discussing the two specific models addressed in this work in the cybersecurity domain, we briefly review the ordered logit model as a benchmark of regression with ordinal responses
\begin{table}
\begin{tabular}{|c|l|l|l|l|}
\hline
**Source** & **Variable** & **Type** & **Interpretation** & **Values** \\
\hline
**NIST** & \(X_{\text{C}}\) & Discrete ordinal & Severity for Confidentiality & none: 0; partial: 0.275; complete: 0.660 \\
\hline
**NIST** & \(X_{\text{I}}\) & Discrete ordinal & Severity for Integrity & none: 0; partial: 0.275; complete: 0.660 \\
\hline
**NIST** & \(X_{\text{A}}\) & Discrete ordinal & Severity for Availability & none: 0; partial: 0.275; complete: 0.660 \\
\hline
**NIST** & \(X_{\text{AV}}\) & Discrete ordinal & Type and severity of the access vector & requires local access: 0.395; local network accessible: 0.646; network accessible: 1 \\
\hline
**NIST** & \(X_{\text{AC}}\) & Discrete ordinal & Type and severity of access complexity & high: 0.35; medium: 0.61; low: 0.71 \\
\hline
**NIST** & \(X_{\text{Au}}\) & Discrete ordinal & Type and severity of authentication & no authentication required: 0.704; single instance required: 0.56; multiple instances required: 0.45 \\
\hline
**Shodan** & \(N_{\text{exp}}\) & Count data & Number of exposed hosts & integers \\
\hline
**ExploitDB** & \(q_{\text{expl}}\) & Binary / count data & Existence (binary) or number (count) of exploits & \(\{0,1\}\) (binary); integers (count) \\
\hline
**VulnDB** & \(p_{\text{expl}}\) & Discrete ordinal & State and price range of the exploit & “Not defined”, “Unproven”, “Proof-of-Concept”, “Highly functional” \\
\hline
**Tenable** & \(Y\) & Discrete ordinal & Risk factor following threat/vulnerability analysis & “Low”, “Medium”, “High”, “Critical” \\
\hline
\end{tabular}
\end{table}
Table 2.1: Variable main attributes and their interpretation for statistical modelling. For each set of variables, the data source is provided in the leftmost column.
[21]. As well known, it is a generalized linear model suited to deal with cumulative probability distributions for ordinal responses conditioned to explanatory variables. Specifically, let \(y_{1},\ldots,y_{n}\) be a sample of \(n\) ordinal responses, and \(\mathbf{X}\) be a set of explanatory variables. The model aims at describing the effect of covariates on the odds
\[\log\frac{P(y\leq h|\mathbf{X})}{P(y>h|\mathbf{X})}=\alpha_{h}-\beta\cdot \mathbf{X},\quad h_{1}\leq h_{2}\Leftrightarrow\alpha_{h_{1}}\leq\alpha_{h_{2}}. \tag{3.1}\]
In this way, the log-ratio of the odds on the left-hand side depends on the ordinal level \(h\) only through the scale coefficient \(\alpha_{h}\), and not through the variables \(\mathbf{X}\) (proportional odds assumption). This model is well adapted, among all, to qualitative (ordinal) assessments, including the severity of cyber-vulnerabilities associated with the risk factor described in Subsection 2.1.
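As a reference implementation for later comparisons, the proportional-odds model (3.1) can be fitted, for instance, with the OrderedModel class available in recent versions of statsmodels; the snippet below is only a minimal sketch (variable and column names are ours).

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_ordered_logit(y, X):
    """Fit the proportional-odds (ordered logit) model of (3.1).

    y: ordinal severity levels ("Low" < "Medium" < "High" < "Critical"),
    X: dataframe of explanatory variables (attack-vector components, exposure, ...).
    """
    y_ord = pd.Series(pd.Categorical(y, categories=["Low", "Medium", "High", "Critical"],
                                     ordered=True))
    model = OrderedModel(y_ord, X, distr="logit")
    result = model.fit(method="bfgs", disp=False)
    return result  # result.params contains the thresholds alpha_h and the slopes beta

# Example: fit_ordered_logit(df["risk_factor"], df[["X_C", "X_I", "X_A", "X_AV", "X_AC"]])
```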
We will use the ordered logit model as the data generation mechanism in our simulation, comparing it with two alternative models. Indeed, despite the wide applicability of ordered logit or probit, more general approaches can be envisaged to overcome limitations from the potential violation of model assumptions, e.g., proportional odds in ordered logit.
Another motivation for new methodologies to deal with ordinal responses is the reduced interpretability of parameter estimates of GLMs with respect to simpler linear regression; this aspect is relevant in operational scenarios, where decision-makers should be able to interpret and quantify the impact of an explanatory variable without assuming background knowledge of the underlying statistical model. For this reason, the use of a linear regression model on rank-transformed ordinal responses in cyber-risk assessment was proposed by Giudici and Raffinetti [12], which we briefly present in the following.
### Rank Transform in Linear Regression
A recent approach of [12] involves a linear regression model for data regarding cyber-_incidents_ and is based on the rank transform of a \(n\)-dimensional ordinal variable \(Y\) with \(k\) levels, that is, the set of ranks for each observation with a given prescription to handle ties [14]; formally:
\[Y\text{ ordinal response}\to R(Y)\in\{r_{1},r_{2},\ldots,r_{k}\} \tag{3.2}\]
where the values of the rank transform are
\[r_{1}=1,\quad r_{h+1}=r_{h}+\#Y^{(-1)}(\{h+1\}),\quad h\in\{1,\ldots,k-1\} \tag{3.3}\]
and \(\#Y^{(-1)}(\{h+1\})\) denotes the cardinality of the components of \(Y\) that are the pre-image of \(h+1\). The fit of the regression model
\[R_{i}=\beta_{0}+\beta\cdot\mathbf{X}_{i}+\varepsilon_{i},\quad\varepsilon \sim\mathcal{N}(0,\sigma^{2}) \tag{3.4}\]
is compared with respect to the Rank Graduation Accuracy (RGA) [12]
\[\text{RGA}:=\sum_{i=1}^{n}\frac{n}{i}\cdot\left(\frac{1}{n\overline{y}}\cdot \sum_{j=1}^{i}y_{\hat{r}_{j}}-\frac{i}{n}\right)^{2} \tag{3.5}\]
where test data \(y\) are ranked using the estimated ranks \(\hat{r}\) obtained by fitting (3.4).
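For concreteness, the rank transform in (3.2)–(3.3) and the linear fit in (3.4) can be written in a few lines of Python; the sketch below follows the recursion in (3.3) literally (levels coded as integers \(1,\dots,k\)) and uses ordinary least squares, with all names chosen by us.

```python
import numpy as np

def rank_transform(y_levels):
    """Rank values r_1,...,r_k from the recursion in (3.3), mapped back to observations."""
    y = np.asarray(y_levels, dtype=int)
    k = y.max()
    counts = np.array([(y == h).sum() for h in range(1, k + 1)])
    r = np.empty(k)
    r[0] = 1.0
    for h in range(1, k):
        r[h] = r[h - 1] + counts[h]        # r_{h+1} = r_h + #Y^{-1}({h+1})
    return r[y - 1]                         # rank attributed to each observation

def fit_rank_regression(y_levels, X):
    """Ordinary least squares of the rank-transformed response on X, as in (3.4)."""
    R = rank_transform(y_levels)
    Xd = np.column_stack([np.ones(len(R)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(Xd, R, rcond=None)
    return beta, Xd @ beta                  # coefficients and estimated ranks
```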
As anticipated, the choice of model (3.2)-(3.4) in [12] is argued to provide more interpretable results supporting decision-making with respect to GLMs. On the other hand, we note that the use of linear regression with rank transform may not be suited to deal with cyber-vulnerabilities: contrary to cyber-incidents, which actually happened, vulnerabilities are subject to different types of uncertainty mentioned above, especially in the cyber-guerrilla context [27].
From the methodological perspective, this means that several assumptions underlying the linear regression model may not be fulfilled when dealing with cyber-vulnerabilities. In particular, linear models rely on the normality assumption for the residuals, which may not be met in networks of digital systems; in fact, evidence shows that some relevant features of data breach datasets are well described by heavy-tail distributions [9]. Even the homoscedasticity assumption may not be fulfilled; class unbalancing, which we shall observe in cyber-vulnerability data, makes the linear model more sensitive to this violation, while quantile regression does not assume homoscedasticity.
A final remark comes from a practical requirement: we aim at providing a prioritization method that is local, i.e., based on the severity assessment of individual vulnerabilities, in order to support decision-making based on available information; on the other hand, the values assumed by the rank transform are informative only relative to the number of observations, which is well-defined for cyber-incidents but may not be representative of the full set of vulnerabilities, as argued above.
### Quantile Regression: remarks for cyber-risk assessment
Both the ordered logit and linear regression models rely on assumptions that may not hold in real datasets; in the specific cyber-security domain, such hypotheses may not even be verifiable, due to the already mentioned confidentiality and restrictions on data sharing. For this reason, it is appropriate to consider distribution-free approaches that make the analysis more robust against violations of statistical assumptions, which leads us to quantile regression.
Let \(Q_{\tau}:=\inf_{y}\{y:\,\tau\leq F(y)\}\) be the \(\tau\)-th quantile for a RV \(y\) with CDF \(F\). Quantile regression estimates \(Q_{\tau}\) conditioning on \(k\) regressors ([16])
\[Q_{\tau}(y_{i}|\mathbf{X}_{i},\beta)=\mathbf{X}_{i}^{\mathsf{T}}\cdot\beta( \tau),\quad i\in\{1,\ldots,n\}. \tag{3.6}\]
Estimates \(\hat{\beta}(\tau)\) come from the minimization of the loss function
\[\hat{\beta}(\tau) := \operatorname*{argmin}_{\beta\in\mathbb{R}^{k}}\sum_{i=1}^{n} \varrho_{\tau}(y_{i}-\mathbf{X}_{i}^{\mathsf{T}}\cdot\beta),\] \[\varrho_{\tau}(u) := u\cdot(\tau-\mathbb{I}(u<0)) \tag{3.7}\]
where \(\mathbb{I}(X)\) is the characteristic function of \(X\subseteq\mathbb{R}\).
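For reference, the check loss in (3.7) and a fit at a given quantile level can be obtained as follows; this is a minimal sketch using the QuantReg implementation in statsmodels, and the variable names are ours.

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

def check_loss(u, tau):
    """Pinball/check loss rho_tau of (3.7)."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def fit_quantile(y, X, tau=0.5):
    """Estimate beta(tau) by minimising the check loss, cf. (3.6)-(3.7)."""
    Xd = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])  # intercept + regressors
    res = QuantReg(y, Xd).fit(q=tau)
    return res.params

# Example: compare prioritisations of the attack-vector attributes at two levels
# beta_med, beta_tail = fit_quantile(y, X, 0.5), fit_quantile(y, X, 0.89)
```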
In addition to increased robustness against model misspecification, the choice of the quantile regression leads to a new parameter \(\tau\) that naturally relates to the notion of Value-at-Risk (VaR) [6] (also see [24] for a discussion of VaR in the cybersecurity context), which is in line with the purposes of this work. Different estimates can arise from different choices of the quantile level,
which let us compare different rankings or prioritizations at different quantile levels by looking at parameters associated with the regressors. However, this aspect may lead to ambiguities if not properly linked to risk evaluation and decision-making.
We provide evidence of such ambiguity using the data described in Subsection 2.2, showing that a given prioritization protocol _for the regressors_ may generate incompatible ranking orderings due to the quantile crossing phenomenon. The following remark is relevant as information about the exposure of vulnerable hosts is often considered a measure of risk by itself, i.e., the response variable in a regression model. Thus, let us suppose that a decision-maker aims at estimating the diffusion of a given vulnerable technology, considering it as a proxy of the risk associated with the vulnerability. For instance, large exposure to a vulnerability may increase the estimated number of cyber-incidents and, hence, their combined impact.
Coefficient estimates given by (3.7) provide us with a criterion to rank the attributes that describe the severity of a cyber-vulnerability based on risk perception and acceptance (the quantile level \(\tau\)). The attributes correspond to some components of the attack vector provided by the NIST, namely, a subset \(\mathcal{I}\subseteq\{X_{\mathrm{C}},\ldots,X_{\mathrm{Au}}\}\). Being associated with categorical data, we introduce
\[\Pi:=\{(p,\ell):\,p\in\{X_{\mathrm{C}},\ldots,X_{\mathrm{Au}}\},\ell\in\{1, \ldots,L_{p}-1\}\} \tag{3.8}\]
where \(L_{p}\) is the number of modalities for the \(p\)-th variable; then, we adopt an ANOVA representation considering parameters \(\beta_{\pi}(\tau)\) for indicator variables \(X_{\pi}\) indexed by \(\pi\in\Pi\) (or \(\pi\in\mathcal{S}\) with \(\mathcal{S}\subseteq\Pi\)).
For each specification of the quantile level \(\tau\), the set of parameters estimated through (3.7) can be used to prioritize regressors based on their effects on the exposure. Formally: for each level \(\tau_{c}\) indexed by \(c\in\{1,\ldots,T\}\), estimate \(\beta\) via (3.7); then, order the elements in \(S\) based on the relation \(\prec_{\tau}\) defined on \(\Pi\) by
\[\pi_{1}\prec_{\tau}\pi_{2}\;\Leftrightarrow\;\hat{\beta}_{\pi_{1}}\leq_{\alpha} \hat{\beta}_{\pi_{2}},\quad\pi_{1},\pi_{2}\in S \tag{3.9}\]
where \(\leq_{\alpha}\) denotes that the difference \(\beta_{\pi_{2}}-\beta_{\pi_{1}}\), tested in the following reformulation of the regression model,
\[f(y|x)=\alpha_{0}+\beta_{\pi_{1}}(x_{\pi_{1}}+x_{\pi_{2}})+(\beta_{\pi_{2}}- \beta_{\pi_{1}})x_{\pi_{2}}+\sum_{\pi\in\mathcal{S}\setminus\{\pi_{1},\pi_{2} \}}\beta_{\pi}x_{\pi}. \tag{3.10}\]
is significantly greater than \(0\) at a given level \(\alpha\).
We illustrate this approach by estimating the contribution of the characteristics \(X_{\mathrm{A}}\) and \(X_{\mathrm{C}}\) at two different quantile levels: the statistics \(\max_{\pi\in\Pi}\{\hat{\beta}_{\pi}(\tau)\}\) can inform us on the attribute of the attack vector that, if positive, contributes the most to raise the \(\tau\)-quantile.
Comparing Tables 3.1 and 3.2, we see a ranking inversion following the change of the quantile level, from \(\tau=.5\) to \(\tau=.89\): parameters estimated for the "partial" levels in both Confidentiality (**C1**) and Availability (**A1**) provide different rankings depending on \(\tau\).
### Mid-Quantile Regression
The observation in the previous section leads us to quantile regression where the response itself assesses the risk or the severity of a vulnerability, so as to avoid potential inconsistencies due to ranking the attributes associated with the regressors.
Dealing with an ordinal response, we have to extend the quantile regression approach to discrete variables; to this purpose, we take advantage of _mid-quantile_ (MidQR hereafter) regression methods. Recent work by Geraci and Farcomeni [11] applies mid-quantile regression [22] to discrete data: starting from samples \((X_{i},Y_{i})\) where \(Y\sim\text{cat}(p_{h},1\leq h\leq k)\), \(\pi_{h}=\frac{1}{2}p_{h}+\sum_{\ell=1}^{h-1}p_{\ell}\), define the _mid-Cumulative Distribution Function_ \(G_{Y}(y):=p(Y\leq y)-\frac{1}{2}p(Y=y)\) and the _mid-quantile function_
\[H_{Y}(p)=\int_{0}^{1}\sum_{h=1}^{k}\left((1-\gamma)\cdot y_{h}+\gamma\cdot y_{h +1}\right)\cdot\delta\left((1-\gamma)\cdot\pi_{h}+\gamma\cdot\pi_{h+1}-p \right)d\gamma. \tag{3.11}\]
Estimators for unconditioned MidQR are obtained naturally, i.e., by substitution of the estimates in the expression of the mid-quantile function. Such estimators enjoy good asymptotic consistency and normality for the sampling distribution [20].
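As an unconditional illustration, the sample mid-CDF and the corresponding mid-quantiles can be computed directly, interpolating linearly between the points \((\pi_{h},y_{h})\) as in (3.11); the short sketch below is only meant to fix ideas.

```python
import numpy as np

def mid_cdf(sample):
    """Sample mid-CDF: G(y_h) = P(Y <= y_h) - 0.5 * P(Y = y_h)."""
    y = np.asarray(sample)
    values, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    pi = np.cumsum(p) - 0.5 * p          # mid-probabilities pi_h
    return values, pi

def mid_quantile(sample, prob):
    """Mid-quantile H(p): linear interpolation of y_h against pi_h, cf. (3.11)."""
    values, pi = mid_cdf(sample)
    # probabilities outside [pi_1, pi_k] are clamped to the extreme levels
    return np.interp(prob, pi, values)

# Example: mid_quantile([1, 1, 2, 2, 2, 3, 4], 0.5)
```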
When \(H_{h(Y)|X}(p)=\mathbf{X}^{\mathsf{T}}\cdot\beta(p)\) for a given link function \(h(\cdot)\), the estimate \(\hat{G}_{Y|X}(y|x)\) can be carried out using the non-parametric estimator encompassing both continuous and discrete predictors [18]
\[\hat{F}_{Y|X}(y|x)=\frac{n^{-1}\cdot\sum_{i=1}^{n}\mathbb{I}(Y_{i}\leq y)K_{ \lambda}(X_{i},x)}{\hat{\delta}_{X}(x)},\quad\hat{m}_{Y|X}(z_{j}|x):=\hat{F}_{ Y|X}(z_{j}|x)-\hat{F}_{Y|X}(z_{j-1}|x) \tag{3.12}\]
\begin{table}
\begin{tabular}{l c c c c c} \cline{2-6} & \multicolumn{1}{c}{**Estimate**} & \multicolumn{1}{c}{**SE**} & \multicolumn{1}{c}{**P\(>|\mathbf{t}|\)**} & \multicolumn{1}{c}{**[0.025,**} & \multicolumn{1}{c}{**,0.975]**} \\ \cline{2-6} \multirow{2}{*}{Intercept} & \(5.05\cdot 10^{4}\) & \(3.22\cdot 10^{3}\) & \(0.00\) & \(4.41\cdot 10^{4}\) & \(5.68\cdot 10^{4}\) \\ \hline \(C_{1}\) & \(-6.32\cdot 10^{3}\) & \(3.10\cdot 10^{3}\) & \(0.04\) & \(-1.24\cdot 10^{4}\) & \(-2.36\cdot 10^{2}\) \\ \hline \(C_{2}\) & \(-1.27\cdot 10^{4}\) & \(1.10\cdot 10^{4}\) & \(0.250\) & \(-3.43\cdot 10^{4}\) & \(8.92\cdot 10^{3}\) \\ \hline \(A_{1}\) & \(-2.03\cdot 10^{4}\) & \(3.42\cdot 10^{3}\) & \(0.00\) & \(-2.70\cdot 10^{4}\) & \(-1.36\cdot 10^{4}\) \\ \hline \(A_{2}\) & \(-2.44\cdot 10^{4}\) & \(1.02\cdot 10^{4}\) & \(0.02\) & \(-4.46\cdot 10^{4}\) & \(-4.34\cdot 10^{3}\) \\ \hline \end{tabular}
\end{table}
Table 3.2: Summary of quantile regression at level \(\tau=.89\).
\begin{table}
\begin{tabular}{l c c c c c} \cline{2-6} & \multicolumn{1}{c}{**Estimate**} & \multicolumn{1}{c}{**SE**} & \multicolumn{1}{c}{**P\(>|\mathbf{t}|\)**} & \multicolumn{1}{c}{**[0.025,**} & \multicolumn{1}{c}{**,0.975]**} \\ \cline{2-6} \multirow{2}{*}{Intercept} & \(5.05\cdot 10^{4}\) & \(3.22\cdot 10^{3}\) & \(0.00\) & \(4.41\cdot 10^{4}\) & \(5.68\cdot 10^{4}\) \\ \hline \(C_{1}\) & \(-6.32\cdot 10^{3}\) & \(3.10\cdot 10^{3}\) & \(0.04\) & \(-1.24\cdot 10^{4}\) & \(-2.36\cdot 10^{2}\) \\ \hline \(C_{2}\) & \(-1.27\cdot 10^{4}\) & \(1.10\cdot 10^{4}\) & \(0.250\) & \(-3.43\cdot 10^{4}\) & \(8.92\cdot 10^{3}\) \\ \hline \(A_{1}\) & \(-2.03\cdot 10^{4}\) & \(3.42\cdot 10^{3}\) & \(0.00\) & \(-2.70\cdot 10^{4}\) & \(-1.36\cdot 10^{4}\) \\ \hline \(A_{2}\) & \(-2.44\cdot 10^{4}\) & \(1.02\cdot 10^{4}\) & \(0.02\) & \(-4.46\cdot 10^{4}\) & \(-4.34\cdot 10^{3}\) \\ \hline \end{tabular}
\end{table}
Table 3.1: Summary of quantile regression at level \(\tau=.5\).
where \(K_{\lambda}(X_{i},x)\) is a kernel function with bandwidth \(\lambda\). Estimates of coefficients \(\beta\) follow from the minimization of the following quadratic loss function
\[\hat{\beta}(p)=\operatorname*{argmin}_{\beta}\,\psi_{n}(\beta;p),\quad\psi_{n}(\beta;p):=n^{-1}\cdot\sum_{i=1}^{n}\left(p-\hat{G}_{Y|X}\big(h^{-1}(\mathbf{X}_{i}^{\intercal}\cdot\beta)\,\big|\,\mathbf{X}_{i}\big)\right)^{2}. \tag{3.13}\]
The estimation and fitting procedures can be carried out using the R package Qtools developed by the authors of [11].
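Outside R, the same construction can be emulated directly from (3.12)–(3.13); the sketch below uses a Gaussian product kernel, an identity link \(h\), and a fixed bandwidth, simplifications we introduce only for illustration (Qtools additionally interpolates the estimated mid-CDF, which is omitted here).

```python
import numpy as np
from scipy.optimize import minimize

def cond_mid_cdf(y0, x0, Y, X, lam=1.0):
    """Kernel estimate of G_{Y|X}(y0 | x0), cf. (3.12); X is an (n, p) float array."""
    w = np.exp(-0.5 * np.sum(((X - x0) / lam) ** 2, axis=1))   # Gaussian product kernel
    w = w / w.sum()
    F_hat = np.sum(w * (Y <= y0))
    m_hat = np.sum(w * (Y == y0))
    return F_hat - 0.5 * m_hat

def fit_midqr(Y, X, p=0.5, lam=1.0):
    """Minimise psi_n(beta; p) of (3.13) with identity link h."""
    Y, X = np.asarray(Y, float), np.asarray(X, float)
    Xd = np.column_stack([np.ones(len(Y)), X])

    def psi(beta):
        fitted = Xd @ beta
        G = np.array([cond_mid_cdf(fitted[i], X[i], Y, X, lam) for i in range(len(Y))])
        return np.mean((p - G) ** 2)

    beta0 = np.zeros(Xd.shape[1])
    return minimize(psi, beta0, method="Nelder-Mead").x
```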
MidQR will be used to deal with ordinal response variables representing severity levels, beyond NIST's assessment. Together with the statistical model, an appropriate index should be considered to evaluate its performance and compare it with other reference models. For this purpose, we adapt RGA (3.5) to the specific case of cyber-vulnerability assessment, analyzing the nature of the variables involved in the model in terms of criteria to be satisfied in the decision process. We discuss this aspect in more detail in the following section.
## 4 MidQR and a new performance index for cyber-risk estimation
For our purposes, MidQR will be used to provide estimates of the conditional quantile given a set of regressors that includes both intrinsic vulnerability characteristics and external variables (exposure and, additionally, exploit availability). Tenable's risk factor is our ordinal response variable of interest. It is worth remarking that quantile regression is also robust against class unbalance in discrete regressors, which is indeed observed in real data as we will see.
This section aims at contextualizing the ranking accuracy index within the decision problem under consideration in cyber-risk assessment.
Each quantitative index used in severity assessment should enjoy some invariance properties with respect to different attributions of quantitative values to each ordinal level, so that the accuracy only depends on their order. This requirement has a practical effect in regression models dealing with estimated ranking or, more generally, distributions of ordinal variables, such as the linear model for rank-transformed variables and MidQR. These models estimate the _conditional_ distributions, and the estimates concern the quantity
\[F_{Y|X}(Y\leq y|\mathbf{x})=\frac{P(Y\leq y\wedge\mathbf{X}=\mathbf{x})}{P( \mathbf{X}=\mathbf{x})}\]
where we focus on regressors \(\mathbf{X}\) with non-zero point mass. This quantity has an interesting interpretation, as it balances the impact (\(P(Y\leq y\wedge\mathbf{X}=\mathbf{x})\)) and the rarity (\(P(\mathbf{X}=\mathbf{x})\)) of the event. As mentioned in the introduction, the latter is subject to different forms of uncertainty: above all, subjectivity in the assessment of impact dimensions \(\mathbf{X}\) and, more importantly, uncertainty about the representativeness of the sample due to unknown vulnerabilities, 0-days, situational factors that affect the identification and the severity of a vulnerability, and non-disclosure policies that may under-report the occurrence of vulnerabilities. This paper does not aim at modelling these types of measurement error, whose treatment depends on the aspects of cyber-risk under consideration and on the level of investigation, as already mentioned at the beginning of Section 2. However, the model and the performance index we propose can be discussed in relation to the aforementioned sources of uncertainty: we postpone this discussion to Section 6. Here we stress that such uncertainty about the sample space itself, which here affects the distribution of the regressors and the outputs of the regression model, also arises in an algebraic framework as a way to deal with the inequivalence of micro and macro statistical descriptions of a physical system [1].
Specifying this remark to the ordinal assessment of cyber-vulnerabilities, the use of such quantitative values in (3.5) should take into account the nature of variables. The evaluation of (3.5) takes into account an algebraic structure (formally, the semiring \((\mathbb{N},+,\cdot,0,1)\) of natural numbers for rankings, or the ordered field \((\mathbb{R},+,\cdot,0,1)\) from the regression) that is not necessarily linked to the original ordinal variables assessing the severity of a cyber-vulnerability. This algebraic structure is an artifact suited to the regression model and, hence, to the estimated variables (let them be the rank-transform or the mid-quantile); the only effect derived from the ordinal variables is the order defining the summands in (3.5).
Starting from the previous comments, we now introduce a novel goodness-of-fit index more suited to accommodate the characteristics of cyber-vulnerability data; the example in (4.1) below illustrates its behaviour. We consider a _reverse_ RGA index, which we refer to as AGR, defined as \(\mathrm{RGA}(r_{\mathrm{tr}},r_{\mathrm{ext}})\), namely, we exchange the roles of the estimated ranking \(r_{\mathrm{ext}}\) and the "true" ranking \(r_{\mathrm{tr}}\).
To better appreciate the need for appropriate use of the RGA index for unconventional cyber-risk assessment, we consider the case of sub-sampling, i.e., known subsets of an unknown family of cyber-vulnerabilities. This emulates the partial information available regarding 0-days or unconventional cyber-incidents. For example, we can consider the following 5-dimensional rank vectors:
\[c_{\mathrm{ext}}:=(1,3,2,2.9,10),\quad c_{\mathrm{tr},1}:=(1,3,2,2,9),\quad c _{\mathrm{tr},2}:=(1,5,3,3,7) \tag{4.1}\]
where \(c_{\mathrm{ext}}\) comes from a given estimation procedure, while \(c_{\mathrm{tr},u}\), \(u\in\{1,2\}\), are two "true" rankings obtained from different knowledge about the state of a digital system and its sample space. Despite being different, the rankings \(c_{\mathrm{tr},1}\) and \(c_{\mathrm{tr},2}\) are consistent with the same attribution of ordinal levels: for the sake of concreteness, we can assume that the components of both \(c_{\mathrm{tr},1}\) and \(c_{\mathrm{tr},2}\) are generated by ranking the same ordinal assessment ("10","6","8","8","3"), where severity levels are ordered from "10" to "1". In this case, the differences between \(c_{\mathrm{tr},1}\) and \(c_{\mathrm{tr},2}\) can arise from the existence of other elements in the two ranked sample spaces, beyond the ones associated with the components of \(c_{\mathrm{tr},1}\) and \(c_{\mathrm{tr},2}\). The evaluation of \(\mathrm{RGA}(c_{\mathrm{ext}},c_{\mathrm{tr},u})\) for \(u\in\{1,2\}\) following the definition (3.5) is not invariant under changes of rankings that are generated by the same ordinal assessment. Indeed, we have
\[\mathrm{RGA}(c_{\mathrm{ext}},c_{\mathrm{tr},1})=0.5161\neq 0.3232=\mathrm{RGA} (c_{\mathrm{ext}},c_{\mathrm{tr},2}). \tag{4.2}\]
On the other hand, we find
\[\mathrm{AGR}(c_{\mathrm{ext}},c_{\mathrm{tr},1})=\mathrm{RGA}(c_{\mathrm{tr},1},c_{\mathrm{ext}})=0.5272=\mathrm{RGA}(c_{\mathrm{tr},2},c_{\mathrm{ext}})= \mathrm{AGR}(c_{\mathrm{ext}},c_{\mathrm{tr},2}). \tag{4.3}\]
It is immediate to see that the latter equality holds for any choice of \(c_{\rm ext}\) and of "true" rankings \(c_{\rm tr,1},c_{\rm tr,2}\) consistent with the same ordinal assessment.
This shows that the AGR index resolves the lack of invariance under sub-sampling in RGA. The favorable invariance of the AGR index under rank transformations that are compatible with the same underlying ordinal assessment is in line with Luce's axiom of Independence of Irrelevant Alternatives [19], while some algebraic conditions related to this type of symmetry have been discussed in [1, 2]. Practically, this invariance is required when dealing with partial information about the space of potential cyber-vulnerabilities, which is the general situation faced by a decision-maker, due to the occurrence of unknown vulnerabilities not exploited yet, 0-days, and _unconventional_ cyber-attacks, namely, cyber guerrilla [27].
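Both indices are straightforward to compute; the sketch below implements (3.5) and its reversed version AGR, with a stable sort so that tied ranks keep their original order (a tie-handling convention we assume here). On the vectors in (4.1) it returns the values reported in (4.2) and (4.3).

```python
import numpy as np

def rga(r_est, y_true):
    """Rank Graduation Accuracy (3.5): y_true re-ordered by increasing r_est."""
    r_est, y_true = np.asarray(r_est, float), np.asarray(y_true, float)
    n = len(y_true)
    y_sorted = y_true[np.argsort(r_est, kind="stable")]
    cum = np.cumsum(y_sorted) / y_sorted.sum()      # normalised partial sums S_i / (n * ybar)
    i = np.arange(1, n + 1)
    return np.sum((n / i) * (cum - i / n) ** 2)

def agr(r_est, r_true):
    """AGR: the reverse RGA, i.e. RGA with the roles of the two rankings exchanged."""
    return rga(r_true, r_est)

c_ext = [1, 3, 2, 2.9, 10]
c_tr1 = [1, 3, 2, 2, 9]
c_tr2 = [1, 5, 3, 3, 7]
print(rga(c_ext, c_tr1), rga(c_ext, c_tr2))   # approx. 0.516 and 0.323, cf. (4.2)
print(agr(c_ext, c_tr1), agr(c_ext, c_tr2))   # both approx. 0.527, cf. (4.3)
```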
In the following section, we move to the analysis of both simulated and real data.
## 5 Results
### Descriptive analysis of the dataset
Data extracted from the databases described in Section 2.1 select \(n=714\) cyber-vulnerabilities in Italy. The time span of the CVEs is 1999-2021: this choice does not significantly affect the assessment of impact and resources, since fixing and countermeasures procedures are generally available in short times and, hence, the lifetime of each CVE is limited. More comments in this regard will be provided in Section 6.
We note that each variable in the attack vector is characterized by manifest unbalancing among the different levels, as is shown in the corresponding figures.
When the response in a regression model is (or can be well approximated by) a continuous variable, unbalancing could make linear regression more sensitive to deviations from homoscedasticity. This is the case for the approach described in Subsection 3.2, where the response is the exposure of vulnerable hosts: it is easily checked from the QQ-plots that the residuals of the exposure \(N_{\rm exp}\) and of its log-transform \(\log(1+N_{\rm exp})\), considered as responses in a linear model with regressors \((X_{\rm C},X_{\rm I},X_{\rm A},X_{\rm AV},X_{\rm AC})\), show strong deviations from normality.
This remark also entails that linear regression would not satisfy its distributional assumptions when a proxy of cyber-risk such as exposure is used as the response. We also note that even the residuals of the "free model", i.e., the QQ-plot of the exposure \(N_{\text{exp}}\) itself, violate the normality assumption. The use of the transform \(N_{\text{exp}}\mapsto\log(N_{\text{exp}}+1)\) in the previous QQ-plots slightly reduces the deviation from normality; more importantly, it highlights multimodality in the distribution of exposure, as is manifest in the histograms depicted in Figure 13.
This suggests the need to go beyond linear models for an appropriate description of the external characteristics of cyber-vulnerabilities, starting from their intrinsic (attack vector) and extrinsic (exposure, exploits) features as regressors.
### Rankings and Mid-Quantile regression
#### Simulation study
We start by specifying the preliminary simulation study, which provides a general comparative analysis between the model presented in [12] and MidQR.
* We use \(n_{tr}=320\) units for the training and \(n_{test}\) units for testing the accuracy performance
Figure 12: QQ-plots of the theoretical (Normal) quantiles compared to the empirical quantiles of residuals of \(y=\log(1+N_{\text{exp}})\) derived from the exposure \(N_{\text{exp}}\) of cyber-vulnerabilities.
Figure 13: Histograms for the empirical distributions of the exposure \(N_{\text{exp}}\) compared to \(\log(1+N_{\text{exp}})\). The corresponding continuous approximations (red dashed lines) highlight multimodality.
of the models. We start with a response variable having \(k=4\) levels, in line with Tenable's risk factor that is employed in the analysis of real data. However, we also test \(k\in\{3,6,8\}\) to evaluate the behavior and the performance of the different models when the number of levels of the response variable changes.
* Two continuous and two discrete explanatory variables are considered, each of the latter having three categories. This induces \(P:=2+2\cdot(3-1)=6\) regressors after moving to ANOVA variables.
* Following the generation of the so-specified variables, we consider \(\alpha_{h}\), \(h\in\{1,\ldots,k-1\}\) and \(\beta_{p}\), \(p\in\{1,\ldots,P\}\) parameters to obtain the corresponding probabilities based on the ordered logit model (3.1).
* This scheme is iterated to obtain \(n_{iter}=100\) samples of the response variable \(Y\). In this way, we get:
* coefficient estimates;
* the mean, over the simulation runs, of the standard error (SE) estimates for each coefficient. For midQR, we adapted a function in Qtools to overcome computational issues in the estimation of the conditional (mid-)CDF, which involves the kernel method mentioned in [11] and based on [17]. Specifically, we acted on the estimation of the covariance matrix of the coefficients, making its computation compatible with cases where the quantile level lies outside the range of the sample mid-CDF. However, the outcomes of this procedure, which is analogous to censoring, may lead to an overestimation of the SE obtained from the kernel method. For this reason, we also present two additional indicators providing information on the SE, which are defined below.
* "Regular" Standard Error (Reg.SE) of each parameter, which is defined as the average SE over the simulation runs where the parameter is significant at a given level (here, 0.05).
* Monte Carlo Standard Error (MCSE), that is the standard error calculated from the coefficient estimates.
* % of iteration runs where a given parameter is statistically significant at level 0.05. The analyses compare the three models under consideration, namely, the data-generating model (ordered logit), linear regression for rank-transformed variables, and mid-quantile regression with \(\tau\in\{0.1,0.3,0.5,0.7,0.9\}\).
* Finally, for each iteration, the RGA and AGR indices are evaluated on the test dataset.
The same analysis is subsequently carried out with the real dataset, in order to compare the relative performances of linear regression for rank-transformed variables and mid-quantile regression based on real evidence.
The use of both continuous and discrete regressors mimics the occurrence of exposure (considered continuous) and attack vector components (discrete variables). We generate
\[\mathbf{X}_{(cont)}\sim\mathcal{N}(\mu,\sigma),\quad\mathbf{X}_{(cat)}\sim p( \pi_{1},\pi_{2}),\quad\forall h\in\{1,\dots,k\}:\,p_{i,h}=\frac{\exp(\beta_{ \mathrm{true},h}\cdot\mathbf{X}_{i})}{\sum_{\ell=1}^{k}\exp(\beta_{\mathrm{ true},\ell}\cdot\mathbf{X}_{i}))} \tag{5.1}\]
where \(\mathcal{N}(\mu,\sigma^{2})\) is the normal distribution with mean \(\mu=0\) and variance \(\sigma^{2}=10\); \(p(\pi_{1},\pi_{2})\) is the categorical distribution with three support points associated with probability weights \(\pi_{1},\pi_{2},1-\pi_{1}-\pi_{2}>0\), in particular, we choose \(\pi_{1}=0.7\). Multiple simulation runs at different choices of \(\beta_{\mathrm{true}}\) have been performed with 3 or 5 quantile levels per simulation.
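To make the data-generating step explicit, the following sketch reproduces (5.1) with two Gaussian and two three-level categorical regressors; the weight \(\pi_{2}\) and the value of \(\beta_{\mathrm{true}}\) in the example call are placeholders of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, beta_true, mu=0.0, sigma2=10.0, pi=(0.7, 0.2, 0.1)):
    """Generate (X, y) as in (5.1); beta_true has shape (k, P+1), including intercepts."""
    x_cont = rng.normal(mu, np.sqrt(sigma2), size=(n, 2))
    x_cat = rng.choice(3, size=(n, 2), p=pi)
    # ANOVA (dummy) coding of the two categorical variables: 2 indicators each
    dummies = np.column_stack([(x_cat[:, j] == lvl) for j in range(2) for lvl in (1, 2)])
    X = np.column_stack([np.ones(n), x_cont, dummies.astype(float)])
    logits = X @ beta_true.T                                  # shape (n, k)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)                 # p_{i,h} of (5.1)
    y = np.array([rng.choice(len(p_i), p=p_i) + 1 for p_i in probs])
    return X, y

# Example with k = 4 levels and P = 6 regressors (plus intercept):
# X, y = simulate(400, beta_true=rng.normal(size=(4, 7)))
```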
#### Simulation results
We start presenting the results of simulations where the response variable contains \(k=4\) possible levels: as mentioned above, this situation is in line with the real dataset structure since the Tenable risk factor involves \(k=4\) levels too.
In Tables 5.1-5.2 we show the outcomes from two different scenarios: the parameters defining the theoretical distribution from the Ordered Logit model can be tuned to obtain the uniform probability distribution on the \(k\) response levels (Table 5.1) or they can be chosen generically; in the latter case, we can get a non-uniform distribution (Table 5.2). In the tables, we report the estimates of the model parameters (Est) and the corresponding standard errors (SE) averaged over 100 simulations. We also report the Monte Carlo standard error (MCSE) in order to evaluate the stability of the estimates over the simulations. For LinReg and MidQR, we report the percentage of times in which the parameters resulted significant at 5% level (% sign.).
The resulting RGA and AGR indices are reported in Table 5.3. To provide an informative view of the RGA and AGR indices, we present the boxplots associated with each model in Figure 5.4.
Then, we move to a different number of levels in order to better assess the behavior of the different methods in different decision scenarios. We address this aspect by first considering \(k=3\): this is a typical scale in several operational or tactical decisions, where levels are generally interpreted as "low", "medium", and "high", respectively.
The outcomes of this set of simulations are presented in Table 5.4.
The corresponding RGA and AGR indices are shown in Table 5.5. Also in this case, we provide a graphical representation of these outcomes in Figure 5.5.
Finally, we complete the simulation study considering more than 4 levels in the response variable. Specifically, we report the results at \(k=6\) (Table 5.6) and \(k=8\) (Table 5.7), with the associated boxplots in Figure 5.6.
#### Real Dataset analysis
In parallel with the investigation of simulated data, we carry out the study of the dataset whose construction has been described in Sections 2.1-2.2. In particular, we will present the same type of indicators reported from simulations; however, here we stress that multiple datasets are constructed from the original one through its random splitting into a training set (\(n_{\mathrm{tr}}=664\)
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline & & \multicolumn{3}{c}{\(\mathbf{X_{3}}\)} & \multicolumn{3}{c}{\(\mathbf{X_{4}}\)} & \multicolumn{3}{c}{\(\mathbf{X_{1}}\)} & \multicolumn{3}{c}{\(\mathbf{X_{2}}\)} & \multicolumn{1}{c}{**Intercept**} \\ \cline{3-9} & & \multicolumn{1}{c}{\(\mathbf{X_{3}}\)} & \multicolumn{1}{c}{\(\mathbf{X_{4}}\)} & \multicolumn{1}{c}{**1**} & \multicolumn{1}{c}{**2**} & \multicolumn{1}{c}{**1**} & \multicolumn{1}{c}{**2**} & \\ \cline{3-9} & Est & -3.097 & 2.094 & 1.017 & 4.141 & -2.062 & 4.227 & \\ OrdReg & SE & 0.312 & 0.244 & 0.368 & 0.530 & 0.402 & 0.540 & \\ & MCSE & 0.032 & 0.029 & 0.033 & 0.052 & 0.042 & 0.050 & \\ \hline \multirow{4}{*}{LinReg} & Est & -37.012 & 24.156 & 14.856 & 44.762 & -27.017 & 46.337 & 98.235 \\ & SE & 2.947 & 2.818 & 7.173 & 7.107 & 7.280 & 7.302 & 6.823 \\ & MCSE & 0.230 & 0.235 & 0.566 & 0.626 & 0.651 & 0.544 & 0.489 \\ & \% sign. & 100.0\% & 100.0\% & 55.0\% & 100.0\% & 99.0\% & 100.0\% & 100.0\% \\ \hline \multirow{4}{*}{MidQR(\(\tau_{1}\))} & Est & -0.238 & 0.156 & 0.038 & 0.359 & -0.146 & 0.482 & 0.291 \\ & SE & 2.896 & 2.466 & 7.227 & 6.083 & 7.972 & 6.670 & 7.338 \\ & Reg.SE & 0.036 & 0.035 & N.D. & 0.086 & 0.092 & 0.090 & 0.089 \\ & MCSE & 0.002 & 0.002 & 0.004 & 0.007 & 0.007 & 0.007 & 0.007 \\ & \% sign. & 71.0\% & 71.0\% & 0.0\% & 70.0\% & 19.0\% & 71.0\% & 66.0\% \\ \hline \multirow{4}{*}{MidQR(\(\tau_{2}\))} & Est & -0.274 & 0.168 & 0.058 & 0.359 & -0.184 & 0.433 & 0.563 \\ & SE & 1.283 & 1.192 & 3.365 & 2.648 & 3.563 & 3.178 & 3.150 \\ \cline{1-1} & Reg.SE & 0.025 & 0.024 & 0.061 & 0.060 & 0.066 & 0.062 & 0.061 \\ \cline{1-1} & MCSE & 0.002 & 0.002 & 0.005 & 0.006 & 0.008 & 0.006 & 0.006 \\ & \% sign. & 71.0\% & 71.0\% & 12.0\% & 71.0\% & 57.0\% & 71.0\% & 71.0\% \\ \hline \multirow{4}{*}{MidQR(\(\tau_{3}\))} & Est & -0.270 & 0.163 & 0.046 & 0.300 & -0.188 & 0.344 & 0.827 \\ & SE & 705.709 & 340.703 & 372.360 & 1024.919 & 578.466 & 1078.914 & 520.001 \\ \cline{1-1} & Reg.SE & 0.022 & 0.021 & 0.058 & 0.056 & 0.061 & 0.056 & 0.057 \\ \cline{1-1} & MCSE & 0.002 & 0.002 & 0.004 & 0.005 & 0.007 & 0.005 & 0.006 \\ \cline{1-1} & \% sign. & 54.0\% & 54.0\% & 7.0\% & 54.0\% & 48.0\% & 54.0\% & 54.0\% \\ \hline \multirow{4}{*}{MidQR(\(\tau_{4}\))} & Est & -0.202 & 0.117 & 0.029 & 0.193 & -0.144 & 0.213 & 1.057 \\ & SE & 1.267 & 1.148 & 2.258 & 3.299 & 2.433 & 3.350 & 2.410 \\ \cline{1-1} & Reg.SE & 0.029 & 0.027 & N.D. & 0.067 & 0.077 & 0.067 & 0.074 \\ \cline{1-1} & MCSE & 0.001 & 0.002 & 0.003 & 0.004 & 0.006 & 0.004 & 0.005 \\ \cline{1-1} & \% sign. & 71.0\% & 70.0\% & 0.0\% & 66.0\% & 30.0\% & 67.0\% & 71.0\% \\ \hline \multirow{4}{*}{MidQR(\(\tau_{5}\))} & Est & -0.125 & 0.075 & 0.001 & 0.086 & -0.097 & 0.085 & 1.262 \\ \cline{1-1} & SE & 3.237 & 2.428 & 5.298 & 7.221 & 5.278 & 8.288 & 6.373 \\ \cline{1-1} & Reg.SE & 0.040 & 0.034 & N.D. & 0.077 & 0.094 & 0.073 & 0.100 \\ \cline{1-1} & MCSE & 0.001 & 0.001 & 0.002 & 0.003 & 0.004 & 0.003 & 0.004 \\ \cline{1-1} & \% sign. & 68.0\% & 46.0\% & 0.0\% & 2.0\% & 1.0\% & 1.0\% & 71.0\% \\ \hline \hline \end{tabular}
\end{table}
Table 5.1: Coefficient estimates from simulations with \(k=4\) levels for the response variable; parameters in the generative model are tuned in order to get the uniform probability distribution on the \(k\) possible response levels.
Figure 5.4: Boxplots for RGA and AGR when \(k=4\); both uniform and non-uniform probability distribution are considered starting from the data generating OrdLog model
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multicolumn{6}{c}{\(k=4\), uniform} & \multicolumn{6}{c}{\(k=4\), non-uniform} \\ \cline{2-9} \multicolumn{1}{c}{} & \multicolumn{2}{c}{**RGA**} & \multicolumn{2}{c}{**AGR**} & \multicolumn{2}{c}{**RGA**} & \multicolumn{2}{c}{**AGR**} \\ \cline{2-9} \cline{10-10} \multicolumn{1}{c}{} & Est & SD & Est & SD & Est & SD & Est & SD \\ \hline OrdLog & 2.517 & 0.496 & 2.823 & 0.507 & 5.889 & 0.897 & 6.494 & 0.723 \\ LinReg & 3.276 & 0.578 & 1.516 & 0.193 & 6.762 & 0.796 & 3.254 & 0.282 \\ MidQR(\(\tau_{1}\)) & 3.093 & 0.551 & 3.016 & 0.394 & 6.600 & 0.767 & 4.316 & 0.348 \\ MidQR(\(\tau_{2}\)) & 3.212 & 0.555 & 3.143 & 0.391 & 6.657 & 0.768 & 4.356 & 0.342 \\ MidQR(\(\tau_{3}\)) & 3.239 & 0.562 & 3.214 & 0.389 & 6.684 & 0.773 & 4.377 & 0.343 \\ MidQR(\(\tau_{4}\)) & 3.193 & 0.565 & 3.207 & 0.398 & 6.670 & 0.797 & 4.371 & 0.349 \\ MidQR(\(\tau_{5}\)) & 3.016 & 0.573 & 3.146 & 0.418 & 6.491 & 0.862 & 4.276 & 0.370 \\ Self & 4.299 & 0.614 & 4.299 & 0.614 & 8.614 & 0.677 & 8.614 & 0.677 \\ \hline \hline \end{tabular}
\end{table}
Table 5.3: RGA and AGR from simulations with \(k=4\) levels in the response variable; columns 2-5 are generated from a model tuned to produce uniform probabilities for the \(k\) levels in the response.
Figure 5.5: Boxplots for RGA and AGR when \(k=3\)
\begin{table}
\begin{tabular}{l l c c c c c c c} & & \multicolumn{2}{c}{\(\mathbf{X_{3}}\)} & \(\mathbf{X_{4}}\) & \multicolumn{2}{c}{\(\mathbf{X_{1}}\)} & \multicolumn{2}{c}{\(\mathbf{X_{2}}\)} & \multirow{2}{*}{**Intercept**} \\ \cline{4-9} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Est} & \multicolumn{1}{c}{-3.173} & 2.083 & 1.053 & 4.249 & -2.086 & 4.193 & \\ \cline{3-9} OrdReg & SE & 0.395 & 0.298 & 0.466 & 0.745 & 0.499 & 0.755 & \\ & MCSE & 0.038 & 0.028 & 0.050 & 0.072 & 0.042 & 0.082 & \\ \hline \multirow{3}{*}{LinReg} & Est & -23.122 & 15.755 & 9.877 & 28.192 & -17.554 & 24.575 & 74.764 \\ & SE & 1.825 & 1.827 & 4.439 & 4.732 & 4.609 & 4.568 & 4.152 \\ & MCSE & 0.199 & 0.168 & 0.379 & 0.403 & 0.395 & 0.346 & 0.418 \\ & \% sign. & 100.0\% & 100.0\% & 69.0\% & 100.0\% & 99.0\% & 100.0\% & 100.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{1}\))} & Est & -0.195 & 0.114 & 0.038 & 0.270 & -0.129 & 0.291 & 0.341 \\ & SE & 12.519 & 16.381 & 27.625 & 34.324 & 37.952 & 25.530 & 31.305 \\ \cline{1-1} & Reg.SE & 0.027 & 0.028 & N.D. & 0.072 & 0.072 & 0.071 & 0.070 \\ \cline{1-1} & MCSE & 0.002 & 0.002 & 0.003 & 0.004 & 0.004 & 0.005 & 0.004 \\ & \% sign. & 70.0\% & 69.0\% & 0.0\% & 70.0\% & 25.0\% & 70.0\% & 70.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{2}\))} & Est & -0.218 & 0.138 & 0.047 & 0.259 & -0.133 & 0.277 & 0.550 \\ & SE & 8.409 & 6.741 & 17.770 & 18.895 & 19.943 & 18.942 & 16.774 \\ \cline{1-1} & Reg.SE & 0.019 & 0.019 & 0.049 & 0.050 & 0.054 & 0.048 & 0.049 \\ \cline{1-1} & MCSE & 0.001 & 0.002 & 0.003 & 0.004 & 0.005 & 0.004 & 0.004 \\ \cline{1-1} & \% sign. & 70.0\% & 70.0\% & 8.0\% & 70.0\% & 50.0\% & 70.0\% & 70.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{3}\))} & Est & -0.206 & 0.134 & 0.056 & 0.219 & -0.133 & 0.222 & 0.765 \\ & SE & 753.120 & 344.070 & 171.833 & 819.444 & 253.610 & 970.775 & 573.024 \\ \cline{1-1} & Reg.SE & 0.019 & 0.018 & 0.046 & 0.046 & 0.050 & 0.042 & 0.045 \\ \cline{1-1} & MCSE & 0.002 & 0.002 & 0.003 & 0.004 & 0.005 & 0.004 & 0.004 \\ \cline{1-1} & \% sign. & 60.0\% & 60.0\% & 16.0\% & 60.0\% & 49.0\% & 60.0\% & 60.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{4}\))} & Est & -0.129 & 0.087 & 0.040 & 0.121 & -0.086 & 0.116 & 0.924 \\ \cline{1-1} & SE & 22.573 & 11.780 & 28.125 & 42.421 & 27.902 & 57.145 & 31.146 \\ \cline{1-1} & Reg.SE & 0.028 & 0.024 & N.D. & 0.058 & 0.061 & 0.053 & 0.061 \\ \cline{1-1} & MCSE & 0.001 & 0.001 & 0.002 & 0.003 & 0.004 & 0.003 & 0.003 \\ \cline{1-1} & \% sign. & 69.0\% & 67.0\% & 0.0\% & 43.0\% & 13.0\% & 25.0\% & 70.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{5}\))} & Est & -0.061 & 0.042 & 0.029 & 0.045 & -0.045 & 0.036 & 1.036 \\ \cline{1-1} & SE & 48.199 & 25.136 & 34.366 & 61.267 & 41.685 & 82.775 & 74.481 \\ \cline{1-1} & Reg.SE & 0.030 & 0.027 & N.D. & 0.060 & N.D. & N.D. & 0.119 \\ \cline{1-1} & MCSE & 0.001 & 0.001 & 0.002 & 0.002 & 0.003 & 0.001 & 0.002 \\ \cline{1-1} & \% sign. & 20.0\% & 10.0\% & 0.0\% & 1.0\% & 0.0\% & 0.0\% & 70.0\% \\ \hline \end{tabular}
\end{table}
Table 5.4: Coefficient estimates from simulations with \(k=3\) levels for the response variable.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{2}{c}{**RGA**} & \multicolumn{2}{c}{**AGR**} \\ \cline{2-5} & Est & SD & Est & SD \\ \hline OrdLog & 1.439 & 0.488 & 1.545 & 0.538 \\ LinReg & 2.203 & 0.667 & 0.865 & 0.169 \\ MidQR(\(\tau_{1}\)) & 2.113 & 0.677 & 2.733 & 0.487 \\ MidQR(\(\tau_{2}\)) & 2.193 & 0.677 & 2.871 & 0.473 \\ MidQR(\(\tau_{3}\)) & 2.162 & 0.677 & 2.883 & 0.470 \\ MidQR(\(\tau_{4}\)) & 2.082 & 0.667 & 2.848 & 0.470 \\ MidQR(\(\tau_{5}\)) & 1.785 & 0.616 & 2.631 & 0.468 \\ Self & 3.499 & 0.819 & 3.499 & 0.819 \\ \hline \hline \end{tabular}
\end{table}
Table 5.5: RGA and AGR from simulations with a low number \(k=3\) of levels for the response variable.
Figure 5.6: Boxplots for RGA and AGR when \(k=6\) or \(k=8\).
\begin{table}
\begin{tabular}{l l c c c c c c c} & & \multicolumn{2}{c}{\(\mathbf{X_{3}}\)} & \multicolumn{2}{c}{\(\mathbf{X_{4}}\)} & \multicolumn{2}{c}{\(\mathbf{X_{1}}\)} & \multicolumn{2}{c}{\(\mathbf{X_{2}}\)} & \multirow{2}{*}{**Intercept**} \\ \cline{4-9} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Est} & \multicolumn{1}{c}{-3.116} & \multicolumn{1}{c}{2.064} & \multicolumn{1}{c}{1.046} & \multicolumn{1}{c}{4.120} & \multicolumn{1}{c}{-2.074} & \multicolumn{1}{c}{4.094} & \\ OrdReg & SE & 0.237 & 0.179 & 0.306 & 0.407 & 0.335 & 0.394 & \\ & MCSE & 0.024 & 0.015 & 0.029 & 0.037 & 0.035 & 0.040 & \\ \hline \multirow{3}{*}{LinReg} & Est & -61.725 & 41.627 & 23.455 & 76.997 & -48.392 & 80.101 & 108.517 \\ & SE & 3.038 & 2.943 & 7.577 & 7.588 & 7.754 & 7.355 & 6.525 \\ & MCSE & 0.202 & 0.230 & 0.635 & 0.603 & 0.716 & 0.624 & 0.521 \\ & \% sign. & 100.0\% & 100.0\% & 89.0\% & 100.0\% & 100.0\% & 100.0\% & 100.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{1}\))} & Est & -0.347 & 0.217 & 0.007 & 0.387 & -0.203 & 0.532 & 0.366 \\ & SE & 0.821 & 0.717 & 2.078 & 2.096 & 2.573 & 1.958 & 2.207 \\ \cline{1-1} & Reg.SE & 0.033 & 0.033 & N.D. & 0.084 & 0.090 & 0.084 & 0.078 \\ \cline{1-1} & MCSE & 0.001 & 0.002 & 0.004 & 0.006 & 0.005 & 0.006 & 0.004 \\ \cline{1-1} & \% sign. & 89.0\% & 89.0\% & 0.0\% & 89.0\% & 58.0\% & 89.0\% & 89.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{2}\))} & Est & -0.342 & 0.230 & 0.075 & 0.404 & -0.254 & 0.518 & 0.597 \\ & SE & 0.388 & 0.395 & 0.984 & 0.993 & 1.054 & 0.944 & 0.935 \\ \cline{1-1} & Reg.SE & 0.023 & 0.022 & 0.056 & 0.058 & 0.063 & 0.055 & 0.051 \\ \cline{1-1} & MCSE & 0.001 & 0.002 & 0.004 & 0.005 & 0.005 & 0.005 & 0.004 \\ \cline{1-1} & \% sign. & 89.0\% & 89.0\% & 18.0\% & 89.0\% & 89.0\% & 89.0\% & 89.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{3}\))} & Est & -0.314 & 0.213 & 0.089 & 0.363 & -0.230 & 0.437 & 0.830 \\ \cline{1-1} & SE & 2.967 & 2.491 & 2.748 & 5.410 & 1.992 & 6.140 & 4.738 \\ \cline{1-1} & Reg.SE & 0.023 & 0.021 & 0.053 & 0.053 & 0.062 & 0.049 & 0.052 \\ \cline{1-1} & MCSE & 0.002 & 0.002 & 0.004 & 0.005 & 0.005 & 0.004 & 0.004 \\ \cline{1-1} & \% sign. & 81.0\% & 80.0\% & 31.0\% & 81.0\% & 83.0\% & 80.0\% & 85.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{4}\))} & Est & -0.253 & 0.173 & 0.080 & 0.285 & -0.192 & 0.337 & 1.077 \\ \cline{1-1} & SE & 0.320 & 0.293 & 0.689 & 0.675 & 0.695 & 0.688 & 0.555 \\ \cline{1-1} & Reg.SE & 0.026 & 0.025 & 0.066 & 0.064 & 0.075 & 0.057 & 0.064 \\ \cline{1-1} & MCSE & 0.001 & 0.002 & 0.003 & 0.004 & 0.005 & 0.003 & 0.004 \\ \cline{1-1} & \% sign. & 89.0\% & 89.0\% & 8.0\% & 89.0\% & 69.0\% & 89.0\% & 89.0\% \\ \hline \multirow{3}{*}{MidQR(\(\tau_{5}\))} & Est & -0.185 & 0.130 & 0.059 & 0.186 & -0.131 & 0.219 & 1.347 \\ \cline{1-1} & SE & 0.500 & 0.409 & 0.962 & 1.050 & 0.936 & 1.179 & 0.846 \\ \cline{1-1} & Reg.SE & 0.036 & 0.035 & N.D. & 0.086 & 0.103 & 0.075 & 0.090 \\ \cline{1-1} & MCSE & 0.001 & 0.002 & 0.003 & 0.004 & 0.005 & 0.003 & 0.004 \\ \cline{1-1} & \% sign. & 89.0\% & 89.0\% & 0.0\% & 58.0\% & 6.0\% & 88.0\% & 89.0\% \\ \hline \end{tabular}
\end{table}
Table 5.6: Coefficient estimates from simulations with \(k=6\) levels for the response variable.
and a test set (\(n_{\text{test}}=50\)). We generated 10 random extractions of test sets, whose complements give the associated training sets, in order to evaluate averaged parameter estimates, standard errors, and predictive performance indices; 16 quantile levels equally spaced between 0.1 and 0.9 are considered in this case.
We start from parameter estimates, which are shown in Table 5.9: here, the whole set of variables described in Section 2.2 is used to implement the regression models. Then, we restrict these models by considering only technical (\(X_{\text{AC}}\), \(X_{\text{AV}}\)) and contextual (exposure, exploit) variables; the corresponding outcomes are presented in Table 5.10.
Moving to the performance indices, both RGA and AGR for all the regression models under examination are reported in Table 5.11. In addition, we provide two graphical representations regarding the behavior of the predictive performance at different quantile levels: the boxplots in Figure 5.7 and the plots of average RGA and AGR for all the 16 quantile levels in Figure 5.8.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{6}{c}{Full set of regressors} & \multicolumn{4}{c}{Only technical regressors} \\ \cline{2-9} \multicolumn{1}{c}{} & \multicolumn{2}{c}{**RGA**} & \multicolumn{2}{c}{**AGR**} & \multicolumn{2}{c}{**RGA**} & \multicolumn{2}{c}{**AGR**} \\ \cline{2-9} \cline{10-10} & Est & SD & Est & SD & Est & SD & Est & SD \\ \hline OrdLog & 0.542 & 0.248 & 0.000 & 0.000 & 0.716 & 0.565 & 0.000 & 0.000 \\ LinReg & 1.006 & 0.839 & 0.042 & 0.025 & 1.006 & 1.005 & 0.042 & 0.028 \\ MidQR(\(\tau_{1}\)) & 0.982 & 0.724 & 0.358 & 0.172 & 0.825 & 0.764 & 0.130 & 0.059 \\ MidQR(\(\tau_{2}\)) & 0.929 & 0.837 & 0.328 & 0.150 & 0.786 & 0.649 & 0.127 & 0.065 \\ MidQR(\(\tau_{3}\)) & 0.926 & 0.786 & 0.283 & 0.121 & 0.733 & 0.389 & 0.129 & 0.062 \\ MidQR(\(\tau_{4}\)) & 0.916 & 0.735 & 0.253 & 0.117 & 0.786 & 0.559 & 0.153 & 0.102 \\ MidQR(\(\tau_{5}\)) & 0.909 & 0.728 & 0.207 & 0.116 & 0.812 & 0.475 & 0.162 & 0.084 \\ MidQR(\(\tau_{6}\)) & 0.950 & 0.795 & 0.208 & 0.122 & 0.839 & 0.479 & 0.159 & 0.084 \\ MidQR(\(\tau_{7}\)) & 1.008 & 0.821 & 0.215 & 0.142 & 0.881 & 0.579 & 0.171 & 0.099 \\ MidQR(\(\tau_{8}\)) & 1.034 & 0.845 & 0.204 & 0.139 & 0.899 & 0.625 & 0.175 & 0.107 \\ MidQR(\(\tau_{9}\)) & 1.082 & 0.865 & 0.203 & 0.147 & 0.932 & 0.659 & 0.177 & 0.111 \\ MidQR(\(\tau_{10}\)) & 1.085 & 0.894 & 0.197 & 0.147 & 0.945 & 0.734 & 0.182 & 0.116 \\ MidQR(\(\tau_{11}\)) & 1.163 & 1.001 & 0.199 & 0.160 & 0.937 & 0.762 & 0.180 & 0.117 \\ MidQR(\(\tau_{12}\)) & 1.147 & 0.979 & 0.181 & 0.137 & 0.966 & 0.824 & 0.181 & 0.114 \\ MidQR(\(\tau_{13}\)) & 1.191 & 1.008 & 0.174 & 0.125 & 0.974 & 0.829 & 0.175 & 0.108 \\ MidQR(\(\tau_{14}\)) & 1.159 & 1.050 & 0.159 & 0.106 & 0.989 & 0.844 & 0.179 & 0.111 \\ MidQR(\(\tau_{15}\)) & 1.132 & 1.094 & 0.151 & 0.096 & 1.005 & 0.830 & 0.185 & 0.117 \\ MidQR(\(\tau_{16}\)) & 0.905 & 0.798 & 0.146 & 0.086 & 1.017 & 0.824 & 0.185 & 0.107 \\ Self & 6.120 & 0.880 & 6.120 & 0.880 & 5.991 & 1.182 & 5.991 & 1.182 \\ \hline \end{tabular}
\end{table}
Table 5.11: RGA and AGR indices from real data analysis. Columns 2-5 refer to models with full set of regressors; columns 6-9 follow from the restriction to technical (AV, AC) and contextual (exposure, exploit) variables as regressors.
Figure 5.7: Boxplots for RGA and AGR from the real data analysis
Figure 5.8: Average RGA and AGR for the 16 quantile levels \(\tau\) under consideration. The \(y\)-intercepts of the dotted and dot-dashed lines represent the value of the index from the ordered logit and the linear regression on rank-transformed variables, respectively.
**AGR as an appropriate measure of predictive accuracy.** From simulations, we see that the data-generating models are generally associated with a higher AGR value, while their RGA is often worse than that of other models (see Figures 5.4, 5.5, and 5.6).
It is plausible that the specific model underlying the data generation process provides higher predictive performance when compared to other models. This criterion identifies AGR as a more appropriate performance index for our purposes, since it better distinguishes the data-generating model in terms of predictive capacity, as it is manifest from the above-mentioned figures.
In addition, AGR enjoys the invariance property under sub-sampling, as discussed in Section 4, which is desirable since the relative order between two vulnerabilities is not affected by other vulnerabilities in the sample. In this way, we can better prioritize the relative risk factors for the vulnerabilities under consideration, without incurring order reversal due to new vulnerabilities not previously detected. From a different perspective, such new information may be needed to update individual risk factors and adapt to the dynamic behavior of the cyber-space, as will be discussed in the following paragraph.
**Vulnerabilities vs. incidents: dynamical and subjective cyber-risk assessment.** We already pointed out the distinction between cyber-incidents and cyber-vulnerability, recalling that the analysis in [12] focuses on the former. This distinction is relevant for decision-makers, namely, cyber-security experts and ICT managers, Security Operational Centers - SOCs, National Agencies, _etc_. The analysis of cyber-incidents is fundamental for cyber-forensic activities, but the prevention of _new_ cyber-incidents in operational scenarios should use all fungible information to better manage the security resources and take appropriate counteractions.
This condition underlies a dynamic environment characterizing cyber-risk assessment, which affects both regressors and responses. Indeed, for each statistical unit identified by a CVE, both intrinsic characteristics (NIST's evaluation of the attack vector) and external variables (exposure, exploits) can change in time; furthermore, risk factors may vary due to internal priorities in the organization and the evolution of the overall digital system (new products, legislation, _etc_.).
Direct consequences of the dynamics of cyber-risk assessment concern the prioritization of fixing activities and the specification of subjective cyber-risk. Being fixing resource-expensive, decision-makers have to allocate their effort based on their current state of knowledge. The driver for such choices is the individual _risk perception_. While the present work uses Tenable's risk factor for the analysis, each decision-maker can customize the present risk assessment model (as well as the quantile level), adapting it in time to get new estimates and quantile effects, or comparing different risk factors attributions (e.g., derived from different criteria) in terms of their predictive power.
These observations mainly relate to cyber-security data and their usefulness for distinct decision-making stages, which led us to select the databases described in Section 2.1. Information granularity in data from cyber-incidents often does not suffice to extract useful insights on the current threats: as recalled in [12], the type of data that is likely to be accessed when analyzing cyber-incidents is rarely disclosed. This leads to data aggregation and censoring that may prevent cyber-security operational experts from prioritizing the current vulnerabilities, as is the
case in the classification of attack techniques reported in [12], where multiple types of attack are grouped together (e.g., SQL injection is a particular attack model upon which malware can be based, and malware can exploit one or more 0-days). This has to be interpreted as complementarity between the analyses on cyber-incidents, like the one conducted in [12], and the present one: they serve different phases (strategic, tactical, or operative) of a process with a common objective, and each phase should identify appropriate data for its scope.
**Implications of midQR on secure information disclosure.** As a consequence of the observations in the last paragraph, we draw attention to the information the individual decision-maker has, uses, and communicates about cyber-risk.
Agencies such as NIST share their evaluation through dedicated information channels: however, this information can be acquired by potential attackers too, which can use them to prioritize their own objectives. Indeed, resources are needed also by attackers (e.g., costs for exploit acquisition, time and effort for detection of vulnerable hosts, integration of multiple components to avoid countermeasures), and information on risk factors from different organizations can be useful to suggest relevant criticalities.
Our proposal addresses this issue in two ways: first, as already recalled, midQR enhances robustness against violation of assumptions in parametric methods; censored data on cyber-vulnerabilities, due to no-disclosure policies, may limit or distort the verification of such assumptions, which results in misleading analysis and results.
Secondly, AGR better identifies the added information content provided by the variables for the cyber-vulnerability data: referring to Table 5.11, two different models are considered, the full one (all the relevant variables in the dataset derived from Table 2.1 are involved) and a restricted one, where the "CIA" components of attack vectors are excluded. We note that LinReg does not distinguish these two models (up to the third significant digit), while midQR always does (except for midQR(\(\tau_{12}\)) using AGR).
Furthermore, restricting our attention to MidQR, we see that AGR is more sensitive than RGA with respect to the choice of the quantile level in terms of model discrimination: formally, let us consider the ratios
\[\varrho_{\text{RGA}}:=\frac{\text{RGA}_{\text{tech}}}{\text{RGA}_{\text{full} }},\quad\varrho_{\text{AGR}}:=\frac{\text{AGR}_{\text{tech}}}{\text{AGR}_{ \text{full}}} \tag{6.1}\]
of RGA and AGR evaluated for the technical and the full models, respectively. Excluding \(\tau_{16}\), for the remaining MidQR models we can observe \(\varrho_{\text{RGA}}\in[0.792;0.893]\), while \(\varrho_{\text{AGR}}\in[0.363;1.225]\).
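As a small worked check of Eq. (6.1) (assuming \(\tau_{1}\) denotes the first of the 16 quantile levels in Table 5.11), the ratios for MidQR(\(\tau_{1}\)) can be reproduced directly from the reported values:

```python
# Worked check of Eq. (6.1) with the MidQR(tau_1) row of Table 5.11.
rga_full, rga_tech = 0.982, 0.825   # RGA: full vs. technical-only regressors
agr_full, agr_tech = 0.358, 0.130   # AGR: full vs. technical-only regressors

rho_rga = rga_tech / rga_full       # ~0.840, inside [0.792, 0.893]
rho_agr = agr_tech / agr_full       # ~0.363, the lower end of [0.363, 1.225]
print(f"rho_RGA = {rho_rga:.3f}, rho_AGR = {rho_agr:.3f}")
```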
These observations confirm the utility of the combined use of RGA and AGR to get a more representative view of cyber-risk models and, in this specific case, of the potential value of information communicated when sharing risk assessment.
**Dependence of MidQR performance on \(k\).** A final remark regarding the comparison of RGA and AGR involves the number of chosen levels of the response variable: MidQR performs better when \(k\) is small (less than 6), as can be seen by comparing 5.4-5.5 with 5.6. In the latter case, AGR highlights a divergence between the data-generating model (OrdLog) and alternative models (LinReg or MidQR); on the other hand, RGA returns comparable performance for LinReg
and MidQR.
**SE of the estimates.** As remarked in the previous section, an arbitrary choice of the quantile level may lead to overestimating the parameter SE through the kernel approach based on [18]; this is confirmed in the simulations. When this overestimation happens, the remaining indices (i.e., the regular SE and the MCSE) provide a more informative picture of the sampling distribution.
**Real data and implications for cyber-threat intelligence.** While the different models considered in this work are comparable in terms of RGA performance on real data, using AGR we can see that OrdLog performs poorly, since the predicted values are restricted to the set \(\{1,\ldots,k\}\); when the dataset has small variability, the estimated values collapse to a typical value, which contains no information and drastically reduces the predictive performance. This also suggests a severe deviation from the OrdLog model assumptions in the present cyber-vulnerability dataset.
Another indirect test of the deviation of real data from the OrdLog model comes from the relative magnitude of RGA and AGR: in Tables 5.1, 5.2, 5.4, 5.6, and 5.7, which refer to data simulated starting from the ordered logit model, AGR is comparable with RGA (i.e., with the same order of magnitude). At low values of \(k\), especially at \(k=3\), AGR is larger than RGA when we focus on MidQR and on the data-generating model. On the other hand, real data lead to a different behavior, where the ratios AGR/RGA lie in \([0.1334;0.365]\) for the full model and in \([0.088;0.238]\) for the "technical" model. While these ratios are useful as an additional check of the deviation from the OrdLog model used in simulations, AGR and RGA indices for the same model should not be compared, as they measure different performance aspects of a given model.
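For instance, assuming the extremes of the stated interval come from the MidQR(\(\tau_{1}\)) and MidQR(\(\tau_{15}\)) rows of the full model in Table 5.11, the end points can be recovered as follows:

```python
# AGR/RGA ratios for the full model, taken from Table 5.11 (MidQR rows only).
full = {"tau_1": (0.982, 0.358), "tau_15": (1.132, 0.151)}  # (RGA, AGR)
for tau, (rga, agr) in full.items():
    print(tau, round(agr / rga, 4))   # 0.3646 and 0.1334, the end points of the range
```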
As a final observation, we stress that the choice of the quantile level plays an important role: this choice can be driven by the empirical distribution of severity levels, but it can also be seen as a latent trait to be estimated through Bayesian approaches.
## 7 Conclusion and Future Work
This work is a preliminary study on statistical modeling for threat intelligence, with particular attention to the information resources regarding cyber-vulnerabilities and to the effects of risk acceptance/aversion.
The statistical models adopted and adapted for the discussion of cyber-vulnerability assessment are complementary to other approaches developed for cyber-risk assessment based on cyber-incidents, e.g., [12]. In line with the objective of this paper, different models should be considered to highlight specific aspects, whose relevance for cyber-risk assessment depends on the context and the decision-maker's purposes.
The actual realization of cyber-attacks does not only rely on technical features represented by attack vectors, but also on different information sources that can promote or mitigate cyber-attacks. It is plausible that access to information plays a relevant role in this regard: as mentioned in [12], data on cyber-security are subject to limited disclosure and underestimation,
both to adhere to security standards and to prevent reputational loss. On the other hand, open data provided by Organizations may be used not only to prevent cyber-incidents but also to guide cyber-attackers.
Practical applications of this research mainly involve increased attention to information disclosure on cyber-vulnerabilities, leading to the exploration of effects of cyber-risk perception and acceptance to avoid indirect signals on relevant/critical cyber-vulnerabilities. Indeed, the knowledge of cyber-vulnerabilities may be of interest for (and known to) both attackers and defenders, so we focused on the perception of vulnerabilities since the ranking of vulnerabilities may drive attackers' decisions in terms of a trade-off between the use of resources and potential benefits. In this regard, future work will consider specific models for the quantitative assessment of latent traits in cyber-risk perception, as well as Bayesian approaches (e.g., global-local priors), to shrink weak signals and highlight relevant risk responses.
Finally, a deeper investigation is needed to explore the relationship between statistical (partial) ranking models, formal decision criteria, and sources of uncertainty that may give rise to multiple orders of priority in the cyber-security domain. A better understanding of this topic could support its integration with information-theoretic methods for the analysis of secure disclosure properties.
|
2303.07352 | Sequential Spatial Network for Collision Avoidance in Autonomous Driving | Several autonomous driving strategies have been applied to autonomous
vehicles, especially in the collision avoidance area. The purpose of collision
avoidance is achieved by adjusting the trajectory of autonomous vehicles (AV)
to avoid intersection or overlap with the trajectory of surrounding vehicles. A
large number of sophisticated vision algorithms have been designed for target
inspection, classification, and other tasks, such as ResNet, YOLO, etc., which
have achieved excellent performance in vision tasks because of their ability to
accurately and quickly capture regional features. However, due to the
variability of different tasks, the above models achieve good performance in
capturing small regions but are still insufficient in correlating the regional
features of the input image with each other. In this paper, we aim to solve
this problem and develop an algorithm that takes into account the advantages of
CNN in capturing regional features while establishing feature correlation
between regions using variants of attention. Finally, our model achieves better
performance in the test set of L5Kit compared to the other vision models. The
average number of collisions is 19.4 per 10000 frames of driving distance,
which greatly improves the success rate of collision avoidance. | Haichuan Li, Liguo Zhou, Zhenshan Bing, Marzana Khatun, Rolf Jung, Alois Knoll | 2023-03-12T17:43:32Z | http://arxiv.org/abs/2303.07352v1 | # Sequential Spatial Network for Collision Avoidance in Autonomous Driving
###### Abstract
Several autonomous driving strategies have been applied to autonomous vehicles, especially in the collision avoidance area. The purpose of collision avoidance is achieved by adjusting the trajectory of autonomous vehicles (AV) to avoid intersection or overlap with the trajectory of surrounding vehicles. A large number of sophisticated vision algorithms have been designed for target inspection, classification, and other tasks, such as ResNet, YOLO, etc., which have achieved excellent performance in vision tasks because of their ability to accurately and quickly capture regional features. However, due to the variability of different tasks, the above models achieve good performance in capturing small regions but are still insufficient in correlating the regional features of the input image with each other. In this paper, we aim to solve this problem and develop an algorithm that takes into account the advantages of CNN in capturing regional features while establishing feature correlation between regions using variants of attention. Finally, our model achieves better performance in the test set of L5Kit compared to the other vision models. The average number of collisions is 19.4 per 10000 frames of driving distance, which greatly improves the success rate of collision avoidance.
Collision Avoidance, Computer Vision, Autonomous Driving, Trajectory Prediction, Sequential Spatial
## I Introduction
Over many years, researchers have focused on how to move vehicles from assisted driving to more intelligent autonomous driving. Due to the iteration of intelligent hardware and the improvement of chip computing power, the large amount of data collected by sensors can be quickly converted and fed into models to make decisions. In the driving process, safety is the first consideration for users and researchers; therefore, how AVs should avoid collisions has become a top priority. Concepts such as probabilistic methods (e.g., Markov chains [1] and Monte Carlo [2]), safety distance-based control methods [3], and trajectory prediction methods [3] have been designed in recent years to cope with complex traffic conditions. In terms of vision, CNNs [4] have made outstanding contributions and have been applied to a large number of road condition inspection tasks due to their excellent regional feature extraction capabilities. The local feature information obtained by a CNN is used for obstacle detection. Furthermore, because the motion trajectory is planned for the AV, the relationships between the local features of the image obtained by the CNN need to be established. Some strategies are based on CNN
Fig. 1: Three different situations of collisions. The front collision case is shown in (a). The side collision case is shown in (b). The rear collision case is shown in (c). Our purpose is to let autonomous driving vehicles avoid collisions such as these.
plus RNN [5] so that they can deal with sequential graphs as input, e.g., STDN [6].
Although the above strategies have performed well in a large number of vision tasks, their performance is still far inferior to similar-sized convolutional neural network counterparts, such as EfficientNets [7] and RepVGG [8]. We believe this is due to the following aspects. First, the huge differences between the sequential tasks of NLP and the image tasks of CV are ignored. For example, when the local feature information acquired from a two-dimensional image is compressed into one-dimensional time-series information, achieving an accurate mapping becomes a difficult problem. Second, it is difficult to preserve the original input information since, after the RNN layers, the dimension must be recovered from one back to three. Besides, the several transformations between different dimensions make that process even harder, especially since our input size is 224x224x5. Third, the computational and memory requirements of switching between layers are extremely heavy, which also makes the algorithm hard to run; higher hardware requirements as well as longer running times arise when running the attention part.
In this paper, we propose a new network structure based on CNN and attention for vision tasks in autonomous driving. The new network structure overcomes these problems by using Sequential Spatial Network (SSN) blocks. As shown in Fig. 3, input images first go through the convolution stem for fine-grained feature extraction, and are then fed into a stack of SSN blocks for further processing. The Upsampling Convolutional Decreasing (UCD) blocks are introduced for local information enhancement by deep convolution, and in the SSN blocks the features generated in the first stage suffer less loss of image resolution, which is crucial for the subsequent trajectory adjustment task.
In addition, we adopt a staged architecture design using five convolutional layers with different kernel sizes and strides, gradually decreasing the resolution (sequence length) and flexibly increasing the dimensionality. Such a design helps to extract local features of different scales and, since the first stage retains high resolution, each convolutional layer can effectively reduce the resolution of its output information, thus reducing the computational effort of subsequent layers. The Reinforcement Region Unit (RRU) and the Fast Multi-Head Self-Attention (FMHSA) in the SSN block help to obtain global and local structural information within the intermediate features and improve the normalization capability of the network. Finally, average pooling is used to obtain better trajectory tuning.
Extensive experiments on the L5Kit dataset demonstrate the superiority of our SSN network in terms of accuracy. In addition to image classification, the SSN block can be easily transferred to other vision tasks and serve as a versatile backbone.
## II Related Works
Over the past few decades, autonomous driving has flourished in the wave of deep learning, where a large number of solution strategies are based on computer vision, using images as the primary input. The prevailing visual neural networks are typically built on top of a basic block in which a series of convolutional layers are stacked sequentially to capture local information in intermediate features. However, the limited receptive field of the small convolution kernel makes it difficult to obtain global information, which hinders the high performance of the network on highly feature-dependent tasks (such as trajectory prediction and planning). In view of this dilemma, many researchers have begun to deeply study self-attention-based [9] networks with the ability to capture long-distance information. Here, we briefly review traditional CNNs and recently proposed visual networks. Convolutional neural networks. The first standard CNN, proposed by LeCun et al. [10], was used for handwritten character recognition. Based on this foundation, a large number of visual models have achieved cross-generational success in a variety of tasks with images as the main input. Google Inception Net [11] and DenseNet [12] showed that deep neural networks consisting of convolutional and pooling layers can yield adequate results in recognition. SENet [13] and MobileNetV3 [14] demonstrate the effectiveness of multiple paths within a basic block.
ResNet [15] is a classic structure that achieves better generalization ability by adding shortcut connections to the underlying network. To alleviate the limited receptive field observed in previous studies, some works used the attention mechanism as an operator for adapting patterns.
## III Approach
### _Network Structure Overview_
Our strategy is to take advantage of both CNN and attention by building a hybrid network. Overviews of ResNet-50 [15], RepVGG [8], ViT [16] and our network are shown in Fig. 2 and Fig. 3.
ResNet-50 consists of five stages: stage 0 consists of a convolutional layer, a batch normalization layer and a max-pooling layer, while stages 1, 2, 3, and 4 consist of bottleneck blocks, and the output is fed to a fully connected layer for classification. The advantage of this design is that ResNet-50 can efficiently handle the classification problem between different images. However, since stage 0 only uses one convolutional layer with a large convolution kernel, its large field of view can quickly complete the initial processing of the input image, but its capture of local information is slightly insufficient. For this reason, we use the main input block to deal with this limitation. The main input block is composed of five convolutional layers with different kernel sizes and strides: the kernel size is stepped down so that large kernels extract the input information quickly while the feature maps are large, and small kernels then extract local information after the input size becomes smaller.
After the fast processing and local information extraction of the main input block, the information is transferred to the subsequent blocks for further processing. Furthermore, between blocks, we add a UCD layer, which consists of a convolutional layer with a 1x1 kernel and a downsampling layer with a sampling ratio of 0.5. The UCD layer allows us to speed up the network while preserving the relative proportions of the input information, and the spatial size of the input is reduced to half of the original size after the UCD layer. Afterwards, feature extraction is performed by network layers composed of different numbers of SSN blocks, while maintaining the same resolution of the input. Due to the self-attention mechanism, SSN can capture the correlations between different local features, so as to achieve mutual dependence between local and global information. Finally, the results are output through an average pooling layer, a projection layer and a classifier layer.
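A minimal PyTorch sketch of such a UCD layer is given below (a 1x1 convolution followed by 0.5-ratio downsampling); the channel sizes and the interpolation mode are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class UCD(nn.Module):
    """Sketch of the UCD layer: 1x1 convolution + 0.5-ratio downsampling."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        x = self.proj(x)
        # halve the spatial resolution while keeping the ratios between feature values
        return nn.functional.interpolate(x, scale_factor=0.5, mode="bilinear",
                                         align_corners=False)

x = torch.randn(1, 64, 56, 56)
print(UCD(64, 128)(x).shape)  # torch.Size([1, 128, 28, 28])
```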
Through the historical images of driving trips, we can obtain information such as position, yaw and environment. SSN is similar to a typical vision network and can adjust the stride size of the middle layer to obtain feature maps of different sizes according to the requirements, which can be applied to downstream tasks with different inputs, such as trajectory prediction and image classification.
### _SSN blcok_
The proposed SSN module consists of a Reinforcement Region Unit (RRU), a Fast Multi-Head Self-Attention (FMHSA) module and an Information Refinement Unit (IRU), as shown in Fig. 3. We describe these three components in the following.
Reinforcement region unit. In vision tasks, data augmentation is usually essential to effectively improve model generalization by training on the augmented data. Common data augmentation methods include flipping, rotation and scaling; adding augmented data should not weaken the final performance of the model.
In other words, a good model should also maintain effective output for similar but variant data, so that the model has better input acceptability. However, the absolute position encoding used in common attention mechanisms was originally designed to exploit the order of the tokens, but it breaks input acceptability because each patch adds a unique position encoding to it [17]. Moreover, the concatenation between the local information obtained by the information capture module at the beginning of the model and the structural information inside the patch [18] is ignored. In order to maintain input acceptability, the Reinforcement Region Unit (RRU) is designed to extract the local information from the input to the "SSN" module, defined as:
\[RRU(X)=Conv(Conv(X)). \tag{1}\]
The Fast Multi-Head Self-Attention (FMHSA) module consists of one convolutional layer, one linear layer and one multi-head self-attention layer. With this scheme, we can build connections between the different pieces of local information produced by the RRU. In this way, the collision avoidance task can obtain an outstanding result, since in each frame the trajectory is composed of consecutively predicted positions. Moreover, these positions are sequentially related, which means autonomous driving vehicles must arrive at the first target position before they can move to the next target position. Our FMHSA module is suitable for this problem because it can transfer local information between areas.
Fig. 2: Three popular network structures in vision areas. The structure of ResNet-50 is shown in (a). The structure of RepVGG is shown in (b). The structure of ViT is shown in (c).
The Information Refinement Unit (IRU) is used to efficiently refine the local information obtained by FMHSA; after processing by this unit, the extracted local information is fed into the pooling and classifier layers. The original FFN proposed in ViT consists of two linear layers separated by the GELU activation [19]: first the input dimension is expanded by a factor of 4, and then the expanded dimension is scaled back down:
\[FFN(X)=GELU(XW1+b1)W2+b2. \tag{2}\]
This has the advantage of using a linear function for forward propagation before applying the GELU() function, which greatly improves the efficiency of the model. However, this strategy sacrifices some performance in exchange for fast propagation in this region. Our design deals with this problem as follows. First, we use a convolutional layer with a larger kernel to obtain the characteristics of the input information with a large field of view, then a linear layer for fast propagation, and finally a convolutional layer with a small kernel to obtain refined information, thus taking into account both operational efficiency and model performance. The expression of the Information Refinement Unit (IRU) can be written as
\[IRU(X)=Conv(L(Conv(X))), \tag{3}\]
where L(X)=WX+b. After designing the above three unit modules, the SSN block can be formulated as:
\[\begin{split} A&=RRU(X)\\ B&=FMHSA(A)\\ C&=IRU(B)+B\end{split} \tag{4}\]
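The three units can be assembled in PyTorch roughly as follows. This is only a sketch following Eqs. (1)-(4) and the textual descriptions above: kernel sizes, the number of attention heads, and the per-position reading of \(L(X)=WX+b\) are our own illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class RRU(nn.Module):
    """Eq. (1): RRU(X) = Conv(Conv(X)); resolution-preserving convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return self.conv2(self.conv1(x))

class FMHSA(nn.Module):
    """FMHSA as described above: a convolution, a linear layer, and multi-head
    self-attention applied over the flattened spatial positions."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.linear = nn.Linear(channels, channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        t = self.conv(x).flatten(2).transpose(1, 2)     # (B, H*W, C)
        t = self.linear(t)
        t, _ = self.attn(t, t, t)
        return t.transpose(1, 2).reshape(b, c, h, w)

class IRU(nn.Module):
    """Eq. (3): large-kernel conv -> linear map L(X) = WX + b -> small-kernel conv."""
    def __init__(self, channels):
        super().__init__()
        self.conv_large = nn.Conv2d(channels, channels, 5, padding=2)
        self.linear = nn.Conv2d(channels, channels, 1)  # per-position linear map
        self.conv_small = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return self.conv_small(self.linear(self.conv_large(x)))

class SSNBlock(nn.Module):
    """Eq. (4): A = RRU(X), B = FMHSA(A), C = IRU(B) + B."""
    def __init__(self, channels):
        super().__init__()
        self.rru, self.fmhsa, self.iru = RRU(channels), FMHSA(channels), IRU(channels)

    def forward(self, x):
        a = self.rru(x)
        b = self.fmhsa(a)
        return self.iru(b) + b

print(SSNBlock(32)(torch.randn(1, 32, 28, 28)).shape)   # torch.Size([1, 32, 28, 28])
```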
In the experiment section, we demonstrate the efficiency of the SSN network.
## IV Experiment
In this section, we investigate the effectiveness of the SSN architecture by conducting experiments on an autonomous driving obstacle avoidance task with a driving map as the main input. We compare the proposed SSN with the other popular models shown before in Fig. 2, and then compare the experimental results to draw an analytical conclusion. We define three different types of collisions, which are front
Fig. 3: The overview of SSN network structure. RRU is Reinforcement Region Unit. FMHSA is Fast Multi-Head Self-Attention module. IRU is Information Refinement Unit.
collision, rear collision and side collision. These situations are caused by different unsuitable physical parameters and are shown in Fig. 1.
### _Dataset and description_
We use the L5Kit dataset [20] as our data source, which contains over 1,000 hours of data. This was collected by a fleet of 20 autonomous vehicles along a fixed route in Palo Alto, California, over a four-month period. It consists of 170,000 scenes, where each scene is 25 seconds long and captures the perception output of the self-driving system, which encodes the precise positions and motions of nearby vehicles, cyclists, and pedestrians over time. On top of this, the dataset contains a high-definition semantic map with 15,242 labeled elements and a high-definition aerial view over the area.
### _Data preprocessing_
The data mainly includes the following concepts: Scenes, Frames, and Agents. A scene is identified by the host (i.e. which car was used to collect it) and a start and end time. It consists of multiple frames (= snapshots at discretized time intervals). The scene datatype stores a reference to its corresponding frames in terms of the start and end index within the frames array (described below). The frames in between these indices all correspond to the scene (including the start index, excluding the end index).
A frame captures all information that was observed at a time. This includes the timestamp, which the frame describes; data about the ego vehicle itself such as rotation and position; a reference to the other agents (vehicles, cyclists and pedestrians) that were captured by the ego's sensors; a reference to all traffic light faces (see below) for all visible lanes. An agent is an observation by the AV of some other detected object. Each entry describes the object in terms of its attributes such as position and velocity, and gives the agent a tracking number to track it over multiple frames (but only within the same scene!) and its most probable label.
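To make the Scenes/Frames/Agents description concrete, a minimal illustrative data model could look as follows; the field names are descriptive stand-ins, not the actual L5Kit schema:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    track_id: int            # tracking number, valid within a single scene
    centroid: tuple          # (x, y) position
    velocity: tuple
    label: str               # most probable class, e.g. "car", "cyclist"

@dataclass
class Frame:
    timestamp: int
    ego_translation: tuple   # ego position
    ego_rotation: float      # ego yaw
    agent_index_interval: tuple   # [start, end) into the agents array

@dataclass
class Scene:
    host: str                     # which car collected the scene
    frame_index_interval: tuple   # [start, end) into the frames array
```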
The input of our model is the images of the ego car, which are one of the properties of the EgoDataset. The outputs of our model are position and yaw, which are properties of the EgoDataset as well. In this way, we can simulate vehicle driving as human driving actions. During the human driving process, drivers control the accelerator and the steering wheel to move the vehicle: the accelerator is used for velocity and the steering wheel for yaw. The output of our model is likewise velocity and yaw. Thus, we use this method to simulate the trajectories of vehicles so that velocity and yaw can be changed to avoid collisions while driving.
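As a simple sketch of how predicted velocity and yaw can be rolled out into a trajectory (a generic kinematic integration, not the actual L5Kit simulation API; the time step and control semantics are assumptions):

```python
import math

def rollout(x, y, heading, controls, dt=0.1):
    """Integrate (velocity, yaw_rate)-style controls into a sequence of positions."""
    trajectory = [(x, y)]
    for velocity, yaw in controls:          # one (velocity, yaw) pair per frame
        heading += yaw * dt
        x += velocity * math.cos(heading) * dt
        y += velocity * math.sin(heading) * dt
        trajectory.append((x, y))
    return trajectory

# e.g. constant speed with a gentle left turn to steer away from an obstacle
print(rollout(0.0, 0.0, 0.0, [(10.0, 0.2)] * 5))
```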
### _Result_
The test results obtained with the four different network structures are shown in Tab. I. Compared with other transformer-based and convolution-based counterparts, our model achieved better accuracy and faster processing speed. In particular, our model achieves 2.6 front collisions, which is 13.6 times less than RepVGG, 5.8 times less than ViT, and 12.6 times less than ResNet50, indicating the benefit of the SSN block for capturing both local and global information. We can see that SSN consistently outperforms the other models by a large margin.
## V Conclusion
This paper proposes a novel hybrid architecture named SSN for vision-based autonomous driving tasks and other vision tasks. The designed SSN architecture takes advantage of both CNNs and self-attention to capture local and global information, improving the handling of sequentially related inputs. Extensive experiments on the L5Kit dataset demonstrate the effectiveness and superiority of the proposed SSN architecture.
|
2306.15188 | One-class systems seamlessly fit in the forward-forward algorithm | The forward-forward algorithm presents a new method of training neural
networks by updating weights during an inference, performing parameter updates
for each layer individually. This immediately reduces memory requirements
during training and may lead to many more benefits, like seamless online
training. This method relies on a loss ("goodness") function that can be
evaluated on the activations of each layer, each of which can have a varied
parameter size, depending on the hyperparameterization of the network. In the
seminal paper, a goodness function was proposed to fill this need; however, if
placed in a one-class problem context, one need not pioneer a new loss because
these functions can innately handle dynamic network sizes. In this paper, we
investigate the performance of deep one-class objective functions when trained
in a forward-forward fashion. The code is available at
\url{https://github.com/MichaelHopwood/ForwardForwardOneclass}. | Michael Hopwood | 2023-06-27T04:14:03Z | http://arxiv.org/abs/2306.15188v1 | # One-class systems seamlessly fit in the forward-forward algorithm
###### Abstract
The forward-forward algorithm [2] presents a new method of training neural networks by updating weights during an inference, performing parameter updates for each layer individually. This immediately reduces memory requirements during training and may lead to many more benefits, like seamless online training. This method relies on a loss ("goodness") function that can be evaluated on the activations of each layer, each of which can have a varied parameter size, depending on the hyperparameterization of the network. In the seminal paper, a goodness function was proposed to fill this need; however, if placed in a one-class problem context, one need not pioneer a new loss because these functions can innately handle dynamic network sizes. In this paper, we investigate the performance of deep one-class objective functions when trained in a forward-forward fashion. The code is available at [https://github.com/MichaelHopwood/ForwardForwardOneclass](https://github.com/MichaelHopwood/ForwardForwardOneclass).
## 1 Introduction
The Forward-Forward algorithm [2] is a new learning procedure for neural networks that updates network parameters immediately after the forward pass of a layer. An objective (aka, "goodness") function is evaluated on the layer's latent output representations \(G(h^{[l]}|\mathcal{I})\) conditioned upon some data integrity \(\mathcal{I}\). Integrity is broken down into positive and negative data; positive data is often thought of as correct data while negative data is incorrect data. When positive data is passed into the model, weights that support the data (aka, neurons that fire with large weights) are awarded. The assignment of these positive and negative data is subject to creativity with one of the most common practices being placing incorrect class assignments in the negative data.
In a one-class problem context, it is assumed that the majority of the training dataset consists of "normal" data, and the model is tasked with determining the normality of the input data. **Therefore, negative data is not required, and the objective function can be simplified to \(G(h^{[l]})\)**. Many deep learning methods answer this anomaly detection problem via inspirations from support vector machines [13], like Deep SVDD [14] or Deep OC-SVM [15].
## 2 Methodology
For a layer \(l\) we compute a forward pass
\[h^{[l]}=\text{ReLU}\left(xW^{[l]}+b^{[l]}\right)\]
where \(x\in\mathbb{R}^{n,p}\) is the data from the previous layer, \(h\in\mathbb{R}^{n,q}\) is the transformed data, and \(W^{[l]}\in\mathbb{R}^{p,q}\) and \(b^{[l]}\in\mathbb{R}^{q}\) are the trained weights and biases. A forward pass of normal class data can be used to calculate the loss function at layer \(l\) following some \(G(h^{[l]})\). These \(G(h^{[l]})\) can be any convex function; in the following table, we produce some candidate goodness functions.
The network's weights are updated sequentially, where inputs \(h^{[l-1]}\) are passed through the layer to compute \(h^{[l]}\), the loss \(\mathcal{L}(h^{[l]})\) is calculated, and used to backpropagate using gradient descent
\[W^{[l]} =W^{[l]}+\frac{\lambda}{n}\frac{\partial G}{\partial W^{[l]}}\] \[b^{[l]} =b^{[l]}+\frac{\lambda}{n}\frac{\partial G}{\partial b^{[l]}}\]
To convert the final embeddings \(h^{[L]}\in\mathbb{R}^{n\times q}\) into an outlier probability, we pass them into the loss function to ascertain a distance value \(D=\mathcal{L}(h^{[L]})\in\mathbb{R}^{n}\) for each sample and then convert these distances to probabilities by normalizing by the maximum value, so \(P=\frac{D}{\max(D)}\in\mathbb{R}^{n}\). In order to deem the sample an outlier, a threshold is deduced during training by evaluating \(t=P_{(1-\nu)th\%}\). Therefore, an outlier is flagged via \(I_{P>t}\). We utilize a \(\nu=0.05\) for all settings. This method of ascertaining a threshold naturally reduces our chances of achieving 100% accuracy, but it also reduces the chances of a type 2 error, which is important for outlier detection problems.
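A compact sketch of this scoring rule is given below; the helper name and tensor shapes are illustrative, and `torch.quantile` is used for the \((1-\nu)\)-th percentile:

```python
import torch

def outlier_flags(distances, nu=0.05, threshold=None):
    """P = D / max(D); during training, t is the (1 - nu)-th percentile of P."""
    p = distances / distances.max()
    if threshold is None:
        threshold = torch.quantile(p, 1.0 - nu)
    return p, threshold, p > threshold

train_d = torch.rand(200)                    # stand-in for D = L(h^[L]) on training data
_, t, _ = outlier_flags(train_d)             # deduce t on the training scores
_, _, flags = outlier_flags(torch.rand(50), threshold=t)   # flag test samples
```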
The code is written in PyTorch to leverage its built-in autodifferentiation tool. For the Forward-Forward implementation, gradients are computed at the end of each layer and the weights are updated according to the calculated autodifferentiated gradients and the optimizer. The normal backpropagation implementation conducts the weight update process for the weights in all layers after completing the forward pass on the last layer. So, while the forward-forward implementation has \(L\) instantiated optimizers, the normal backpropagation method has 1 instantiated optimizer. For both cases, a stochastic gradient descent optimizer was used with no momentum and no weight decay (see equations above). Early stopping is implemented by monitoring the loss on the validation split.
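A condensed sketch of this per-layer training loop is shown below, with one SGD optimizer per layer and the layer weights updated right after that layer's forward pass; the layer sizes, learning rate, and the choice of the Goodness objective are illustrative assumptions, and \(G\) is treated as a quantity to be minimized by SGD:

```python
import torch
import torch.nn as nn

layers = [nn.Sequential(nn.Linear(4, 16), nn.ReLU()),
          nn.Sequential(nn.Linear(16, 16), nn.ReLU())]
optims = [torch.optim.SGD(l.parameters(), lr=1e-3) for l in layers]  # L optimizers

def goodness(h, c=1.0):                    # "Goodness" row of Table 1
    return torch.sigmoid((h ** 2).sum(dim=1) - c).sum()

def forward_forward_step(x):
    h = x
    for layer, opt in zip(layers, optims):
        h = layer(h)                       # forward pass of this layer only
        loss = goodness(h)
        opt.zero_grad()
        loss.backward()                    # gradients reach only this layer's weights
        opt.step()
        h = h.detach()                     # next layer trains on detached activations
    return h

forward_forward_step(torch.randn(32, 4))   # 32 samples of the 4 banknote features
```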
\begin{table}
\begin{tabular}{|c|l|} \hline Method & Derivation \\ \hline Goodness & \(\mathcal{L}(h^{[l]};\mathcal{W})=\sum_{i=1}^{N}\sigma(||h^{[l]}||^{2}-C)\) \\ \hline GoodnessAdjusted & \(\mathcal{L}(h^{[l]};\mathcal{W})=\sum_{i=1}^{N}\log(1+\exp(||h||^{2}-C))\) \\ \hline HB-SVDD & \(\mathcal{L}(h^{[l]};\mathcal{W})=\sum_{i=1}^{N}||h^{[l]}-\mathbf{a}||^{2}\) \\ \hline SVDD [Ruff et al., 2018] & \begin{tabular}{l} minimize \\ subject to \\ and \\ \end{tabular} & \(||h^{[l]}-\mathbf{a}||^{2}\leq R^{2}+\xi_{i},\) \(\quad i=1,2,...,N\) \\ \cline{2-3} & \(\mathcal{L}(h^{[l]};R,\mathcal{W})=R^{2}+C\sum_{i=1}^{N}\max\bigl{(}0,||h^{[l] }-\mathbf{a}||^{2}-R^{2}\bigr{)}\) \\ \hline LS-SVDD &
\begin{tabular}{l} minimize \\ subject to \\ \end{tabular} & \(R^{2}+\frac{C}{2}\sum_{i=1}^{N}\xi_{i}^{2}\) \\ \cline{2-3} & \(\text{subject to}\) & \(||h^{[l]}-\mathbf{a}||=R^{2}+\xi_{i},\) \(\quad i=1,2,...,N\) \\ \cline{2-3} & \(\mathcal{L}(h^{[l]};R,\mathcal{W})=R^{2}+\frac{C}{2}\sum_{i=1}^{N}\Bigl{(}||h^ {[l]}-\mathbf{a}||^{2}-R^{2}\Bigr{)}^{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Derivations of deep learning one-class “goodness” functions. Note that \(\mathbf{a}=\frac{1}{N}\sum_{i=1}^{N}h^{[l]}_{i,j}\).
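For illustration, two of the objectives in Table 1 can be written as batch losses in a few lines, with \(h\) of shape \((N,q)\) and the center \(\mathbf{a}\) taken as the batch mean as in the table's note; the handling of the trainable radius \(R\) in LS-SVDD is simplified here:

```python
import torch

def hb_svdd_loss(h):
    a = h.mean(dim=0)                           # a = (1/N) * sum_i h_i
    return ((h - a) ** 2).sum(dim=1).sum()

def ls_svdd_loss(h, radius, c=1.0):
    a = h.mean(dim=0)
    d2 = ((h - a) ** 2).sum(dim=1)              # ||h - a||^2 per sample
    return radius ** 2 + (c / 2.0) * ((d2 - radius ** 2) ** 2).sum()

h = torch.randn(8, 16)
r = torch.tensor(1.0, requires_grad=True)       # R is a trainable parameter in (LS-)SVDD
print(hb_svdd_loss(h).item(), ls_svdd_loss(h, r).item())
```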
In order to make the experiments reproducible, random seeds were implemented. Across the 50 independent trials which were run for each parameter setting, a seed \(s=1,...,50\) was used when initializing the model parameters (e.g. weights and biases). For all independent trials, the same data split (e.g. train, valid, test) was used. This step is imperative, especially given the importance of the weight initialization for one-class problem settings.
### Data
The banknote authentication dataset (Dua and Graff, 2017) was used for evaluating the different methods. This data comprises images of both authentic and counterfeit banknotes captured using an industrial camera typically utilized for print inspection. The resulting images had a resolution of 400 x 400 pixels, and due to the object lens and distance to the subject, grayscale images with a resolution of approximately 660 dpi were obtained. The Wavelet Transform tool was employed to extract features from the images, resulting in 4 continuous features in total: 3 statistics of the wavelet-transformed image (variance, skewness, kurtosis) and the entropy of the image. The response variable is a binary value; 610 of the 1372 samples were deemed fake.
### Evaluation
This data was split into train, validation, and test splits. The training data trained the network weights. The validation data was used to decide early stopping. The test data was used to evaluate the model using accuracy, F1, and AUC. A grid search was conducted across the 5 loss functions (Table 1), across 4 neural network architectures. Each setting was evaluated using 50 independent tests across different seeds, which impacted the network random initializations.
## 3 Results
### Forward Forward (FF) v. Normal Backpropagation (BP)
The tabulated results are provided in Tables 2 & 3. The average accuracy for all experiments using BP was 57.6047%; the FF experiments had an average value of 56.6287%. Therefore, on average, BP experiments were about 1% more accurate. Similarly, BP was around 0.01 (i.e. 1%) better in AUC, with average BP and FF values of 0.549 and 0.538, respectively. Additionally, BP was around 0.025 (i.e. 2.5%) better in the F1 score, with average BP and FF values of 0.299 and 0.276, respectively. However, given the volatility of training deep one-class models, it is worthwhile to compare the performance of the best models as opposed to the average model performance. Looking at all metrics, the best models achieve higher performance when trained using a FF pipeline; accuracy improves from 93.45% to 94.18%, F1 score improves from 0.9274 to 0.9375, and AUC improves from 0.9354 to 0.9461.
### Loss function evaluation
In the forward forward evaluations, all of the best models used the goodness functions. They also perform well on average, with two of the three metrics having the highest average model performance when using them. Interestingly, the backpropagation evaluations all perform the best when using an LS-SVDD loss.
## 4 Conclusion
In summary, the following conclusions were made:
1. For one-class problems, forward-forward training shows comparable results to normal backpropagation in this case study (Table 2 and Table 3)
2. The goodness function is a viable loss candidate for one-class models (Table 2 and Table 3)
3. Forward-forward seamlessly enables the visualization of loss landscapes within the network, which can help gain insights into the learning process (Figure 1)
Future work should be conducted to expand this study to deeper models and more benchmark data. Additionally, when training one-class problems using neural networks, many implementations find
that pretraining the network weights using autoencoders is helpful, and sometimes essential. Lastly, further work can introduce autoencoders into the training pipeline to regulate the model results across different random seeds.
|
2304.03193 | Improving automatic endoscopic stone recognition using a multi-view
fusion approach enhanced with two-step transfer learning | This contribution presents a deep-learning method for extracting and fusing
image information acquired from different viewpoints, with the aim to produce
more discriminant object features for the identification of the type of kidney
stones seen in endoscopic images. The model was further improved with a
two-step transfer learning approach and by attention blocks to refine the
learned feature maps. Deep feature fusion strategies improved the results of
single view extraction backbone models by more than 6% in terms of accuracy of
the kidney stones classification. | Francisco Lopez-Tiro, Elias Villalvazo-Avila, Juan Pablo Betancur-Rengifo, Ivan Reyes-Amezcua, Jacques Hubert, Gilberto Ochoa-Ruiz, Christian Daul | 2023-04-06T16:17:28Z | http://arxiv.org/abs/2304.03193v2 | Improving automatic endoscopic stone recognition using a multi-view fusion approach enhanced with two-step transfer learning
###### Abstract
**Abstract -** This contribution presents a deep-learning method for extracting and fusing information from images acquired from different viewpoints, with the aim of obtaining more discriminant features for identifying the type of kidney stones seen in endoscopic images. The model was improved using a two-step transfer learning method and attention modules to refine the feature maps learned by the model. These deep feature fusion strategies improved the performance of single-view extractors, since the accuracy of the kidney stone classification increased by 6% compared to the reference methods.
## 1 Introduction
The formation of kidney stones that cannot freely pass through the urinary tract is a major public health issue. In industrialized countries, it has been reported that at least 10% of the population suffers from a kidney stone episode once in their lifetime. In the United States alone, the risk of relapse of the same type of kidney stone has increased by up to 40%. The formation of kidney stones is caused by different factors such as diet, low fluid intake, and a sedentary lifestyle. However, there are other unavoidable factors such as age, genetic inheritance, and chronic diseases that increase the risk of forming kidney stones [1]. Therefore, methods for identifying the different types of kidney stones are crucial for the prescription of appropriate treatments and to reduce the risk of relapses. In order to carry out this identification in the clinical practice, different procedures have been developed, such as the Morpho-Constitutional Analysis (MCA), and Endoscopic Stone Recognition (ESR).
MCA is commonly accepted as the standard procedure for determining the different types of kidney stones (up to 21 different types and sub-types including pure and mixed compositions are recognized during the MCA). MCA consists of a double laboratory analysis of kidney stone fragments extracted from the urinary tract during an ureteroscopy [2]. First, a biologist performs a visual inspection of the kidney stone which is observed with a magnifying glass. This inspection aims to describe kidney stones in terms of colors, textures, and morphology. This visual analysis is done both for the surface view (the external part of the kidney stone fragment), and for a cross-section of the kidney stone fragment (the internal stone part may consist of several layers surrounding a nucleus). Then, the kidney stones are ground up and the resulting powder is used to perform a biochemical analysis using a Fourier Transform Infrared Spectroscopy (FTIR). The FTIR provides a detailed description of the chemical composition of the kidney stone. Finally, the MCA analysis returns the type of kidney stone through a detailed report of the biochemical and morphological characteristics of both views of the kidney stone. However, MCA has some major drawbacks : the results are often available only after several weeks, and it is difficult to have a specialized team in each hospital to perform MCA.
Therefore, urologists have proposed, as a possible alternative, the Endoscopic Stone Recognition (ESR) procedure in which the most common kidney stones are visually identified on the video displayed on a screen during the ureteroscopy itself [3]. However, this visual analysis of the surface and section
views requires a great deal of expertise due to the high similarities between classes, and only a limited number of specialists have this expertise. In addition, this technique is more operator dependent and subjective than MCA. Therefore, in order to automate and speed-up the kidney stone identification, new approaches based on deep-learning (DL) methods have been proposed. Such automated recognition assists urologists in terms of real-time decision-making during ureteroscopy.
This paper has two contributions: i) it proposes a novel DL-model for fusing information included in endoscopic images of the two views (surface and section) of a kidney stone fragment with the aim of increasing the discrimination performance and, ii) it shows how a multi-branch model can be trained using a two-step transfer learning (TL) approach in order to improve the model generalization capabilities.
This paper is organized as follows. Section 2 reviews the literature on automated ESR and introduces the key concepts used in this work, namely multi-view fusion and two-step TL. Section 3 describes the construction of the dataset, details the two-step TL setup, and presents the pre-training stage of the multi-view model. Section 4 compares the results obtained with the proposed model in several configurations, with that of other models given in previous works. Finally, section 5 discusses future research directions.
## 2 State-of-the-art
Different DL approaches for automated classification of kidney stones demonstrated encouraging results [4]. However, DL-models require large amounts of data to yield accurate results. In ureteroscopy, it is difficult to collect such large datasets. A solution to this issue lies in methods such as TL and fine-tuning from other distributions (ImageNet) as a weight initialization technique. Such techniques also enable one to avoid training from scratch. However, for automated endoscopic stone recognition (aESR), these initialization techniques are not useful, since the distributions of ImageNet and endoscopic (ureteroscopic) images differ substantially. Thus, customized TL methods that initialize useful weights closer to the target domain are required.
Furthermore, most models performing aESR were trained on surface or section images taken separately. However, the visual inspection in MCA (by biologists) and ESR (by urologists) is based on both views, exploiting information from fragment surfaces and sections jointly. So far, the DL-models in the literature have not jointly used surface and section information to improve the classification efficiency. Multi-View (MV) classification is exploited in this contribution to combine the features observed in the two fragment views.
The aim of this paper is to show that an MV-model outperforms models without an elaborated fusion strategy. MV is performed by fusing features (of shallow models) or feature maps (for DL-models) determined for various images with the aim to learn more complete representations and to obtain more effective classifiers [5]. Contrary to an MV-approach, previous works for aESR were based on a DL-model, trained three times (only with section data, only for surface data, and for surface and section data gathered in the same class). This contribution leverages recent advances in DL-based models that combine information from multiple viewpoints and improve the results using domain adaptation techniques.
## 3 Materials and Methods
### Datasets
Two kidney stone datasets were used in our experiments [6, 7]. Depending on the dataset, the images were acquired either with standard CCD cameras or with a ureteroscope (i.e., an endoscope). These datasets are described below.
**Dataset A,**[6]. This ex-vivo dataset of 366 CCD camera images (see, Fig. 0(a)) is split in 209 surface and 157 section images, and contains six different stone types sorted by sub-types denoted by WW (Whewellite, sub-type Ia), CAR (Carbapatite, IVa), CAR2 (Carbapatite, IVa2), STR (Struvite, IVc), BRU (Brushite, IVd), and CYS (Cystine, Va). The fragment images were acquired with a digital camera under controlled lighting conditions and with a uniform background.
**Dataset B,**[7]. The endoscopic dataset consists of 409 images (see Fig. 0(b)). This dataset includes 246 surface and 163 section images. Dataset B involves the same classes as dataset A, except that the Carbapatite fragments (sub-types IVa1, and IVa2) are replaced by the Weddelite (sub-type IIa) and Uric Acid (IIIa) classes. The images of dataset B were captured with an endoscope by placing the kidney stone fragments in an environment simulating in a quite realistic way in-vivo conditions (for more details, see [7]).
Automatic kidney stone classification is usually not performed on full images due to the limited size of the datasets. Therefore, as in previous works [8], patches of 256\(\times\)256 pixels were extracted from the original images to increase the size of the training dataset (for more details, see [4]). A total of 12,000 patches were generated for each dataset, organized by class as follows: dataset A (WW, STR, CYS, BRU, CAR, CAR2) and dataset B (WW, WD, AU, STR, BRU, CYS).
Figure 1: Examples of ex-vivo kidney stone images acquired with (a) a CCD camera and (b) an endoscope. SEC and SUR stand for section and surface views, respectively.
A thousand patches are available for each class and view (SUR, SEC). For each data set, \(80\%\) of the patches (9600 patches) are used for the training and validation steps, while the remaining \(20\%\) of the patches (2400 patches) act as test data. Patches of the same image contribute either only to the training/validation data or solely to the test data. The patches were also "whitened" using the mean \(m_{i}\) and standard deviation \(\sigma_{i}\) of the color values \(I_{i}\) in each channel [4].
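For concreteness, the following minimal sketch illustrates this preprocessing (grid-based patch extraction followed by per-channel whitening); the non-overlapping grid, the function names, and the synthetic input image are illustrative assumptions rather than the exact pipeline of [4, 8].

```python
import numpy as np

def extract_patches(image, patch_size=256, stride=256):
    """Cut an H x W x 3 image into square patches on a regular grid."""
    h, w, _ = image.shape
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches

def whiten(patch):
    """Standardize each color channel with its own mean m_i and std sigma_i."""
    patch = patch.astype(np.float32)
    mean = patch.mean(axis=(0, 1), keepdims=True)
    std = patch.std(axis=(0, 1), keepdims=True) + 1e-8
    return (patch - mean) / std

# Synthetic 512 x 512 image standing in for a kidney stone photograph.
image = np.random.randint(0, 255, size=(512, 512, 3), dtype=np.uint8)
whitened = [whiten(p) for p in extract_patches(image)]
print(len(whitened), whitened[0].shape)  # 4 (256, 256, 3)
```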
### Proposed approach
Several approaches [4] have demonstrated the ability of DL-based models to recognize different types of kidney stones in single views (SUR or SEC) with high performance. However, in most cases, they have been trained by fine-tuning with a distribution totally different from kidney stones, or worse, they have been trained from scratch with the endoscopic images of individual views. On the other hand, so far no elaborate technique has been exploited to combine the surface and section information. Usually, to exploit the information from SUR and SEC images, the patches of the two views of a fragment are simply seen as instances of the same class. Although such methods fuse the information of both views and make more data available for training, the way in which image features are extracted and combined is far from optimal, as it does not emulate how the visual inspection of MCA/ESR is performed. To make matters worse, mixing the features in this way does not always improve the classification results. As can be observed in the MIX column of Table 1 (values marked by the * symbol), in some cases fusing features from SUR and SEC patches does not produce better feature maps, as this information combination is not optimal and hinders the model performance [8].
In order to exploit the best features of both views, the proposed DL-model (see Fig. 2) combines the information in a systematic way using a fusion strategy based on a multi-view scheme, introducing attention mechanisms to further filter out unnecessary feature maps of our CNN-model. Moreover, instead of training the individual branches from scratch, we assist the model training with a two-step TL approach as a method of initializing weights from a distribution (CCD-camera images) similar to that of the endoscopic images.
### Two-step Transfer Learning
The DL-model acquires knowledge in several ways. During the HeTL step (HeTL stands for heterogeneous TL), the pre-training is performed on a general domain. The model weights are then updated during a HoTL step (homogeneous TL) using a domain whose data distribution is the closest to that of the target (domain adaptation process, see [8]). In the kidney stone application, the pre-training on ImageNet improves the generalization capabilities of the DL-model, and the CCD camera images of ex-vivo fragments are used for a first fine-tuning. This fine-tuning is finalized using the target dataset (fragment images acquired with endoscopes); this dataset is also used for the validation and testing steps. More specifically, during the HeTL-step, the large ImageNet dataset is used to transfer knowledge into a ResNet50 network, which is fine-tuned by the smaller kidney stone image set acquired under controlled acquisition conditions (dataset A), as shown on the left part of Fig. 2. Then, fine-tuning is achieved for each branch (i.e., an individual model for each view) during the HoTL-step. This final tuning exploits dataset B, which is composed of endoscopic images close to dataset A, but with higher variability in terms of image contrast, noise, and resolution, thus emulating the illumination and scene conditions actually encountered in ureteroscopy when patient data are acquired with an endoscope. The second TL-step is performed for each of the views (SUR/SEC), yielding two independent models trained with dataset B of endoscopic images for their respective views (for more details, see [8]). As described below, an MV-model, assisted by the second TL-step, is used to combine the SUR and SEC views into a mixed model (MIX).
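A minimal PyTorch sketch of this training schedule (ImageNet weights, then fine-tuning on dataset A, then one fine-tuning per view on dataset B) is given below; the random stand-in loaders, the hyperparameters, and the torchvision weight-loading call are illustrative assumptions, not the exact training code.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def make_backbone(num_classes=6):
    # HeTL: start from ImageNet pre-trained weights (general domain).
    net = models.resnet50(weights="IMAGENET1K_V1")
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

def finetune(net, loader, epochs=1, lr=1e-4):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    net.to(device).train()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for patches, labels in loader:
            opt.zero_grad()
            loss_fn(net(patches.to(device)), labels.to(device)).backward()
            opt.step()
    return net

def dummy_loader(n=8):
    # Stand-in for the real 256x256 patch loaders of datasets A and B.
    x, y = torch.randn(n, 3, 256, 256), torch.randint(0, 6, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=4)

loader_A, loader_B_sur, loader_B_sec = dummy_loader(), dummy_loader(), dummy_loader()

model_A = finetune(make_backbone(), loader_A)                    # first step: dataset A (CCD images)
surface_model = finetune(copy.deepcopy(model_A), loader_B_sur)   # second step (HoTL), SUR view
section_model = finetune(copy.deepcopy(model_A), loader_B_sec)   # second step (HoTL), SEC view
```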
### Multi-view model
Once the two SUR and SEC models are trained through the previous two-step TL, the feature extraction layers of this single-view network are frozen to ensure that each branch of the multi-view model extracts the same features and that any variation in performance depends on the non-frozen layers (merge and full connection layers). These frozen layers are connected to a fusion layer, which is responsible for mixing the information of the two views. In this work, the two late-fusion methods proposed in [5] were exploited. On the one hand, the first method
Figure 2: Proposed multiview-fusion model assisted by two-step transfer learning for aESR.
concatenates the feature vectors obtained from each view and merges the resulting representation through a fully connected layer. On the other hand, in the second method, feature vectors are stacked and max-pooling is applied to them. Two configurations were used to implement max-pooling. The first corresponds to a model without attention mechanisms. The second consists of two layers of attention (arranged as shown in Fig. 2). The results presented in this work correspond to the second configuration (for more details, see [5]). Lastly, the output of the late-fusion layer is connected to the remaining part of the MV-model, which merely consists of the classifier. The full proposed model is shown in Fig. 2.
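As a rough illustration, the sketch below wires two frozen single-view backbones to a late-fusion layer (concatenation or max-pooling) followed by a small classifier; the attention layers of Fig. 2 are omitted and the layer sizes are arbitrary, so this should be read as an assumption-laden sketch of the architecture rather than its exact implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiViewFusion(nn.Module):
    """Two frozen single-view branches, a late-fusion layer, and a classifier."""
    def __init__(self, surface_branch, section_branch, num_classes=6,
                 fusion="max", feat_dim=2048):
        super().__init__()
        for branch in (surface_branch, section_branch):
            branch.fc = nn.Identity()            # keep only the feature extractor
            for p in branch.parameters():
                p.requires_grad = False          # frozen: only the fusion head is trained
        self.surface_branch, self.section_branch = surface_branch, section_branch
        self.fusion = fusion
        in_dim = 2 * feat_dim if fusion == "concat" else feat_dim
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, num_classes))

    def forward(self, surface_patch, section_patch):
        f_sur = self.surface_branch(surface_patch)   # [B, 2048]
        f_sec = self.section_branch(section_patch)   # [B, 2048]
        if self.fusion == "concat":
            fused = torch.cat([f_sur, f_sec], dim=1)                      # concatenation fusion
        else:
            fused = torch.stack([f_sur, f_sec], dim=1).max(dim=1).values  # max-pooling fusion
        return self.classifier(fused)

# The branches would normally be the two models produced by the two-step TL above.
model = MultiViewFusion(models.resnet50(), models.resnet50())
logits = model(torch.randn(2, 3, 256, 256), torch.randn(2, 3, 256, 256))
print(logits.shape)  # torch.Size([2, 6])
```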
## 4 Results and Discussion
Three experiments were carried out to assess the performance of the two-step TL approach applied to the patch data described in Section 3.1. In the first and second experiments, the two-step TL approach described in Section 3.3 was used to predict kidney stone types in endoscopic images for SUR and SEC views, respectively. Then, in the third experiment, the models trained on SUR and SEC data were combined using the MV-model described in Section 3.4. The results of these experiments are gathered in Table 1 and discussed below.
The results obtained for the first and second experiments follow the trend observed in [4]. For the SUR view, a mean accuracy of \(83.2\pm 1.2\) (in \(\%\)) was obtained. On the other hand, the results observed for the SEC view (accuracy of \(90.4\%\pm 4.8\)) are better than those obtained for the SUR view, probably due to the extraction of more discriminant features. The importance of section data was highlighted in previous works and is confirmed by this contribution. In comparison to the state-of-the-art, the highest performances were reached by the presented TL-method, both for the SUR and SEC views taken separately.
For the third experiment, fusion through MV, an accuracy of \(91.42\pm 0.5\) was obtained. This result is given for the max-pooling configuration with attention. However, the concatenation configuration (\(89.8\pm 3.03\)) presents results very close to the max-pooling configuration. Regardless of the configuration selected for the MV-model, the fusion shows promising results. First, the accuracy obtained in the MIX column in Table 1 suggests a clear improvement over the state of the art. Secondly, it shows that combining in an efficient way both views (SUR/SEC) in a "mixed" model can maintain the performance, contrary to the models marked by the * symbol. The latter shows that combining the SUR and SEC information of stones in a single class leads to a performance decrease.
## 5 Conclusion and future work
This contribution shows that, by mixing information from two views, it is possible to train more accurate models to identify kidney stones acquired with endoscopes. Thus, AI technology can be an interesting solution for assisting urologists. However, these contributions used a very limited dataset in terms of class number and patch samples. The learning approaches on few samples must be improved to cope with the small amount of training data, and especially to increase the class separability when more kidney stone types have to be identified.
|
2307.13808 | Watermarking Conditional Text Generation for AI Detection: Unveiling
Challenges and a Semantic-Aware Watermark Remedy | To mitigate potential risks associated with language models, recent AI
detection research proposes incorporating watermarks into machine-generated
text through random vocabulary restrictions and utilizing this information for
detection. While these watermarks only induce a slight deterioration in
perplexity, our empirical investigation reveals a significant detriment to the
performance of conditional text generation. To address this issue, we introduce
a simple yet effective semantic-aware watermarking algorithm that considers the
characteristics of conditional text generation and the input context.
Experimental results demonstrate that our proposed method yields substantial
improvements across various text generation models, including BART and Flan-T5,
in tasks such as summarization and data-to-text generation while maintaining
detection ability. | Yu Fu, Deyi Xiong, Yue Dong | 2023-07-25T20:24:22Z | http://arxiv.org/abs/2307.13808v2 | # Watermarking Conditional Text Generation for AI Detection:
###### Abstract
To mitigate potential risks associated with language models, recent AI detection research proposes incorporating watermarks into machine-generated text through random vocabulary restrictions and utilizing this information for detection. While these watermarks only induce a slight deterioration in perplexity, our empirical investigation reveals a significant detriment to the performance of conditional text generation. To address this issue, we introduce a simple yet effective semantic-aware watermarking algorithm that considers the characteristics of conditional text generation and the input context. Experimental results demonstrate that our proposed method yields substantial improvements across various text generation models, including BART and Flan-T5, in tasks such as summarization and data-to-text generation while maintaining detection ability.
## 1 Introduction
Language Models (LMs) have demonstrated remarkable effectiveness in generating content that closely resembles human performances across diverse tasks Tan et al. (2023); Dong et al. (2023); Liu et al. (2023). As large-scale models such as Chat-GPT OpenAI (2021) evolve and produce increasingly human-like content, concerns have surged around potential limitations and risks tied to their use Bender et al. (2021). These include hallucination Alkaissi and McFarlane (2023), failure in commonsense reasoning Bian et al. (2023), and misinformation and malicious use OpenAI (2023).
To mitigate potential risks associated with LMs, it's crucial to develop methods that differentiate between AI and human-generated content. Current AI-detection tools primarily rely on perplexity-based classifiers, assuming lower perplexity in AI-generated text Solaiman et al. (2019); Jawahar et al. (2020); Mitchell et al. (2023); Mitrovic et al. (2023). Conversely, an alternative approach is to inject watermarks during generation for subsequent detection. For instance, Kirchenbauer et al. (2023) proposed using hash functions to randomly bifurcate the vocabulary into 'green' and'red' lists at each decoding step, serving as watermarks. This watermark provides reliable detection signals without the need to train a classifier, and produce high-quality generated texts with a minor perplexity drop in language modeling Bengio et al. (2000).
Different from existing research, our focus is on watermarks for conditional text generation (CTG),
Figure 1: The outputs with the original watermark (OW) Kirchenbauer et al. (2023) and our proposed semantic-aware watermark (SW) on a test example from DART – a data-to-text generation benchmark – with parameters \(\gamma=0.1\) and \(\delta=5\). We expect \(\sim\) 50% of human-generated texts from the |red list|, whereas AI primarily utilizes the |green list|. Both watermarks yield high \(z\)-scores (\(z>4\)), indicating strong watermark strength for detection. Yet, OW forces the algorithm to generate from the red list due to randomly assigning key source entities (Mandy Patinkin) to it. As \(\delta\) increases (towards a hard watermark), excluding these red tokens risks more hallucinations.
and we unveil the challenges associated with the use of watermarks (Kirchenbauer et al., 2023). Our research findings suggest that **watermarking algorithms cannot be seamlessly applied to CTG tasks without a notable decline in performance**: the omission of task-specific considerations leads to significant decreases observed - up to 96.99% drop with hard watermarks and 27.54% drop with soft watermarks - in conditional generation tasks including summarization (See et al., 2017; Narayan et al., 2018) and data-to-text generation (Nan et al., 2021; Gardent et al., 2017). Additionally, our detection results reveal a paradox, which indicates another challenge in applying watermarks for CTG: **the prevalent human habit of using tokens similar to the input for text generation complicates the detection of watermarks**.
To enhance the effectiveness of watermarks for CTG, we propose a simple yet effective semantic-aware watermarking algorithm that leverages hash functions to embed watermarks, while also taking into account the input context and the distinctive characteristics of conditional generation tasks. In particular, we strategically bifurcate the vocabulary to balance randomness and semantic relatedness to the input source using word vector similarity, based on hash functions for detection. These semantically-related tokens can efficiently cover a substantial portion of the information that needs to be generated in conditional text generation tasks. Consequently, their inclusion in the 'green list' acts as a buffer, reducing the adverse impact of adding watermarks on the generated results, while maintaining the detection ability.
Our contributions can be summarized as follows:
* We show that directly applying Kirchenbauer et al. (2023)'s watermark method to conditional text generation tasks, without task-specific considerations, can lead to a significant performance drop (up to 96.99%). This significant decline is observed across multiple tasks like summarization and data-to-text generation, and various text generation models such as BART and Flan-T5.
* We propose a semantic-aware watermarking algorithm that utilizes hash functions while considering the input context of CTG tasks. Automatic and human evaluations on multiple datasets and models indicate that our method effectively mitigates quality degradation associated with the use of watermarks, while minimizing the trade-off in detection.
## 2 Related Work
**Automatic Detection** The detection of AI-generated text, particularly in the context of large language models (LLMs), has recently attracted significant research interest (Bakhtin et al., 2019; Schuster et al., 2020; Frohling and Zubiaga, 2021; Sadasivan et al., 2023; Mitchell et al., 2023). Previous approaches have primarily focused on leveraging the perplexities of generated texts for detection. For example, Solaiman et al. (2019) utilize a classifier to evaluate the total log probability of the text, using it as a means to determine whether the content originated from a machine. Building on this premise, Mitchell et al. (2023) further hypothesize and validate that the log probability of machine-generated text diminishes upon perturbation, while the log probability of human-written text remains unpredictable when perturbed.
Besides the aforementioned detection classifiers, there has been a recent emergence of approaches that involve watermarking specific patterns into generated text. For instance, Kirchenbauer et al. (2023) proposed a method that randomly bifurcates the vocabulary and modifies the probability distribution during each decoding step, thereby ensuring the inclusion of detectable patterns (watermarks) in the generated text. On the other hand, Yang et al. (2023) focused on revising and recognizing generated text without having access to the decoding process, enabling watermarking even in the case of black-box LLMs. Their approach identifies words in the generated text that deviate from a predefined pattern and then watermark them with synonymous words that adhere to the pattern.
**Conditional Text Generation** Conditional text generation aims to produce texts based on given inputs while considering task-specific characteristics and requirements. The core objective is to generate text that is conditioned on specific information or conditions to fulfill the intended purpose of the task. Classical conditional text generation tasks include machine translation (Stahlberg, 2020), dialogue (Huang et al., 2020), summarization (Xu et al., 2022; Goyal et al., 2022), question answering (Karpukhin et al., 2020; Lazaridou et al., 2022), data-to-text generation (Goyal et al., 2022; Keymanesh et al., 2022), and code generation (Vaithilingam et al., 2022; Zhang et al., 2023).
## 3 Method
This section provides an overview of the basic principles of watermarks, elaborates on our proposed semantic-aware method, and discusses how it's integrated into the watermarking procedure for CTG.
**Original Watermark** Considering a language model with parameters denoted by \(\theta\), the probability distribution for the \(t\)-th token in sequence \(\mathbf{S}=\{s_{1},s_{2},\ldots,s_{|\mathbf{S}|}\}\) can be formulated as:
\[p(s_{t})=p_{\theta}(s_{t}|s_{<t}) \tag{1}\]
By considering all preceding tokens, language models (LMs) generate a probability distribution across the vocabulary and sample tokens accordingly.
Watermarking is a technique designed to incorporate robust detection signals into machine-generated text. Kirchenbauer et al. (2023) propose two methods, namely hard and soft watermarks, for adding watermarks to text by imposing vocabulary restrictions during each decoding step. Specifically, the "Hard Red List" watermark algorithm randomly divides the vocabulary into "green" and "red" lists using a hash function and previously generated tokens. During the generation process, only tokens from the green list can be selected for the \(t\)-th position. To detect the presence of the watermark in the generated text, a statistical analysis such as the _one proportion z-test_ can be employed.
However, randomly partitioning the vocabulary and solely selecting words from the green list can hinder the generation of crucial tokens that are not included in the green list. As an alternative, the "Soft Red List" watermark approach introduces a constant \(\delta\) to the logit \(l_{k}^{(t)}\) of tokens in the green list during prediction:
\[p_{k}^{(t)}=\exp(l_{k}^{(t)}+\delta)/\sum_{i}\exp(l_{i}^{(t)}) \tag{2}\]
This adjustment ensures that even if there are deterministic tokens not included in the green list, they can still be generated. We observe that hard watermarking can be seen as a special case of soft watermarking, achieved by adding a large \(\delta\) to the tokens in the green list. Therefore, we choose soft watermarking algorithm as the unified formulation in our paper.
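As an illustration of this mechanism, the toy snippet below performs a single soft-watermark decoding step: the previous token seeds a pseudo-random split of the vocabulary and \(\delta\) is added to the green-list logits before sampling. The hash function, seed constant, and vocabulary size are simplified placeholders, not the original implementation.

```python
import torch

def soft_watermark_step(logits, prev_token, gamma=0.5, delta=2.0, hash_key=15485863):
    """One decoding step of the soft watermark: hash the previous token, split the
    vocabulary pseudo-randomly into green/red lists, boost the green logits by delta."""
    vocab_size = logits.shape[-1]
    gen = torch.Generator().manual_seed(hash_key * int(prev_token))  # toy hash-seeded RNG
    perm = torch.randperm(vocab_size, generator=gen)
    green = perm[: int(gamma * vocab_size)]          # "green list"; the rest is the "red list"
    boosted = logits.clone()
    boosted[green] += delta                          # a very large delta mimics the hard watermark
    probs = torch.softmax(boosted, dim=-1)
    return torch.multinomial(probs, 1).item(), green

# toy example with a 50-token vocabulary
logits = torch.randn(50)
token, green = soft_watermark_step(logits, prev_token=7)
print(token, token in set(green.tolist()))
```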
```
Input: Input sequence \(\mathbf{x}=\{x_{1},x_{2},\ldots,x_{|\mathbf{x}|}\}\), conditional model \(p_{\theta}\), green list size \(\gamma\in(0,1)\), hardness parameter \(\delta>0\), cluster parameter \(k\in[1,2,5,10]\)
Output: Watermarked text
1. Get word vectors from the model embedding and compute the word similarity matrix \(\mathbf{M}\in\left[|V|,|V|\right]\).
2. Use the input sequence \(\mathbf{x}\) and the parameter \(k\) to get the semantically related tokens \(S\) and insert them into the "green list" \(G\).
3. for \(t\leftarrow 0,1,\ldots\) do
4.    Apply the conditional model to the input sequence \(\mathbf{x}\) and get a logit vector \(l^{(t)}\) over the vocabulary \(V\).
5.    Compute a hash of token \(y_{t-1}\) and use it to seed a random number generator.
6.    Use the random number generator to partition the remaining vocabulary into \(G\) of size \(\gamma|V|-\mathrm{len}(S)\) and a "red list" \(R\) of size \((1-\gamma)|V|\).
7.    Add \(\delta\) to each green list logit and apply these modified logits to get a probability distribution \(\hat{p}^{(t)}\) over \(V\).
8.    Sample the next token \(y_{t}\) according to the watermarked distribution \(\hat{p}^{(t)}\).
9. end for
```
**Algorithm 1** Semantic-aware Watermark
### semantic-aware Watermark
In contrast to text generation tasks involving language models, conditional text generation (CTG) tasks often exhibit significant textual overlap, either at the token level or the semantic level. For instance, Chen et al. (2020) demonstrate that in the CNN/DailyMail dataset (See et al., 2017), over 80% of the tokens found in the summary can be located within the original document. Even in the case of the XSUM dataset (Narayan et al., 2018), known for its "abstractive" nature, this percentage remains above 60%. Consequently, random watermarking algorithms, which bifurcate the vocabulary arbitrarily at each decoding step, can drastically impair the performance of generation models.
Considering this characteristic of CTG tasks, we propose a simple yet effective semantic-aware watermarking method to enhance performance. Our approach uses the input context to extract semantically related tokens, measured by word vector similarity to the source. By incorporating semantically related tokens as a constraint, we ensure the quality of the generated output. We then apply the original watermark and randomly bifurcate the remaining vocabulary.
To implement this approach, we tokenize the input sequence \(\mathbf{x}\) to \(\hat{\mathbf{x}}=\{\hat{x}_{1},\hat{x}_{2},\ldots,\hat{x}_{|\hat{\mathbf{x}}|}\}\). Next, the tokenized sequence \(\hat{\mathbf{x}}\) is transformed into contextualized vector representations using the model's embedding layer. Integrating input information into the watermark's green list is a direct and crucial step (steps 2 & 6 in Algorithm 1), consistent with the requirements of CTG tasks where the output is dependent on the input. However, it's crucial to note that output information isn't solely determined by the input. Thus, relying exclusively on input as a constraint may not yield optimal results. To overcome this limitation, we broaden the constraints by incorporating token embeddings to measure token similarities.
We extend the constraints to prioritize the inclusion of content closely related to the input within the partitioned green list, as detailed in Algorithm 1. This strategy effectively minimizes the impact of random vocabulary partitioning on the quality of generated results. The decision to utilize model embeddings to acquire semantically related tokens - steps 1&2 in Algorithm 1 - is motivated by the following reasons:
* Semantic Relevance: By exploiting model embeddings, we capture semantic token relationships. This ensures coherent and semantically consistent text generation by identifying tokens closely linked to the input.
* Enhanced Output Quality: Including semantically related tokens in the green list elevates the relevance and quality of the generated text, aligning it more effectively with the CTG task objectives.
For a specific model, the embedding size is represented as \(\big{[}|V|,d_{\mathrm{emb}}\big{]}\), where \(V\) denotes the vocabulary size, and \(d_{\mathrm{emb}}\) represents the dimension of the model's embedding. Each row of the embedding matrix serves as the representation for the corresponding token. With the token representations in hand, we can calculate vector similarity using methods like cosine similarity to assess the similarity between different tokens. By sorting the tokens based on their similarity values, we construct a similarity matrix \(\mathbf{M}\) of size \(\big{[}|V|\times|V|\big{]}\).
In the similarity matrix \(\mathbf{M}\), each row contains the IDs of all tokens in the vocabulary, sorted according to their similarity to the token represented by that row. Since each row includes the token itself as the first entry, the matrix exhibits diagonal symmetry, forming a symmetric matrix along the diagonal.
In the semantic watermarking method, prior to partitioning the green list, we leverage the input as a foundation and utilize the similarity matrix \(\mathbf{M}\). By combining this similarity matrix with a hyperparameter \(k\), we identify semantically related tokens. These semantically related tokens are then included in the green list, while the remaining portion of the vocabulary is randomly partitioned after the incorporation of semantically related tokens into the green list.
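A compact sketch of this construction is shown below: for each input token, its \(k\) nearest neighbours under embedding cosine similarity are forced into the green list, and the remainder of the list is filled by the usual hash-seeded random split. The rows of \(\mathbf{M}\) are computed on the fly instead of precomputing the full \(|V|\times|V|\) matrix, and the toy embeddings and seeding scheme are simplified assumptions (a reasonably recent PyTorch is assumed for `torch.isin`).

```python
import torch

def semantic_green_list(input_ids, embedding, k=5, gamma=0.5,
                        prev_token=0, hash_key=15485863):
    """Green-list construction in the spirit of Algorithm 1: tokens most similar to
    the input (embedding cosine similarity) are forced into the green list; the rest
    of the list is filled by the usual hash-seeded random split of the vocabulary."""
    vocab_size = embedding.shape[0]
    emb = torch.nn.functional.normalize(embedding, dim=-1)
    sims = emb[input_ids] @ emb.T                            # rows of the similarity matrix M
    semantic = torch.topk(sims, k, dim=-1).indices.flatten().unique()
    gen = torch.Generator().manual_seed(hash_key * int(prev_token))
    perm = torch.randperm(vocab_size, generator=gen)
    remaining = perm[~torch.isin(perm, semantic)]            # exclude the semantic tokens
    n_random = max(int(gamma * vocab_size) - len(semantic), 0)
    return torch.cat([semantic, remaining[:n_random]])       # green list of size ~ gamma*|V|

# toy example: 1000-token vocabulary, 32-dim embeddings, a 6-token input sequence
embedding = torch.randn(1000, 32)
green = semantic_green_list(torch.tensor([3, 17, 256, 4, 99, 512]), embedding)
print(len(green))
```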
## 4 Experiments and Results
This section provides an overview of the datasets and models utilized in the experiments. We also present the main experimental results, including both automatic and human evaluations.
### Datasets and Models
We conducted experiments to assess the generalization ability of our proposed method by utilizing models with different parameter sizes and architectures, including BART-base, BART-large (Lewis et al., 2020), Flan-T5-small, and Flan-T5-base (Chung et al., 2022). Our focus was on two distinct conditional text generation tasks: summarization - CNN/DailyMail (See et al., 2017) and XSUM (Narayan et al., 2018), and data-to-text generation - DART (Nan et al., 2021) and WebNLG (Gardent et al., 2017). These datasets are widely recognized for evaluating text summarization and data-to-text generation models, respectively. By conducting comprehensive evaluations across multiple datasets, tasks, and models, our objective was to thoroughly compare the differences between the original watermarking algorithm (Kirchenbauer et al., 2023) and our proposed semantic-aware watermarking approach.
### Main Results
Our main experimental results are presented in Table 1. The summarization task was evaluated using the ROUGE metric (Lin, 2004), while the data-to-text generation task was evaluated using BLEU (Papineni et al., 2002). The table illustrates the performance of the models under various watermarking methods, highlighting the enhancements achieved by incorporating semantic constraints in watermarking for both the summarization and data-to-text generation tasks. Our proposed semantic-aware watermark method exhibits significant improvements in comparison to the original watermark method across all datasets and models.
Additionally, we observe that hard watermarks invariably cause a greater decline in CTG performance compared to soft watermarks. The hard watermarks designed for language models (Kirchenbauer et al., 2023) essentially completely forbid generation from the red list that might contain key input context, potentially leading to near-ineffective generations with almost no overlap with the reference generations. For example, in the data-to-text generation task, the original hard watermark method adversely affects Flan-T5-small's performance on WebNLG, resulting in a decrease of over 57.97 BLEU points, a 97.0% performance drop. In contrast, our semantic-aware watermark effectively mitigates the impact of adding the watermark, demonstrating a 39.09 BLEU point increase over the original watermark, a performance improvement of 21.67 times.
More notably, on the CNN/DailyMail dataset, our semantic-aware watermarking method applied to the Flan-T5-small and Flan-T5-base models not only mitigates the drawbacks of watermark injection but also surpasses the performance of the original generation without watermark. This can be credited to the nature of the summarization task, where a considerable amount of the target information is already present in the input. The semantic-aware watermark method enhances the generation process by effectively harnessing this input, enabling it to capture the essential details for creating high-quality summaries. This synergy between input and target data contributes to the superior performance of the Flan-T5-small and Flan-T5-base models when utilizing the semantic-aware watermark method in summarization tasks.
**Human Evaluation** In addition, we conducted a human evaluation comparing BART-base with the original and our proposed watermarks on the XSUM dataset. The human judges1 were presented with reference summaries and generations from different watermarking algorithms in a random and anonymized order. The judges were asked to evaluate which system's summary was better and more similar to the reference. They were instructed to read the source article only when they were unable
\begin{table}
\begin{tabular}{l l l l l l|l l l l} \hline \hline Dataset & Model & Method & R-1 & R-2 & R-L & Dataset & Model & Method & BLEU \\ \hline \multirow{8}{*}{**CNN**} & & NW & 43.80 & 20.88 & 40.73 & \multirow{8}{*}{BART-large} & \multirow{8}{*}{BART-large} & NW & 47.78 \\ & & OW (Hard) & 33.38 & 8.73 & 30.61 & & & OW (Hard) & 6.65 \(\downarrow 86.1\%\) \\ & & SW (Hard) & **43.46** & **20.75** & **40.45** & & & & & **41.04**\(\downarrow 14.1\%\) \\ & & OW (Soft) & 42.46 & 18.33 & 39.52 & & & & OW (Soft) & 37.06 \(\downarrow 22.4\%\) \\ & & SW (Soft) & **43.50** & **20.83** & **40.62** & & & & & **44.04**\(\downarrow 7.8\%\) \\ \cline{2-10} & & NW & 41.78 & 19.57 & 38.66 & & & & & & **49.55** \\ & & OW (Hard) & 24.47 & 5.60 & 22.48 & & & & & **49.2**\(\downarrow 89.2\%\) \\ & & Flan-T5-base & SW (Hard) & **41.80** & **19.80** & **38.72** & & & & & **48.36**\(\downarrow 28.6\%\) \\ & & OW (Soft) & 38.60 & 16.29 & 35.90 & & & & & **49.19**\(\downarrow 20.9\%\) \\ & & SW (Soft) & **41.90** & **19.86** & **38.80** & & & & & **44.18**\(\downarrow 10.8\%\) \\ \hline \multirow{8}{*}{**XSUM**} & & NW & 45.25 & 22.15 & 37.03 & \multirow{8}{*}{BART-large} & NW & 57.18 & \multirow{8}{*}{BART-large} & NW & 57.18 & \multirow{8}{*}{9.25 \(\downarrow 83.8\%\)} \\ & & OW (Hard) & 29.60 & 7.15 & 20.83 & & & & & **48.02**\(\downarrow 16.0\%\) \\ & & SW (Hard) & **42.44** & **18.64** & **33.91** & & & & & **48.02**\(\downarrow 16.0\%\) \\ & & OW (Soft) & 40.07 & 16.51 & 31.50 & & & & & **44.58**\(\downarrow 22.1\%\) \\ & & SW (Soft) & **43.83** & **20.39** & **35.42** & & & & & **52.50**\(\downarrow 8.2\%\) \\ \cline{2-10} & & NW & 39.51 & 16.92 & 31.90 & & & & & **49.77** \\ & & OW (Hard) & 22.98 & 4.80 & 16.66 & & & & & **49.89**\(\downarrow 31.6\%\) \\ & & SW (Hard) & **37.67** & **14.69** & **29.94** & & & & & **48.42**\(\downarrow 24.0\%\) \\ & & OW (Soft) & 35.23 & 12.58 & 27.52 & & & & & **49.2**\(\downarrow 10.9\%\) \\ & & SW (Soft) & **38.79** & **15.91** & **31.03** & & & & & **49.77**\(\downarrow 10.9\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Main results of comparing different watermarking strategies across various datasets and models. NW (no watermark) serves as the baseline, and adding a watermark is expected to decrease performance to trade-off detection. OW (original watermark) denotes the use of the Soft or Hard watermark (Kirchenbauer et al., 2023) with hyperparameters \(\gamma=0.5\) and \(\delta\in\{2,10\}\). Our proposed SW (semantic-aware watermark) approach employs semantically related tokens to partition the green and red lists, with hyperparameters \(k=1/2/5/10\), while keeping the same values of \(\gamma\) and \(\delta\) to ensure a fair comparison.
to decide or needed additional information2.
Footnote 2: We made the decision to make reading the source article optional for the judges in order to prevent creating a significant cognitive burden and to encourage them to take shortcuts.
Table 2 presents the results of the human evaluation. With a confidence level of 95% and one-sided A/B tests, the semantic-aware watermark exhibits a significantly higher preference according to human judges (\(p=0.0358\)). Specifically, the preference for the semantic-aware watermark (55.33%) surpasses that of the original watermark (48.00%) by a substantial margin of 15.28%. Moreover, pairwise inter-annotator agreement was assessed, resulting in agreement percentages of 70%, 66%, and 54% for the respective evaluations. These findings strongly support the effectiveness of the semantic-aware watermark method, highlighting its ability to enhance the quality of summarization outputs.
### Watermark Strength and Detection
To evaluate the quality of watermarking for detection, we followed established research (Kirchenbauer et al., 2023; Yang et al., 2023) and assessed the strength using the average \(z\)-score and the area under the curve (AUC) score. Figure 2 and Figure 3 present the \(z\)-score and AUC results, respectively.
A higher \(z\)-score generally indicates a greater presence of tokens from the "green list" in the generated results, increasing the likelihood of successful detection. However, in the context of conditional text generation tasks, maintaining consistency in the length of the generated results with the original model is crucial. It has been observed that the \(z\)-score tends to increase with the length of the generated text (Kirchenbauer et al., 2023). To address this, we introduce an additional penalty term to the \(z\)-score, incorporating the ratio of the average length of the generated results to the average length of the original model's output without the watermark.
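For reference, the detection statistic itself is straightforward to compute; the sketch below implements the one-proportion \(z\)-test on the counted green-list tokens. The length penalty discussed above, whose exact form is not spelled out here, is only noted in a comment.

```python
import math

def watermark_z_score(num_green, num_tokens, gamma=0.5):
    """One-proportion z-test: deviation of the observed green-token count from the
    gamma * T tokens expected for un-watermarked (human) text."""
    expected = gamma * num_tokens
    return (num_green - expected) / math.sqrt(num_tokens * gamma * (1 - gamma))

# e.g. 45 green tokens out of 60 with gamma = 0.5 (z > 4 is the strength quoted in Fig. 1)
print(round(watermark_z_score(45, 60), 2))  # ~ 3.87
# The length penalty described above would additionally rescale this score by a ratio of
# generated to un-watermarked output length; its exact form is not reproduced here.
```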
As seen in Figure 2, the semantic-aware watermark method significantly outperforms its counterpart in terms of \(z\)-score, reflecting a higher inclusion of "green list" tokens in the generated output. Under normal circumstances, an elevated average \(z\)-score should boost detectability (Kirchenbauer et al., 2023). Yet, as Figure 3 illustrates, the AUC curve for the original watermark method surpasses ours. This paradox suggests another challenge in applying watermarks for CTG: **the prevalent human habit of using input-similar tokens for CTG adds complexity to the detection of watermarks**. Our method, despite showing remarkable improvements in ROUGE metrics and hence bearing closer resemblance to the reference, contributes to a slight dip in the final AUC scores. This scenario indicates a trade-off between enhancing the ROUGE score, indicative of increased similarity to the reference, and preserving detectability. Notwithstanding this, our empirical results compellingly argue that the significant rise in performance (up to \(\sim 2167\%\)) outweighs the detection decreases (Avg. \(\sim 12.6\%\)); further increasing this advantage margin remains an area for future exploration.
\begin{table}
\begin{tabular}{l c c c|c} \hline \hline SW (ours) vs. OW & Judge 1 & Judge 2 & Judge 3 & Avg. \\ \hline SW (ours) preferred & 58\% & 54\% & 54\% & 55.33\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Human evaluation results on 100 randomly sampled examples, accompanied by generations from BART-base with original or semantic-aware watermarks, presented in a random and anonymized order. Each example was independently annotated by three annotators, resulting in an average pairwise inter-annotator agreement of 63.33%.
Figure 3: Watermark detection: AUC scores under different \(\delta\) settings. Higher AUC scores indicates a better detection performances.
Figure 2: Watermark detection: average \(z\)-score under different \(\delta\) settings (x-axis). Higher \(z\)-scores indicate stronger watermark detection confidence. We can see that hard watermarks are easier to detect but lead to a more significant decline in CTG performance.
## 5 Analysis
This section analyzes the hyperparameters, focusing on: \(k\), introduced by our semantic watermark; \(\gamma\) and \(\delta\), inherited from Kirchenbauer et al. (2023).
### Semantic \(k\) Analysis
The semantic-aware watermark uses a hyperparameter, \(k\), to determine how many semantically related tokens, derived from word embedding similarities during decoding, are integrated into the green list. Table 3 shows that **increasing \(k\) in semantic-aware watermarks improves the CTG performance**. We hypothesize that this improvement stems from the fact that increasing \(k\) includes more reference tokens in the green list, leading to a broader coverage of the tokens that humans typically use in CTG.
To validate our hypothesis and study the relationship between \(k\) and target token coverage, we carried out experiments by measuring the overlaps between semantically related tokens and the reference target tokens under different \(k\) values. Figure 4 (left) presents curves, which, with increasing \(k\), demonstrate a correlation with an increased proportion of target unigram text tokens covered by semantically related tokens.
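The coverage measurement behind Figure 4 can be reproduced schematically as follows; the random embeddings and token ids are placeholders for a real model vocabulary and source/target pair.

```python
import torch

def target_coverage(input_ids, target_ids, embedding, k):
    """Fraction of target unigram tokens contained in the semantic set built from the
    k nearest embedding neighbours of each input token (cf. Figure 4, left)."""
    emb = torch.nn.functional.normalize(embedding, dim=-1)
    sims = emb[input_ids] @ emb.T
    semantic = set(torch.topk(sims, k, dim=-1).indices.flatten().tolist())
    targets = set(target_ids.tolist())
    return len(targets & semantic) / max(len(targets), 1)

embedding = torch.randn(1000, 32)                  # placeholder embeddings
src, tgt = torch.randint(0, 1000, (20,)), torch.randint(0, 1000, (15,))
for k in (1, 2, 5, 10):
    print(k, round(target_coverage(src, tgt, embedding, k), 3))
```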
Interestingly, when we adjust the setup to measure the relative percentage of coverage increase with higher \(k\) values, we observe different trends for various CTG tasks. Figure 4 (right) indicates that watermarks with larger \(k\) values have a more significant performance improvement impact on data-to-text generation tasks compared to summarization tasks. This observation is also reflected in the findings that an increased \(k\) leads to substantial improvements in BLEU scores for data-to-text generation, compared to the ROUGE score improvements for summarization (Appendix A). Specifically, DART and WEBNLG show greater sensitivity to \(k\), where its increase yields better results.
### \(\gamma\) and \(\delta\) Analysis
The soft watermark method (Kirchenbauer et al., 2023) depends on two hyperparameters: \(\gamma\) and \(\delta\). \(\gamma\) regulates the size of the green list during partitioning, whereas \(\delta\) dictates the intensity of watermarks applied to the logits of green list tokens. Essentially, a very large \(\delta\) (e.g., 10) is equivalent to the hard watermark that entirely prohibits tokens from the red list from being generated. This section compares original and semantic-aware watermarks under varying \(\gamma\) and \(\delta\) values, demonstrating that our proposed watermark consistently outperforms the original across different hyperparameter settings.
Increasing \(\gamma\) incorporates more words into the green list, typically lessening the watermark's impact on model performance. Surprisingly, Table 3
Figure 4: The coverage of target tokens by semantically related tokens varies with different datasets and values of the hyperparameter \(k\) on BART-base. Increasing the value of \(k\) improves the coverage of semantic tokens, aligning with our objective and motivation.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & \multicolumn{3}{c}{BLEU} \\ \cline{2-4} \(\gamma\) & \(\gamma=0.25\) & \(\gamma=0.5\) & \(\gamma=0.75\) \\ \hline NW & 45.90 & - & - \\ \hline \hline OW & 37.32 & 35.99 & 39.01 \\ SW (k=1) & 37.23 & 38.46 & 41.36 \\ SW (k=2) & 38.10 & 39.29 & 42.01 \\ SW (k=5) & 38.87 & 38.63 & 42.24 \\ SW (k=10) & **41.37** & **42.89** & **44.59** \\ \hline \hline \end{tabular}
\end{table}
Table 3: The effect of the hyperparameter \(k\) on the results of the DART dataset using the BART-base with \(\gamma\in\{0.25,0.5,0.75\}\) and \(\delta=2\).
shows that the original watermark method performs poorly when \(\gamma=0.5\). To further explore possible reasons for this and to test our methods under different setups, we conducted a comparative analysis with varying \(\gamma\) and \(\delta\) set to 2, 5, and 10. Figure 5 indicates that the semantic-aware watermark **consistently** outperforms the original watermark, except when \(\delta\) is set to 2 with relatively small \(\gamma\) values. Decreasing \(\gamma\) reduces the number of selected and enhanced tokens due to the smaller green list size. As a result, the model's performance is expected to gradually decrease with a smaller watermark. However, the change curve of the original method in the \(\gamma<0.2\) range deviates from these expectations.
We hypothesize that this irregularity arises from the negligible impact of the soft watermark when \(\gamma\) is small. This happens when soft watermarks with an extremely small green list scarcely affect logit predictions. To confirm this, we examined the impact of varying \(\delta\) on the BART-base model's performance using the DART dataset under extremely small \(\gamma\), as shown in Figure 6. We observe that when \(\gamma\) is set extremely low (\(\gamma=0.05\)) in the soft watermark settings (i.e., \(\delta\) < 4), there is hardly any performance trade-off upon adding watermarks, suggesting ineffective watermarks for detection.
In addition, to ensure that semantically related tokens included in the green list for the semantic-aware watermark do not negatively affect the performance, especially the ones obtained with a large \(k\), we calculate the percentage of these semantically related tokens relative to the overall vocabulary size. Table 4 reveals that it is significantly lower than the green list size dictated by \(\gamma\).
## 6 Conclusion
Our study reveals a significant performance drop when random watermarks are directly applied to conditional text generation tasks without considering the task-specific context. To tackle this challenge, we propose a semantic-aware watermarking algorithm that incorporates hash functions and carefully takes into account the input context of conditional generation tasks. We extensively evaluated our method on diverse datasets and models, including summarization, data-to-text generation, and various text generation models like BART and Flan-T5. The results demonstrate that our proposed method effectively mitigates the quality degradation associated with watermark techniques, as confirmed by both automatic and human evaluations. These findings emphasize the importance of task-specific approaches when applying watermarking methods to ensure optimal performance in conditional text generation tasks.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Dataset & 1 & 2 & 5 & 10 \\ \hline DART & 0.0004 & 0.0009 & 0.0020 & 0.0037 \\ WebNLG & 0.0005 & 0.0009 & 0.0022 & 0.0039 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The percentage of semantically related tokens to the size of the vocabulary \(V\).
Figure 5: The impact of \(\gamma\) on DART results with settings of \(\delta=2/5/10\). \(\gamma\) controls the size of the green list. From \(\delta=2\) to \(\delta=5\), the watermarking method tends to change from a soft watermark to a hard watermark, and the probability of generating tokens from the green list gradually increases.
Figure 6: The impact of \(\delta\), which controls the extent of enhancement applied to the logits, on the DART results.
## Limitations
One limitation that we did not address in our study, which we leave for future work, is how our approach handles different types of attacks for AI detection, specifically paraphrasing attacks. In addition, while our approach has shown significant improvements in downstream performance, we have also observed a slight compromise in the detection sensitivity. This trade-off can be attributed to the fact that humans often use tokens similar to the input for generation, making it more challenging to detect our semantic-aware watermark. While our research clearly shows that performance improvements outweigh the detection decreases, the challenge of further expanding this margin of advantage remains a topic for future exploration.
|
2310.10716 | $W$ state is not the unique ground state of any local Hamiltonian | The characterization of ground states among all quantum states is an
important problem in quantum many-body physics. For example, the celebrated
entanglement area law for gapped Hamiltonians has allowed for efficient
simulation of 1d and some 2d quantum systems using matrix product states. Among
ground states, some types, such as cat states (like the GHZ state) or
topologically ordered states, can only appear alongside their degenerate
partners, as is understood from the theory of spontaneous symmetry breaking. In
this work, we introduce a new class of simple states, including the $W$ state,
that can only occur as a ground state alongside an exactly degenerate partner,
even in gapless or disordered models. We show that these states are never an
element of a stable gapped ground state manifold, which may provide a new
method to discard a wide range of 'unstable' entanglement area law states in
the numerical search of gapped phases. On the other hand when these degenerate
states are the ground states of gapless systems they possess an excitation
spectrum with $O(1/L^2)$ finite-size splitting. One familiar situation where
this special kind of gaplessness occurs is at a Lifshitz transition due to a
zero mode; a potential quantum state signature of such a critical point. We
explore pathological parent Hamiltonians, and discuss generalizations to higher
dimensions, other related states, and implications for understanding
thermodynamic limits of many-body quantum systems. | Lei Gioia, Ryan Thorngren | 2023-10-16T18:00:01Z | http://arxiv.org/abs/2310.10716v2 | # \(W\) state is not the unique ground state of any local Hamiltonian
###### Abstract
The characterization of ground states among all quantum states is an important problem in quantum many-body physics. For example, the celebrated entanglement area law for gapped Hamiltonians has allowed for efficient simulation of 1d and some 2d quantum systems using matrix product states. Among ground states, some types, such as cat states (like the GHZ state) or topologically ordered states, can only appear alongside their degenerate partners, as is understood from the theory of spontaneous symmetry breaking. In this work, we introduce a new class of simple states, including the \(W\) state, that can only occur as a ground state alongside an _exactly_ degenerate partner, even in gapless or disordered models. We show that these states are never an element of a stable gapped ground state manifold, which may provide a new method to discard a wide range of 'unstable' entanglement area law states in the numerical search of gapped phases. On the other hand when these degenerate states are the ground states of gapless systems they possess an excitation spectrum with \(O(1/L^{2})\) finite-size splitting. One familiar situation where this special kind of gaplessness occurs is at a Lifshitz transition due to a zero mode; a potential quantum state signature of such a critical point. We explore pathological parent Hamiltonians, and discuss generalizations to higher dimensions, other related states, and implications for understanding thermodynamic limits of many-body quantum systems.
On a length \(L\) 1d chain of spin-\(\frac{1}{2}\)'s, the \(W\) state is defined as \(|W_{1}\rangle=\frac{1}{\sqrt{L}}\left(|10...0\rangle+|010...0\rangle+...+|0...01\rangle\right)\). This state has received much attention in the literature [1; 2; 3; 4; 5], as it is representative of a class of states distinct both from short-range entangled states and from familiar long-range entangled states such as macroscopic superpositions (e.g. GHZ states) and topologically ordered states (e.g. toric code states), for instance in the entanglement classification [6]. It has also become a target for state-preparation protocols on existing and near-term quantum hardware [7; 8; 9]. In this letter, we provide a sharp characterization in the form of a no-go theorem, which roughly says the \(W\) state (and its relatives, which we dub _whānau-states_) is a ground state of a local Hamiltonian only if the all-zero state \(|0\rangle\) is as well. This provides a barrier for adiabatic preparation of the \(W\) state despite its low area-law entanglement entropy, the existence of a finite bond dimension matrix-product state description [5; 10], and the general 'simplicity' of the state. We also describe some condensed matter implications of the theorem, which turns out to be deeply intertwined with the physics of quantum Lifshitz transitions [11], and challenges our current concepts of the thermodynamic limit.
Our results apply to a very broad class of Hamiltonians which includes the usual translation-invariant (with any unit cell) or disordered Hamiltonians studied in condensed matter, as well as more exotic Hamiltonians with domain walls and other defects inserted in particular ways. However, it rules out some pathological Hamiltonians we consider later in the paper, such as those whose coefficients depend explicitly on \(L\) or all-to-all Hamiltonians. Specifically, we define a (infinite or half-infinite, periodic or open) _finite-range 1d Hamiltonian system_ to be a sequence of Hilbert spaces \(\mathcal{H}_{k}\) and a sequence of bounded norm operators \(h_{k}\) acting on \(\bigotimes_{j=k-l}^{k+l}\mathcal{H}_{j}\) (for some fixed range \(l\)), where either \(k\in\mathbb{Z}\) in the infinite case or \(k\in\mathbb{Z}_{>0}\) in the half-infinite case. For each system size \(L\), these define Hamiltonians \(H_{L}=\sum_{k=-\lfloor(L-1)/2\rfloor}^{\lceil(L-1)/2\rceil}h_{k}\) or \(H_{L}=\sum_{k=1}^{L}h_{k}\), on each length \(L\) chain \(\mathcal{H}(L)=\bigotimes_{k}\mathcal{H}_{k}\) (with ranges as above), with either periodic or open boundary conditions, where in the periodic case, we let the action of \(h_{k}\) within \(l\) of the edges of the range wrap around the chain. Note that the periodic case requires some identification between the local Hilbert spaces \(\mathcal{H}_{k}\).
Consider a length \(L\) 1d chain of spin-\(\frac{1}{2}\)'s. Starting with the "all-zero" state \(|0\rangle\), defined by \(Z_{j}|0\rangle=|0\rangle\) for all \(j\), for each \(n\), we can further define the generalization of the \(W\) state to be
\[|W_{n}\rangle=\binom{L}{n}^{-1/2}\sum_{i_{1}<\cdots<i_{n}}X_{i_{1}}\cdots X_ {i_{n}}|0\rangle, \tag{1}\]
where \(|W_{1}\rangle\) is simply the \(W\) state. A state \(|\psi_{L}\rangle\) defined on a sequence of length-\(L\) Hilbert spaces as above, such as \(|W_{n}\rangle\), is a ground state of a local Hamiltonian system if, for some large enough sequence of \(L\), \(|\psi_{L}\rangle\) is a lowest-energy eigenstate of \(H_{L}\). With these definitions, we can state our main result:
**Theorem 1**.: _The \(W\) state (and more generally \(|W_{n}\rangle\)) is not the unique ground state of any finite-range 1d Hamiltonian system. In particular, if it is a ground state, \(|0\rangle\) is also an exactly degenerate ground state._
The idea of the proof is that any negative energy the \(W\) state has relative to \(|0\rangle\) must be associated with the single 1. When we study \(|W_{2}\rangle\), it has two 1s, which are likely far apart and each contribute the same negative energy, so \(|W_{2}\rangle\) must therefore have even less energy than the \(W\) state, meaning the \(W\) state could not have been the unique ground state.
The previous best result is that the gap above the \(W\) state scales as \(O(1/L^{3/2})\)[12]. We show that beyond the _exact_ degeneracy with \(|0\rangle\), by studying the "boosted \(W\) states" we find an \(O(1/L^{2})\) excitation spectrum.
## II Proof of the no-go theorem
Let us now give the proof of the no-go theorem regarding the \(|W_{n}\rangle\) states. We will give two propositions that capture what is really special about the \(W_{n}\) states and then synthesize them into a proof of the theorem. First, we will show that in the large \(L\) limit, the expected energy differences between \(|W_{n}\rangle\) and \(|W_{n+1}\rangle\) are equal for all \(n\). In particular, we have the following:
**Proposition 1**.: _Consider a finite-range 1d Hamiltonian system with Hamiltonians \(H_{L}\). (We do not assume \(|0\rangle\) or \(|W_{n}\rangle\) are eigenstates.) Define_
\[\Delta_{1}:=\langle 0|H_{L}|0\rangle-\langle W_{1}|H_{L}|W_{1}\rangle, \tag{2}\]
_then for all \(n>1\),_
\[\Delta_{n}:=\langle 0|H_{L}|0\rangle-\langle W_{n}|H_{L}|W_{n}\rangle=n\Delta_ {1}+O(1/L). \tag{3}\]
The idea of the proof is to write \(\Delta_{1}\) as a sum of contributions from where the \(X_{i}\)'s are inserted in \(|0\rangle\). Then, since the expected energy is local, we only get nonzero contributions when \(h_{k}\) is near where the \(X_{i}\)'s are inserted. Then, when we compute \(\Delta_{n}\), most of the contribution is from when the \(X_{i}\) are distantly separated, so each will contribute independently a factor of \(\Delta_{1}\), up to errors going to zero with \(L\). The proof is detailed in the Supplemental material [13].
Another very special property of the \(W\)-state is:
**Proposition 2**.: _If \(|W_{n}\rangle\) is an eigenstate, then \(\Delta_{n}\) is independent of \(L\) for \(L>n+2l\)._
Proof.: Let us first demonstrate the case for \(n=1\).
It is convenient to write the Hamiltonian in normal ordered form
\[H_{L}=\sum_{k}:h_{k}:+C_{k}, \tag{4}\]
where \(\langle 0|:h_{k}:|0\rangle=0\). We are free to set \(C_{k}=0\) since we are only interested in energy differences. Then \(H_{L}|W_{1}\rangle=\Delta_{1}|W_{1}\rangle\). Let us focus on one term \(\frac{\Delta_{1}}{\sqrt{L}}X_{i}|0\rangle\) which must appear in \(H_{L}|W_{1}\rangle\). For \(j\) with \(|j-i|>2l+1\),
\[\langle 0|X_{j}H_{L}X_{i}|0\rangle=0, \tag{5}\]
(this is the special property of \(|W_{1}\rangle\)) so this term must come from \(H_{L}\) applied to
\[\frac{1}{\sqrt{L}}\sum_{j=i-l}^{i+l}X_{j}|0\rangle. \tag{6}\]
We can isolate it by forming the matrix element (which also takes care of the normalization), giving
\[\Delta_{1}=\langle 0|X_{i}H_{L}\sum_{j=i-l}^{i+l}X_{j}|0\rangle. \tag{7}\]
Finally, since \(H\) is composed of finite range, normal ordered terms, we can discard pieces that cannot connect \(i\) and \(j\), so
\[\Delta_{1}=\langle 0|X_{i}\sum_{k=i-l}^{i+l}\sum_{j=i-l}^{i+l}:h_{k}:X_{j}|0\rangle, \tag{8}\]
which is manifestly independent of \(L\) when \(L>2l+1\).
To prove the case for general \(n\), we look instead at a particular configuration of 1's, such as when they are all next to each other, by studying the piece \(X_{i}X_{i+1}\cdots X_{i+n-1}|0\rangle\) appearing in \(|W_{n}\rangle\). The proof goes through as above.
Combining these two propositions, we can finish the proof of Theorem 1:
Proof.: Suppose towards a contradiction that for some \(n>0\), and all large enough \(L\), \(|W_{n}\rangle\) is the unique lowest-energy eigenstate. This means \(\Delta_{n}\) is eventually strictly positive, and in fact by Proposition 2 it is eventually a positive constant. By Proposition 1, we thus see \(\Delta_{1}=\frac{1}{n}\Delta_{n}+O(1/L)\) is eventually positive. Using Proposition 1 again,
\[\begin{split}\langle W_{n}|H_{L}|W_{n}\rangle-\langle& W_{n+1}|H_{L}|W_{n+1}\rangle=\Delta_{n+1}-\Delta_{n}\\ &=\Delta_{1}+O(1/L)\end{split} \tag{9}\]
is also eventually positive, so \(|W_{n+1}\rangle\) has even lower energy than \(|W_{n}\rangle\), a contradiction!
Therefore, if \(|W_{n}\rangle\) is a ground state, \(\Delta_{n}=0\), so \(|0\rangle\) is also a ground state.
The argument above may seem in contradiction with the existence of _any_ ground state of \(H\) if \(\Delta_{1}>0\). However, there is a trick with the order of limits. The error in the formula for \(\Delta_{n}\) is \(O(1/L)\) (unless they are eigenstates) but also grows linearly with \(n\), so the true ground state in the thermodynamic limit is one with some nonzero "charge density" \(n/L\), which is what we physically expect. We return to this point below.
Also, it is very important in Proposition 2 that \(|W_{n}\rangle\) is assumed to be an eigenstate. Only this way can we
compute \(\Delta_{n}\) locally. When it is not an eigenstate, computing \(\Delta_{n}\) requires an average over the whole system, which introduces \(L\) dependence. Without the eigenstate assumption we would conclude that all \(|W_{n}\rangle\)'s must be degenerate; however, we demonstrate below a Hamiltonian with only \(|0\rangle\) and the \(W\)-state as its two ground states.
If we relax the finite-range condition, and just ask that the terms \(h_{k}\) fall off with some prescribed decay (exponential or power law) away from site \(k\), we expect our results will still hold, up to finite-size splittings with the same decay. Higher-dimensional generalizations apply straightforwardly to states such as \(\frac{1}{L^{d/2}}\sum_{i}X_{i}|0\rangle\), where \(i\) ranges over a \(d\)-dimensional lattice.
## III Consequences
In this Section we present some immediate consequences of Theorem 1 for both gapless and gapped Hamiltonians.
### Gapless Hamiltonians & Lifshitz transitions
One interesting question is what sort of low-energy excitations exist as a consequence of having the \(|W_{1}\rangle\) state as a ground state. One may show the following statement, where the proof is given in the Supplementary materials [13],
**Corollary 1**.: _For local Hamiltonians with ground state \(|W_{1}\rangle\), the "boosted \(W\)-states" \(|W_{1}^{(m)}\rangle\equiv e^{i\frac{2\pi}{L}m\sum_{i}x_{i}\hat{n}_{i}}|W\rangle\) with momentum boost \(2\pi m/L\) (\(m\) independent of \(L\)) lie at most \(O(1/L^{2})\) in energy expectation value above the ground state._
For \(U(1)\) and translation symmetric Hamiltonians this statement becomes even stronger as it implies that there are \(O(1/L^{2})\) low-energy eigenstates. The easiest demonstration of this concept is given by the following free fermion Hamiltonian
\[H =-\frac{1}{2}\sum_{i}\left[c_{i}^{\dagger}c_{i+1}+c_{i+1}^{ \dagger}c_{i}\right]+\sum_{i}c_{i}^{\dagger}c_{i},\] \[=\sum_{k}\left(1-\cos k\right)\,c_{k}^{\dagger}c_{k}. \tag{10}\]
The energy dispersion in momentum space is \(\varepsilon(k)=1-\cos k\), as shown in Fig. 1(b). Here we see that the \(|W_{1}\rangle\) state is a zero mode with the same energy as the empty \(|0\rangle\) state. Together they form the smallest ground state manifold possible for the \(|W_{1}\rangle\) state, as necessitated in Theorem 1. The low energy excitations of this Hamiltonian are indeed the momentum \(k=2\pi m/L\) single particle excitations \(c_{k}^{\dagger}|0\rangle\) which have quadratic dispersion for small \(k\), with \(k_{\min}\) scaling as \(1/L\), leading to an \(O(1/L^{2})\) gap.
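The scaling can be checked directly by diagonalizing the single-particle hopping matrix of Eq. (10) on a ring; the snippet below is a minimal numerical illustration (the system sizes are arbitrary).

```python
import numpy as np

# Single-particle sector of Eq. (10) on a ring of L sites.
for L in (16, 32, 64, 128):
    h = np.eye(L)                                          # on-site term +1
    for i in range(L):
        h[i, (i + 1) % L] = h[(i + 1) % L, i] = -0.5       # hopping -1/2
    evals = np.sort(np.linalg.eigvalsh(h))
    gap = evals[1] - evals[0]          # gap above the k = 0 zero mode (the W state)
    print(L, round(gap, 6), round(gap * L**2, 3))          # gap * L^2 approaches 2*pi^2
```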
This Hamiltonian is special as it represents a critical point, known as a Lifshitz transition, where the Fermi energy precisely touches the bottom of a quadratic dispersion. With these observations in mind we propose a new quantum state signature of a Lifshitz transition
**Conjecture 1**.: _If a translation invariant Hamiltonian is tuned such that a zero-mode \(|W_{1}\rangle\)-like state, i.e. a state of the form \(\frac{1}{\sqrt{L}}\sum_{i}e^{i\frac{2\pi}{L}mx_{i}}X_{i}|\alpha\rangle^{\otimes L}\) where \(m\in\mathbb{Z}\), and a product state \(|\alpha\rangle^{\otimes L}\) become the ground states of the system, then the system is at a Lifshitz transition._
This signature would cover the simple types of Lifshitz transitions, such as when a state transitions from an insulator to a metal or vice versa, but it does not encapsulate other instances, such as when a metal or semimetal changes its Fermi surface shape via Van Hove singularities or Dirac lines [11]. We suspect these more complicated transitions may also have an interpretation in terms of the \(|W_{1}\rangle\)-like zero-mode degeneracy; however, the precise statement remains to be formulated. Our conjecture is in alignment with the fact that at these critical points a quadratic (or higher) dispersion necessarily occurs.
### Gapped Hamiltonians
Some simple consequences also follow for gapped Hamiltonians for which \(|W_{1}\rangle\) is a ground state. Here we present the most interesting result with more fun facts in the Supplementary materials [13].
A key question is whether the \(W\) state can belong to the ground state manifold of a stable gapped phase. Stability of the ground state degeneracy is essential for defining such phases. Generally one desires that the ground state degeneracy is exponentially stable in system size, i.e. a perturbation of magnitude \(\lambda\ll 1\) creates an exponentially small energy splitting of the degeneracy (such as \(O(\lambda^{L})\)), as is the case for topological orders and fractons [14; 15; 16]. However, the \(|W_{1}\rangle\) degeneracy with \(|0\rangle\) is never stable in this sense:
**Corollary 2**.: _The \(|W_{1}\rangle\) state is never a part of a stable gapped ground state manifold, i.e. there exists a perturbation of magnitude \(\lambda\) that can lift the ground state degeneracy by an energy proportional to \(\lambda\)._
Proof.: We have previously shown that \(|0\rangle\) must be in the ground state manifold if \(|W_{1}\rangle\) is a ground state. Since this is the case, we may simply add a perturbation \(\delta H=-\lambda\sum_{i}Z_{i}\) to create an energy gap of \(\langle W_{1}|H_{0}+\delta H|W_{1}\rangle-\langle 0|H_{0}+\delta H|0\rangle=2\lambda\).
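The \(2\lambda\) splitting can be confirmed with a few lines of code (a toy check for a small chain, with \(L\) chosen arbitrarily):

```python
import numpy as np

L = 8
zero = np.zeros(2**L); zero[0] = 1.0                                  # |00...0>, all Z_i = +1
w1 = np.zeros(2**L); w1[[1 << i for i in range(L)]] = 1 / np.sqrt(L)  # the W state

def total_Z(psi):
    # eigenvalue of sum_i Z_i on basis state b is L - 2 * (number of flipped spins)
    zvals = np.array([L - 2 * bin(b).count("1") for b in range(2**L)])
    return np.dot(np.abs(psi)**2, zvals)

# With the perturbation -lambda * sum_i Z_i, the splitting is
# lambda * (total_Z(zero) - total_Z(w1)), which evaluates to 2 * lambda.
print(total_Z(zero) - total_Z(w1))   # prints 2.0
```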
It follows that these states should be excluded from numerical searches of gapped ground state phases, despite their area law entanglement entropy.
## IV Pathological parent Hamiltonians
In this section we demonstrate the limits of Theorem 1 by constructing Hamiltonians that violate its assumptions and possess the \(|W_{1}\rangle\) state as the unique ground state.
### Explicit length dependence of parameters
One key assumption of Theorem 1 is that, as one takes the thermodynamic limit of the Hamiltonian, the addition of new terms does not change the original terms, i.e. there is no intrinsic \(L\)-dependence of the individual Hamiltonian parameters. If we break this assumption we can arrive at a Hamiltonian for which \(|W_{1}\rangle\) is the unique ground state. To do this, we simply modify the critical Lifshitz-transition Hamiltonian in Eq. 10 by shifting the chemical potential by \(O(1/L^{2})\) to create an ever-shrinking Fermi surface, as depicted in Fig. 1(c). Such a situation is given by the Hamiltonian:
\[H =-\frac{1}{2}\sum_{i}\left[c_{i}^{\dagger}c_{i+1}+c_{i+1}^{ \dagger}c_{i}\right]+\cos\left(\frac{\pi}{L}\right)\sum_{i}c_{i}^{\dagger}c_{i}\] \[=\sum_{k}\left(\cos\left(\frac{\pi}{L}\right)-\cos k\right)\,c_{ k}^{\dagger}c_{k} \tag{11}\]
where \(|0\rangle\) is an \(O(1/L^{2})\) excitation above \(|W_{1}\rangle\). Here the chemical potential parameter explicitly depends on \(L\). This sort of dependence is generally unphysical, as it defies the condensed-matter notion of locality: knowledge of the system size can only be obtained from non-local operators spanning \(O(L)\) sites.
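For concreteness, the same single-particle diagonalization applied to Eq. (11) confirms the construction: only the \(k=0\) mode drops below zero energy, so \(|W_{1}\rangle\) is the unique ground state, with \(|0\rangle\) sitting \(O(1/L^{2})\) above it (a small illustrative script, with arbitrary system sizes):

```python
import numpy as np

# Single-particle dispersion of Eq. (11): cos(pi/L) - cos(k).
for L in (16, 32, 64, 128):
    k = 2 * np.pi * np.arange(L) / L
    eps = np.cos(np.pi / L) - np.cos(k)
    assert (eps < 0).sum() == 1          # exactly one mode (k = 0) is pushed below zero
    e_W = eps.min()                      # energy of |W_1>, obtained by occupying that mode
    print(L, round(-e_W * L**2, 3))      # (E_|0> - E_|W_1>) * L^2 approaches pi^2 / 2
```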
By further breaking the bounded-operator-norm condition on individual terms, one may create an even 'sicker' _gapped_ Hamiltonian with \(|W_{1}\rangle\) as the unique ground state, by explicitly multiplying the terms in Eq. 11 by \(L^{2}\) such that the \(O(1/L^{2})\) excitation becomes an \(O(1)\) excitation.
### Locality-breaking
Naturally if we break locality by allowing terms that couple to sites that are \(O(L)\) apart, then we may find Hamiltonians for which \(|W_{1}\rangle\) is the unique, and in fact gapped, ground state. Here we present two notable examples of such a phenomenon.
The first example of this principle is given by a modification of the critical Lifshitz model in Eq. 10 with a total charge projector. This procedure results in the non-local gapless Hamiltonian
\[H=\lambda\prod_{i}\left(1-2n_{i}\right)-\frac{1}{2}\sum_{i} \left[c_{i}^{\dagger}c_{i+1}+c_{i+1}^{\dagger}c_{i}\right]+\sum_{i}c_{i}^{ \dagger}c_{i}, \tag{12}\]
where \(\lambda>0\); the projector lowers the energy of states in the odd charge sector (such as \(|W_{1}\rangle\)) by \(2\lambda\) relative to the even charge sector (such as \(|0\rangle\)). This lifts the degeneracy of the ground state while maintaining the gapless spectrum, at the cost of the non-local charge projector.
The second example is an all-to-all 2-body Hamiltonian with unbounded norm, as presented in Ref. [2],
\[H=\left(1-\sum_{i}\frac{1}{2}(1-Z_{i})\right)^{2}-J^{2}\quad, \tag{13}\]
where \(J^{2}=J_{x}^{2}+J_{y}^{2}+J_{z}^{2}\) with \(J_{x}=\frac{1}{2}\sum_{i}X_{i}\) (and similarly for \(J_{y},J_{z}\)), for which the Dicke states [17] \(|L,m\rangle\) are simultaneous eigenstates of \(J^{2}\) and \(J_{z}\) with eigenvalues \(L/2(L/2+1)\) and \(L/2-m\), respectively. Here \(|W_{1}\rangle=|L,1\rangle\) is the gapped lowest-energy ground state of Eq. 13, since the first term favours states with total charge one, and the second term lifts the degeneracy between \(|W_{1}\rangle\) and its momentum-boosted states. The unbounded norm of the operators makes it difficult to define a gap in the usual condensed-matter sense, and the model is similarly pathological to the case in Eq. 11 when the terms are multiplied by \(L^{2}\).

Figure 1: A simple Lifshitz transition is depicted. In (a) we have a fully empty band since the Fermi energy \(\epsilon_{F}\) is below the band; this corresponds to the insulating phase. As we increase the chemical potential \(\mu\) we arrive at a critical Lifshitz transition, as depicted in (b). Here the ground state manifold contains both the \(|0\rangle\) and \(|W\rangle\) states, a hallmark of a Lifshitz transition. In (c) we increase the chemical potential further, such that the Fermi energy lies within \(O(1/L^{2})\) above the \(|W\rangle\) state, so that the \(|W\rangle\) state becomes the ground state. However, to maintain this state as the sole ground state one has to continuously tune the chemical potential with increasing system size; otherwise one creates a finite-density Fermi surface corresponding to a metal.
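Returning to Eq. (13): a small exact-diagonalization check for \(L=6\), using the collective spin operators \(J_{a}=\frac{1}{2}\sum_{i}\sigma_{i}^{a}\) as above, confirms that the ground state is unique, gapped, and has unit overlap with \(|W_{1}\rangle\) (an illustrative sketch, not code from the original reference):

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)

def site_op(P, site, L):
    """Single-site operator P acting on `site` of an L-qubit chain."""
    return reduce(np.kron, [P if i == site else I2 for i in range(L)])

L = 6
Jx, Jy, Jz = (sum(site_op(P, i, L) for i in range(L)) / 2 for P in (X, Y, Z))
N = sum((np.eye(2**L) - site_op(Z, i, L)) / 2 for i in range(L))   # number of flipped spins
H = (np.eye(2**L) - N) @ (np.eye(2**L) - N) - (Jx @ Jx + Jy @ Jy + Jz @ Jz)

evals, evecs = np.linalg.eigh(H)
w1 = np.zeros(2**L); w1[[1 << i for i in range(L)]] = 1 / np.sqrt(L)   # the W state
print(np.round(evals[:3], 6))            # a unique ground state with a finite gap above it
print(round(abs(w1 @ evecs[:, 0]), 6))   # overlap of the ground state with |W_1>: 1.0
```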
Recently, a more mild notion of non-local Hamiltonian has received attention, due to its connection with low-density parity-check (LDPC) codes [18; 19; 20]. We can define a low-density Hamiltonian to be one whose terms each involve a bounded number of sites, and for which each site participates in a bounded number of terms. The latter condition rules out the Hamiltonian (13). We could also soften this condition by requiring only that the sum of the norms is bounded in each case. We do not know at this time whether the \(W\) state is a unique ground state of such a Hamiltonian (or whether the \(W\) state is an LDPC code).
Low-density Hamiltonians can prepare more kinds of states than local Hamiltonian systems. For example, our method of argument also shows that \(\frac{1}{\sqrt{2}}(X_{1}+X_{L/2})|0\rangle\) is not the unique ground state of any local Hamiltonian system. However, it is the unique (even gapped!) ground state of the low-density Hamiltonian \(H=2Z_{1}Z_{L/2}-\sum_{i}Z_{i}\).
## V Discussion
The relationship between \(|0\rangle\) and the \(W\) state can be generalized. Given a state \(|\psi\rangle\), we can consider \(|\psi^{\prime}\rangle=\sum_{i}\mathcal{O}_{i}|\psi\rangle\). We propose to call these _whānau-states_¹. In higher dimensions, we can also consider applying operators on subspaces, such as \(\frac{1}{\sqrt{L_{x}}}\sum_{i=1}^{L_{x}}\prod_{j=1}^{L_{y}}X_{i,j}|0\rangle\) in a 2d square lattice. This can be extended to generate a whole "family tree" of \(|\psi\rangle\), and we expect analogous "ground-stateability" results along this whole tree, such that a state can only be a ground state if its ancestors are as well. We sketch an argument for this in the supplemental material.
Footnote 1: Whānau, pronounced [faːnau], is a Māori word for family.
An interesting property concerning \(|0\rangle\) and \(|W\rangle\) is that for all regions \(R\), the reduced density matrices \(\rho_{R}^{0}\) and \(\rho_{R}^{W}\) converge in the \(L\to\infty\) limit.² In particular, the expectation value of any fixed operator in \(|0\rangle\) and \(|W\rangle\) will converge as \(L\to\infty\).
Footnote 2: This convergence is not uniform in \(R\): if we set \(|R|=L/2\), the entanglement entropy of \(|W\rangle\) is \(\log 2\).
Although from this point of view the \(W\) state is indistinguishable from a product state in the thermodynamic limit, it is still long-range entangled (this can be proven by considering the boosted \(W\) states, which have nonzero momentum, and then applying the results of [21]). This presents a challenge for the formal understanding of states in the thermodynamic limit as functionals on the algebra of local observables [22; 23], since these states are identified while being physically distinct. What is needed, it seems, is a theory of the thermodynamic limit which can also keep track of finite-size corrections.
The authors especially thank Omar Abdelghani, Xie Chen, Nick G. Jones, Ruben Verresen, and Chong Wang for very useful insight and suggestions. LG is grateful to Leonardo A. Lessa, Sanjay Moudgalya, and Sergey Syzranov for related discussions. We are grateful to the Perimeter Institute for Theoretical Physics and Institut des Hautes Etudes Scientifiques for hosting us during part of this work. LG also thanks the graduate fellowship program at the Kavli Institute for Theoretical Physics.
|
2310.06174 | Cost-Efficient Prompt Engineering for Unsupervised Entity Resolution | Entity Resolution (ER) is the problem of semi-automatically determining when
two entities refer to the same underlying entity, with applications ranging
from healthcare to e-commerce. Traditional ER solutions required considerable
manual expertise, including domain-specific feature engineering, as well as
identification and curation of training data. Recently released large language
models (LLMs) provide an opportunity to make ER more seamless and
domain-independent. However, it is also well known that LLMs can pose risks,
and that the quality of their outputs can depend on how prompts are engineered.
Unfortunately, a systematic experimental study on the effects of different
prompting methods for addressing unsupervised ER, using LLMs like ChatGPT, has
been lacking thus far. This paper aims to address this gap by conducting such a
study. We consider some relatively simple and cost-efficient ER prompt
engineering methods and apply them to ER on two real-world datasets widely used
in the community. We use an extensive set of experimental results to show that
an LLM like GPT3.5 is viable for high-performing unsupervised ER, and
interestingly, that more complicated and detailed (and hence, expensive)
prompting methods do not necessarily outperform simpler approaches. We provide
brief discussions on qualitative and error analysis, including a study of the
inter-consistency of different prompting methods to determine whether they
yield stable outputs. Finally, we consider some limitations of LLMs when
applied to ER. | Navapat Nananukul, Khanin Sisaengsuwanchai, Mayank Kejriwal | 2023-10-09T21:57:07Z | http://arxiv.org/abs/2310.06174v2 | # How does prompt engineering affect ChatGPT performance on unsupervised entity resolution?
###### Abstract.
Entity Resolution (ER) is the problem of semi-automatically determining when two entities refer to the same underlying entity, with applications ranging from healthcare to e-commerce. Traditional ER solutions required considerable manual expertise, including feature engineering, as well as identification and curation of training data. In many instances, such techniques are highly dependent on the domain. With recent advent in large language models (LLMs), there is an opportunity to make ER much more seamless and domain-independent. However, it is also well known that LLMs can pose risks, and that the quality of their outputs can depend on so-called prompt engineering. Unfortunately, a systematic experimental study on the effects of different prompting methods for addressing ER, using LLMs like ChatGPT, has been lacking thus far. This paper aims to address this gap by conducting such a study. Although preliminary in nature, our results show that prompting can significantly affect the quality of ER, although it affects some metrics more than others, and can also be dataset dependent.
large language models, prompt engineering, experimental study, entity resolution, generative models
We evaluate several alternative prompting methods against a default prompt, and use two e-commerce benchmark datasets that have been extensively used in the literature. At least one of these (based on matching products across Google and Amazon) has been found to be challenging even for state-of-the-art ER systems.
## 2. Prompting Methods
In this study, we engineered one default prompt and five alternative prompts to assess GPT-3.5's capabilities in ER. Each alternative was derived by modifying a single component of the default prompt. The default prompt (which we denote as _Binary Reasoning with Concatenated Attributes_) integrates multiple attributes into its input. The response structure is such that GPT-3.5 delivers a binary answer, "yes" or "no", accompanied by a rationale. Additionally, a confidence level, ranging from 0 to 1, is requested for deeper insight. We ensured that GPT-3.5 processes each product pair individually, guaranteeing undivided attention to each pair. To further enhance the quality of responses, we consistently applied a persona, refining interpretability and accuracy (Figure 1). Below, we briefly describe the five prompting methods (other than the default) that were evaluated in this study; a sketch of how such prompts can be assembled programmatically follows the list:
1. **Single attribute (Title or Product name):** The intuition is that providing only the simplest information, such as the product name, may already lead to high matching performance. This alternative replaces the long concatenation of multiple attributes with a single attribute, in this case each dataset's product name/title.
2. **Representing attributes using JSON format:** We observed that GPT-3.5 can process structured data efficiently and can distinguish attribute names (keys) from their values in JSON-formatted data. Therefore, we encode each pair in JSON format before sending it to GPT-3.5 for entity matching.
3. **Few-shot prompting:** This approach presents the model with three correctly labeled examples before each pair. We then request binary (yes or no) responses from GPT-3.5, as in the default setup.
4. **Using similarity score instead of Y/N response:** Rather than requesting a simple yes or no response, we evaluate GPT-3.5's performance through the similarity scores it provides. We then determine the threshold that maximizes the model's F1 score.
5. **No persona:** A persona provides context for ChatGPT to understand the user's input better. By 'thinking' from the perspective of a particular persona, the model can interpret and respond to user queries more accurately. To validate these assertions in the context of ER, we exclude the persona and assess GPT-3.5's performance against the default configuration.
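To make the default configuration and two of its variants concrete, the sketch below assembles the prompt text for a candidate product pair. The persona wording, the attribute names, and the helper names (`default_prompt`, `json_prompt`, `similarity_prompt`) are illustrative assumptions on our part; the paper's exact prompt structure is summarized in Figure 1, and the API call is shown only as a comment.

```python
import json

# Hypothetical record pair; attribute names are illustrative, not a benchmark's exact schema.
left = {"title": "Dell UltraSharp U2723QE 27in Monitor", "price": "579.99"}
right = {"title": "Dell U2723QE 27-inch 4K UltraSharp Monitor", "price": "599.00"}

PERSONA = "You are an expert in e-commerce product matching."   # assumed persona wording

def default_prompt(a, b):
    """Binary Reasoning with Concatenated Attributes: yes/no answer, rationale, confidence."""
    concat = lambda r: "; ".join(f"{k}: {v}" for k, v in r.items())
    return (f"{PERSONA}\nProduct A: {concat(a)}\nProduct B: {concat(b)}\n"
            "Do these two records refer to the same product? Answer 'yes' or 'no', "
            "give a brief rationale, and a confidence between 0 and 1.")

def json_prompt(a, b):
    """Variant 2: the pair is embedded as JSON instead of concatenated text."""
    return (f"{PERSONA}\nProduct A: {json.dumps(a)}\nProduct B: {json.dumps(b)}\n"
            "Do these two records refer to the same product? Answer 'yes' or 'no'.")

def similarity_prompt(a, b):
    """Variant 4: ask for a similarity score in [0, 1] instead of a yes/no verdict."""
    return (f"{PERSONA}\nProduct A: {a['title']}\nProduct B: {b['title']}\n"
            "On a scale from 0 to 1, how likely is it that these refer to the same "
            "product? Reply with a single number.")

print(default_prompt(left, right))
# The prompt would then be sent to the model, e.g. via the OpenAI chat-completion API:
# openai.ChatCompletion.create(model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": default_prompt(left, right)}])
```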
## 3. Experimental Study
We present experiments on two established ER benchmarks that are publicly available, and have been previously used extensively in the community for evaluating ER:
1. **WDC Dataset** contains 26 million products and descriptions from e-commerce websites. The products are categorized into computers, cameras, shoes, and watches. We selected the computers category as our primary dataset in this experiment. This smaller curated dataset has 7 attributes and comprises 1,100 pairs, of which 300 are duplicates and the rest are non-duplicates.
2. **Amazon-Google Product** contains product data from Amazon and Google, mainly technology products such as software and computer hardware from both sites. This dataset is larger and more challenging, consisting of 11,460 pairs, of which 1,166 are positives. There are only three attributes, one of which is a detailed 'description' attribute.
### Results
Table 1 shows the result of each prompt method we designed across the two datasets. Overall, the engineered prompts perform well on ER across both datasets. Table 2 further illustrates the cost. Considering the _Single Attribute_ prompting method, we find that there is no substantial difference (compared to the default) across the metrics; nevertheless, the cost when using a single attribute is considerably reduced (by about 37%). However, the method presumes that we know a priori which attribute will contain the information 'density' to enable the model to make a good matching decision using only that attribute.
On the other hand, the use of JSON format increases the cost of GPT-3.5 because it increases the output token size. Furthermore, the F1-score experiences a slight decline of 0.1 compared to the default method when GPT-3.5 was tasked with entity matching over JSON-formatted data. This indicates that a structured format like JSON does not particularly enhance performance when prompting GPT-3.5 for ER.
Few-shot prompting was somewhat more effective than representing attributes using JSON format, but it had the highest cost, as it required giving the model examples of matching duplicates. A similar outcome can be achieved by the default or by the single attribute method, but at significantly lower cost.
When using the similarity score instead of a binary response, the results show that similarity scores can introduce uncertainty into GPT-3.5's outputs. Even when selecting the optimal threshold for the F1 score, the performance on the WDC dataset dropped from 0.91 to 0.71. However, the F1-score increased from 0.87 with the original method to 0.95 on the Amazon-Google dataset. Therefore, this method shows volatility but may be valuable in some contexts. Its cost was similar to that of the default method (albeit the slight cost difference could magnify if the experiments are conducted at large scale).
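The threshold selection for the similarity-score variant can be done by a simple sweep on labeled pairs, keeping the threshold with the best F1; a minimal sketch with made-up scores and labels is shown below (the actual scores would come from GPT-3.5's responses).

```python
import numpy as np

# Hypothetical similarity scores returned by the model and gold labels (1 = duplicate).
scores = np.array([0.92, 0.15, 0.78, 0.40, 0.88, 0.05, 0.63])
labels = np.array([1, 0, 1, 0, 1, 0, 0])

def f1_at(threshold):
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

thresholds = np.linspace(0.0, 1.0, 101)
best = max(thresholds, key=f1_at)
print(f"best threshold {best:.2f}, F1 {f1_at(best):.2f}")
```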
Finally, the _no persona_ prompting method has similar performance (in terms of F1-score) to the default method, but lower recall and higher precision. Its cost is lower, but not as low as the single-attribute method. Considering its lower cost (by more than 20%), it may be a useful cost-saving mechanism compared to the default (assuming the single-attribute method cannot be used) on larger-scale ER problems.
## 4. Conclusion and Future Work
Entity Resolution continues to be an important problem in the database, Semantic Web and knowledge capture communities. It has
many applications, and due to its highly domain-specific nature, is unlikely to be completely solved by the advent of LLMs. Nevertheless, LLMs do offer an excellent alternative to ER methods that require training and feature engineering. Even deep learning methods had not completely eliminated this problem. Given the many issues that have been noted for LLMs, including sensitivity to prompting, there was an open question of how sensitive ER is to prompting, and what the baseline quality of ER is when applied to models like GPT-3.5, on which the base ChatGPT model is based.
Our results show that, while the prompting method does matter, the results are generally remarkably stable. The real difference is in the cost. Although the dollar amounts indicated in our results are small, it bears noting that the benchmarks are not the size typically encountered in industrial-level ER, where there can be many millions of records that may need to be deduplicated and merged. Even
\begin{table}
\begin{tabular}{l c c c|c c c} & \multicolumn{3}{c}{WDC Dataset} & \multicolumn{3}{c}{A-Google} \\ Prompt Engineering Method/Technique & Precision & Recall & F1-score & Precision & Recall & F1-score \\ \hline Binary Reasoning with Concatenated Attributes & 0.92 & 0.90 & 0.91 & 0.97 & 0.79 & 0.87 \\ Single attribute: Title or Product Name & 0.91 & 0.94 & 0.93 & 0.96 & 0.70 & 0.81 \\ Representing attributes using JSON format & 0.96 & 0.70 & 0.81 & 0.98 & 0.53 & 0.69 \\ Few-shot prompting & 0.94 & 0.87 & 0.90 & 0.97 & 0.79 & 0.87 \\ Using similarity score instead of Yes or No response & 0.97 & 0.85 & 0.91 & 0.93 & 0.97 & 0.95 \\ No Persona & 0.97 & 0.85 & 0.91 & 0.97 & 0.56 & 0.71 \\ \hline \end{tabular}
\end{table}
Table 1. Evaluation results for different prompt engineering methods on both benchmarks.
Figure 1. Structure of an ER prompt presented to the LLM for a ‘candidate’ pair that needs to be classified as duplicate or non-duplicate.
\begin{table}
\begin{tabular}{l|c|c}
**Prompt Engineering Method / Technique** & **WDC Cost** & **Amazon-Google Cost** \\ \hline Binary Reasoning with Concatenated Attributes & \$0.93 & \$3.04 \\ Single attribute: Title or Product Name & \$0.59 & \$2.19 \\ Representing attributes using JSON format & \$0.99 & \$3.23 \\ Few-shot prompting & \$1.36 & \$3.75 \\ Using similarity score instead of Yes or No response & \$0.95 & \$3.11 \\ No Persona & \$0.68 & \$2.01 \\ \end{tabular}
\end{table}
Table 2. Cost Summary for different prompt engineering methods.
differences of 5-10% in cost can be significant in such situations (as can similar differences in performances). Therefore, the seemingly small differences noted in our results suggest that prompting needs to be taken carefully into account, perhaps through pilot studies, before any large-scale ER process is undertaken using LLMs.
In future work, we plan to expand the prompting methods considered in this paper, as well as consider 'cross-tabulations' of where one prompting method is retrieving a duplicate correctly, but another is not. This may allow us to better understand the relative merits of when to use one prompting method over another. We will also consider more diverse datasets beyond e-commerce. This is important because it would allow us to understand whether the performance we observed in this paper is domain-independent or highly dependent on either domain or dataset. While the results suggest some dependency, a larger, more cross-domain study would help to yield insights that the preliminary study here cannot.
Another agenda that we hope to pursue in future research is to compare how these prompting methods can vary in performance across different LLMs, including not just different foundation models within ChatGPT itself (e.g., GPT-3.5 versus GPT-4), but also LLMs that are publicly available (such as Bloom) and released by other companies (such as Google's Bard and Meta's Llama).
|
2303.11530 | Active Coarse-to-Fine Segmentation of Moveable Parts from Real Images | We introduce the first active learning (AL) model for high-accuracy instance
segmentation of moveable parts from RGB images of real indoor scenes.
Specifically, our goal is to obtain fully validated segmentation results by
humans while minimizing manual effort. To this end, we employ a transformer
that utilizes a masked-attention mechanism to supervise the active
segmentation. To enhance the network tailored to moveable parts, we introduce a
coarse-to-fine AL approach which first uses an object-aware masked attention
and then a pose-aware one, leveraging the hierarchical nature of the problem
and a correlation between moveable parts and object poses and interaction
directions. When applying our AL model to 2,000 real images, we obtain fully
validated moveable part segmentations with semantic labels, by only needing to
manually annotate 11.45% of the images. This translates to significant (60%)
time saving over manual effort required by the best non-AL model to attain the
same segmentation accuracy. At last, we contribute a dataset of 2,550 real
images with annotated moveable parts, demonstrating its superior quality and
diversity over the best alternatives. | Ruiqi Wang, Akshay Gadi Patil, Fenggen Yu, Hao Zhang | 2023-03-21T01:30:20Z | http://arxiv.org/abs/2303.11530v3 | # Coarse-to-Fine Active Segmentation of Interactable Parts in Real Scene Images
###### Abstract
We introduce the first _active learning_ (AL) framework for high-accuracy instance segmentation of _dynamic, interactable_ parts from RGB images of _real indoor scenes_. As with most human-in-the-loop approaches, the key criterion for success in AL is to minimize human effort while still attaining high performance. To this end, we employ a transformer-based segmentation network that utilizes a masked-attention mechanism. To enhance the network and tailor it to our task, we introduce a _coarse-to-fine_ model which first uses _object-aware_ masked attention and then a _pose-aware_ one, leveraging a correlation between interactable parts and object poses and leading to improved handling of multiple articulated objects in an image. Our coarse-to-fine active segmentation module learns both 2D instance and 3D pose information using the transformer, which supervises the active segmentation and effectively reduces human effort. Our method achieves close to fully accurate (96% and higher) segmentation results on real images, with 77% time saving over manual effort, where the training data consists of only 16.6% annotated real photographs. Finally, we contribute a dataset of 2,550 real photographs with annotated interactable parts, demonstrating its superior quality and diversity over the current best alternative.
## 1 Introduction
Most objects we interact with in our daily lives have dynamic movable parts, where the part movements reflect how the objects function. Perceptually, acquiring a visual and actionable understanding of object functionality is a fundamental task. In recent years, motion perception and functional understanding of articulated objects have received increasing attention in computer vision, robotics, and VR/AR applications. Aside from per-pixel or per-point motion prediction, the _segmentation_ of dynamic, _interactable_ parts serves as the basis for many downstream tasks, including robot manipulation, action planning, and part-based 3D reconstruction.
In this paper, we tackle the problem of _instance_ segmentation of interactable parts from RGB images of _real indoor scenes_. Most prior works on such segmentations [37, 17, 12] operate on point clouds, which are more expensive to capture than images while having lower resolution and suffering from noise and outliers. To our knowledge, OPD [15], for "openable part detection", represents the state-of-the-art in interactable part segmentation from images. However, their method was trained and evaluated only on _single_ objects, not scenes, and there remains a large gap between synthetic and real test performances. The best reported accuracy on dynamic part segmentation from real images, by OPD, is only 45%.
Typical approaches to close the synthetic-to-real gap rely on domain adaptation using annotated real images, but the manual annotation process is highly tedious for instance segmentation. To this end, OPD[15] opted to manually annotate _mesh_ models of real articulated 3D objects and then render them from many views to obtain OPDReal, a dataset of about 20K annotated images. However, there is an inevitable gap between projected images of _digitally reconstructed_ 3D meshes and real photographs, with both reconstruction errors and re-projection errors further hindering image quality.
To address the above challenges, we present an _active learning_ (AL) [3, 23, 38] approach to obtain high-accuracy instance segmentation of interactable parts from real scene images. AL is a semi-supervised learning paradigm, relying on human feedback to continually improve the performance of a neural segmentation model. As with most human-in-the-loop approaches, the key criterion for success in AL is to minimize human effort. To this end, we employ a transformer-based [9] segmentation network that utilizes a masked-attention mechanism [7]. To enhance the network for interactable part segmentation, we introduce a _coarse-to-fine_ model which first uses an _object-aware_ masked attention and then a _pose-aware_ one, leveraging a correlation between dynamic parts and object poses and leading to improved handling of multiple articulated objects in an image.
Our coarse-to-fine segmentation method learns both 2D instance and 3D pose information using the transformer network, which supervises the active segmentation and effectively reduces human effort. Unlike prior works on active segmentation [35, 27], which mainly focused on the efficiency of human annotation, our network learns the regions of interest (ROIs) from the pose-aware masked-attention decoder for better segmentation sampling in AL iterations.
In summary, our main contributions include:
* We introduce the first active learning framework for instance segmentation of dynamic/interactable parts from RGB images of real indoor scenes. Our method achieves close to fully accurate (96% and higher) segmentation results on real images, with 77% time saving over manual effort, with the training data consisting of only 16.6% annotated real photographs.
* We present a coarse-to-fine, object- and pose-aware masked-attention mechanism for active segmentation, leading to reduced human effort in AL and improved interactable part segmentation over state-of-the-art methods, including OPD [15] and Mask2Former [7].
* Our AL method has allowed us to annotate a dataset of 2,550 real photographs of articulated objects in indoor scenes. We show the superior quality and diversity of our new dataset over OPDReal, and the resulting improvements in segmentation accuracy.
## 2 Related Works
Figure 2: Overview of our coarse-to-fine active segmentation method for interactable parts in real scene images. Our active learning setup (shown on the left; see Section 3.4) makes use of a synthetically trained, transformer-based, coarse-to-fine 2D segmentation model (shown on the right; see Section 3.3) to obtain segmentation masks on unseen real images through iterative refinement via human-in-the-loop feedback.
**Articulated objects dataset.** The last few years have seen the development of articulation datasets on 3D shapes. Of the many, ICON [11] builds a dataset (unreleased) of 368 moving joints corresponding to various parts of 3D shapes from the ShapeNet dataset [6]. The Shape2Motion dataset [31] provides kinematic motions for 2,240 3D objects across 45 categories sourced from ShapeNet and 3D Warehouse [1]. The PartNet-Mobility dataset [34] consists of 2,374 3D objects across 47 categories from the PartNet dataset [20], providing motion annotations and part segmentation in 3D.
All these datasets are obtained via manual annotations and are _synthetic_ in nature. Since sufficient training data is made available by these synthetic datasets, models trained on them can be used for fine-tuning on _real-world_ 3D articulated object datasets with limited annotations.
A recent work, called OPD [15], provides a 2D image dataset of real-world articulated objects, OPDReal, obtained from RGB-D scans of indoor environments. The images in OPDReal come with 2D segmentation labels on all _openable_ parts along with their motion parameters. However, due to the nature of annotation process, the 2D part segmentation masks obtained via 3D-to-2D projection do not completely cover all interactable parts in the image. Also, in OPDReal, objects are scanned from within a limited distance range. Practical scenarios and use cases are likely going to have large camera pose and distance variations.
To overcome these limitations, we contribute a 2D image dataset of articulated objects present in the real world (furniture stores, offices, homes), captured using iPhone12 Pro and 14. We then use our active-learning framework (see Figure 2 and Section 3) to learn a generalized 2D segmentation for interactable object parts.
**Part segmentation in images.** Early approaches [30, 29, 33] to 2D semantic part segmentation developed probabilistic models on human and animal images. While not addressing the 2D semantic part segmentation problem as such, [13, 19, 4, 16, 21] tackled the problem of estimating 3D articulations from human images, which requires an understanding of articulated regions in the input image.
Recently, with the availability of 3D part datasets [20, 34], there have been works that estimate 3D articulations from articulation images [2, 17, 39]. However, they learn a latent space of 3D articulation parameters and do not provide 2D segmentation masks for interactable parts. Our work aims at segmenting _interactable_ parts of a 3D object from image input. To our knowledge, OPD [15] is the only work that can segment such object parts given an input image, and is built on the Mask RCNN architecture [10]. In our work, we employ the Mask2Former [7] architecture with task-specific modifications as described in Section 3.
**Active learning for image segmentation.** Active learning (AL) is a well-known technique for improving model performance with limited labeled data. Prior works [24, 26, 5, 36, 25] have demonstrated different ways of using the most informative data to acquire labels with minimum cost for the 2D segmentation task. There exist AL algorithms for 2D segmentation [22, 32] that are specifically designed to reduce the domain gap by aligning two data distributions. We cannot borrow such methods to reduce the domain gap between synthetic and real scene images of interactable objects because of large feature differences (our synthetic images contain no background, unlike real scene images).
More recently, [27, 35] employed AL to refine initial 2D segmentation masks through key point or region selection, requiring little human guidance. Because an interactable object can contain multiple interactable parts, such point/region selection is ambiguous in our setting. As such, we design an AL framework that reduces manual effort by focusing on: (a) using an improved part segmentation model (see Section 3.3), and (b) simpler rules for modifying the test set for iterative model refinement (see Section 3.4).
## 3 Method
### Terminology
For exposition clarity, we define some terms that are frequently used across much of this work.
**Interactable objects** - 3D shapes that contain moveable parts, such as a cabinet drawer, either in the rest state or in the articulated state, are said to be interactable objects.

**Interactable parts** - Moveable parts in interactable objects are called interactable parts.

**Dynamic parts** - Dynamic parts and interactable parts are used interchangeably throughout.

**Articulated objects** - Interactable objects with articulations on their dynamic parts are termed articulated objects.
### Problem statement
Let \(D\) be a real-world image dataset of interactable objects. Given an RGB image \(I\in D\) containing one or more interactable objects \(\{o_{j}\}\) as input, our goal is to output a set of 2D segmentation masks, \(\{m_{i}\}\), corresponding to all interactable parts for each object \(o\in\{o_{j}\}\) present in \(I\), where each mask \(m_{i}\) is represented by a 2D polygon.
We propose an active learning setup (for segmentation on unlabeled datasets) that uses a transformer model for continual mask refinement on unseen real images via a human-in-the-loop framework. Figure 2 provides an overview of our approach. It consists of two parts: (a) a pose-aware masked-attention network for 2D segmentation of interactable parts of \(\{o_{j}\}\) in \(I\), and (b) learning to generalize such segmentations using an active learning framework.
### Pose-aware masked-attention network
Fig 2 (right) shows a detailed structure of our segmentation network, whose working can be broken down into five
major steps as explained below.
**Detector backbone.** First, the input image \(I\) is passed through a _pre-trained_ 2D object detection network, Mask RCNN [10], to obtain a 2D object bounding box \(bbox^{o}\) and feature maps \(f\) for subsequent processing.
**DD Decoder.** We use the _pretrained_ decoder from the _Deformable DETR_ transformer module proposed by Zhu et al. [40]. Inspired by [14], we replace the learned object query embedding with the normalized centre coordinates \((c_{x},c_{y})\), width and height \((w,h)\) of the detected 2D bounding box, so that the decoder can generate new object query embeddings which contain both local and global information extracted from the image, and the 2D bounding box can be used for 6DoF pose estimation.
**Task-specific MLPs.** Object queries from the decoder are passed into three separate MLP heads, trained from scratch, for (a) object class prediction, (b) 6DoF object pose estimation and (c) binary object mask prediction. The class prediction head uses the cross-entropy loss, and the mask prediction head uses a pixel-wise cross-entropy loss.
The pose estimation head, predicting the 6DoF object pose, outputs a rotation matrix \(\mathbf{R}\) and a translation vector \(\mathbf{t}\), against which the loss function is formulated. Specifically, for \(\mathbf{t}\), we use an L2 loss: \(L_{t}=\|\mathbf{t}-\tilde{\mathbf{t}}\|_{2}\), and for \(\mathbf{R}\), we use a geodesic loss as defined in [18]: \(L_{rot}=\arccos\left(\frac{1}{2}\left(Tr\left(\mathbf{R}\tilde{\mathbf{R}}^{T}\right)-1\right)\right)\), where \(\tilde{\mathbf{t}}\), \(\tilde{\mathbf{R}}\) are predictions. The loss for the pose estimation head is \(L=\lambda_{t}L_{t}+\lambda_{rot}L_{rot}\), where \(\lambda_{t}\) and \(\lambda_{rot}\) are weighting parameters, set to 2 and 1, respectively.
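A sketch of how this combined pose loss could look in PyTorch is given below; the function name, batching convention, and the numerical clamp are our own assumptions rather than the authors' released code.

```python
import torch

def pose_loss(R_pred, t_pred, R_gt, t_gt, lam_t=2.0, lam_rot=1.0, eps=1e-7):
    """6DoF pose loss: L2 on translation plus geodesic distance on rotation.
    R_*: (B, 3, 3) rotation matrices, t_*: (B, 3) translation vectors."""
    loss_t = torch.norm(t_pred - t_gt, dim=-1).mean()
    # geodesic loss: arccos((trace(R_gt R_pred^T) - 1) / 2)
    rel = torch.matmul(R_gt, R_pred.transpose(-1, -2))
    trace = rel.diagonal(dim1=-2, dim2=-1).sum(-1)
    cos = ((trace - 1.0) / 2.0).clamp(-1.0 + eps, 1.0 - eps)   # clamp for numerical safety
    loss_rot = torch.acos(cos).mean()
    return lam_t * loss_t + lam_rot * loss_rot
```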
Using \(bbox^{o}\) and the estimated 6DoF object pose, we can obtain the corresponding 3D _oriented_ bounding box \(OBB^{o}\), which tightly fits \(bbox^{o}\). From among the eight vertices of \(OBB^{o}\), we select the vertices of the face with positive \(x\) coordinates as the representative 2D box for the object front, and use it to crop the input image. This cropped image may contain pixels that do not actually belong to the object of interest. We filter out such pixels by multiplying the crop with the 2D binary object mask, resulting in a refined binary object mask, \(m^{o}_{rfnd}\).
**Pixel decoder.** We borrow the _pretrained_ pixel decoder from MaskFormer [8], which takes \(f\) as input and upsamples the features to generate embeddings \(f_{pd}\).
**Masked-attention decoder.** To finally output segmentation masks corresponding to the interactable parts in \(I\), we make use of masked-attention decoders from Mask2Former [7]. The structure of each layer \(L_{i}\) is shown right below \(L_{1}\) in Figure 2. \(L_{1}\) takes as input \(f_{pd}\) and the refined mask \(m^{o}_{rfnd}\), and outputs a binary mask which is fed to the next layer. The binary mask at the output of \(L_{3}\) is multiplied with \(f_{pd}\), resulting in part segmentation in the RGB space. The loss here is the pixel-wise cross-entropy loss. We call this our _pose-aware masked-attention decoder_.
All these modules are _jointly_ trained in an _end-to-end_ fashion on synthetic image datasets (see Section 4). When fine-tuning on real images where part annotations are available, the weights of all modules except the MLPs are updated, since the ground-truth poses and object masks required to train these MLPs are not collected for real images.
### Active learning for 2D part segmentation
In the active learning setup, we first consider a mini dataset \(E\subset D\) of \(m\) images, and use it to improve our segmentation model with a human-in-the-loop framework. We call \(E\) the enhancement set, as it iteratively helps enhance the segmentation masks at the output of our model. Next, we consider a very small training set \(T_{s}\subset D\) (such that \(T_{s}\cap E=\emptyset\)) of \(r\) images, and fine-tune our model \(M_{s}\) on it. As expected, \(M_{s}\) fine-tuned on \(T_{s}\) does not generalize well to images in \(E\). This is where the active learning framework kicks in. In our implementation, \(m=500\) and \(r=50\).
We input images from \(E\) to the fine-tuned model \(M_{s}\), which outputs segmentation masks for interactable parts in the input image. Three scenarios exist: (1) If the output mask is deemed to be perfect (i.e., covers all interactable parts without any holes), as determined by humans, we move
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline & & \multicolumn{5}{c}{Category} \\ \cline{3-6} & & Storage & Fridge & Dishwasher & Micro.\&Oven & Washer \\ & Objects & 231 & 12 & 3 & 12 & 3 \\ OPDReal[15] & Images & 27,394 & 1,321 & 186 & 823 & 159 \\ & image \% & 91.67\% & 3.93\% & 0.62\% & 2.75\% & 0.53\% \\ \cline{2-6} & Objects & 176 & 51 & 31 & 62 & 13 \\ Ours & Images & 925 & 370 & 315 & 775 & 175 \\ & image \% & 36.27\% & 14.51\% & 12.35\% & 30.39\% & 6.8\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics for OPDReal and our datasets, over six object categories. Both datasets contain images with different object categories with multiple interactable parts. Our dataset is relatively more balanced in terms of sample distribution between different object categories, allowing models to generalize better on unseen images. We combine microwave and oven since objects in these two categories often appear together in real indoor scenes.
such an example from \(E\) to \(T_{s}\). So, \(|E|\) and \(|T_{s}|\) are now decreased and increased by 1, respectively; (2) If the output mask is imperfect (i.e., holes exist in predicted masks and/or not all interactable parts are segmented), we keep that sample as-is in \(E\). And, (3) if the output is deemed to be bad (i.e., no interactable part is segmented), we obtain manual annotation for segmentation masks on all interactable parts from humans in the loop.
Once annotated, the examples are now "perfect" and moved from \(E\) to \(T_{s}\). This process continues iteratively until all the examples in \(E\) are moved to \(T_{s}\), resulting in \(|E|\) being zero; see Table 6. This framework allows \(M_{s}\) to continually see new, well-labeled training data on previously unseen images, helping it to learn better. We show more details of the human verification and annotation process in our supplementary material.
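The overall loop can be summarized by the following sketch. The helper names (`fine_tune`, `predict`, `human_verdict`, `annotate`) are hypothetical stand-ins for the network's training routine, inference, the manual perfect/imperfect/bad verdict, and manual polygon annotation; they are not part of any released codebase.

```python
def active_learning_loop(model, T_s, E, fine_tune, predict, human_verdict, annotate):
    """T_s: small labeled training set, E: unlabeled enhancement set (lists of samples)."""
    model = fine_tune(model, T_s)                  # initial fine-tuning on the small set
    while E:                                       # iterate until E is empty
        still_imperfect = []
        for sample in E:
            masks = predict(model, sample)
            verdict = human_verdict(sample, masks)
            if verdict == "perfect":               # (1) accept the prediction as the label
                sample.label = masks
                T_s.append(sample)
            elif verdict == "imperfect":           # (2) keep the sample in E for a later round
                still_imperfect.append(sample)
            else:                                  # (3) "bad": request manual annotation
                sample.label = annotate(sample)
                T_s.append(sample)
        E = still_imperfect
        model = fine_tune(model, T_s)              # retrain with the enlarged T_s
    return model, T_s
```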
## 4 Datasets and Pre-training
**Datasets.** We use two kinds of real image datasets in our experiments: (1) images from the OPDReal dataset [15], and (2) images from our dataset. Our dataset images are obtained from the real world by taking photographs of interactable objects in indoor scenes from furniture stores, offices, and homes, captured using iPhone12 Pro and iPhone14. On average, each interactable object is photographed from five distinct viewpoints with varying camera poses and distances from the objects. Also, a captured image can contain more than one object with interactable parts. As such, our dataset is quite diverse compared to OPDReal, where objects are scanned from within a limited distance range.
For both datasets, we consider six object categories - Storage, Fridge, Dishwasher, Microwave, Washer, and Oven. The data distribution of 3D objects per category and their 2D images are shown in Table 1. In total, our dataset contains 2,550 images and OPDReal contains \(\sim\)23K images. Unlike OPDReal where part segmentation masks are manually annotated on a 3D mesh and then projected back to the image space, for our dataset, we obtain manual annotations (i.e., the ground truth) for 2D segmentation masks directly on the captured images. Note that such manual annotations are used only to evaluate our active learning framework.
From Table 1, we observe that the majority of data samples in OPDReal belong to the Storage category (91.67%), with the rest distributed among the remaining categories. The difference in distribution between the largest and the second largest category is 87.74%, and between the largest and smallest categories is 91.14%. With such data skewness towards one category, models trained on OPDReal will likely overfit to the dominant category. Our dataset, on the other hand, contains smaller variations in data distributions across the six object categories, where the difference in data distribution between the largest and second largest categories is 5.88%, and between the largest and smallest categories is 29.47%.
**Pre-training.** We begin our experiments by rendering synthetic models from the PartNet-Mobility dataset [34] in various articulation states, since this enables us to obtain sufficient annotations to train 2D segmentation networks and thus enable transfer learning applications. Our synthetic dataset contains around 32K articulation images, with equal data samples for each object category. We use a 90%-10% train-test split to train on this synthetic dataset, and use this trained model for fine-tuning on real images. To this end, we use all data samples from both datasets, OPDReal and ours, with an 80%-20% train-test split.
We implement our network in PyTorch on a Nvidia RTX 2080 Ti GPU. All images are resized to 256\(\times\)256 for training. During pre-training on the PartNet-Mobility dataset, we use the Adam optimizer with a learning rate (lr) of 2.5e-4, and train for 2K epochs. When fine-tuning on real images, we use the same lr and run for 4.5K epochs.
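The optimisation setup above can be summarised in the following PyTorch-style sketch. This is a simplified illustration, not our released training code; it assumes the model returns its combined training loss directly, and the dataloader construction is omitted.

```python
import torch

def finetune(model, loader, epochs=4500, lr=2.5e-4, device="cuda"):
    """Sketch of the training loop described above (assumed interface:
    model(images, targets) returns the combined segmentation/pose/mask loss)."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam, lr = 2.5e-4
    for _ in range(epochs):                 # 2,000 epochs for synthetic pre-training,
        for images, targets in loader:      # 4,500 epochs when fine-tuning on real images
            optimizer.zero_grad()
            loss = model(images.to(device), targets)
            loss.backward()
            optimizer.step()
```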
## 5 Results and Evaluation
We evaluate our pose and mask-aware interactable part segmentation model on real scene images (both ours and OPDReal) by comparing its segmentation results against two competing methods. Through ablation studies, we provide insights into the need for pose-aware and mask-aware components present in our network. Finally, we compare the segmentation results on _our_ real scene images, with and without the active learning framework.
### Competing Methods
OPD [15]. As one of the first (and only) works for detecting interactable object parts in RGB images, we use OPD as one of our comparisons. In our experiments, we select OPDRCNN-C for comparison.
Mask2Former [7]. As an advanced, transformer-based extension of the Mask R-CNN architecture for generalized object detection and segmentation in 2D images, we compare against the Mask2Former architecture by employing it to detect all interactable object parts in input images.
### Model ablations
The performance of our method is driven, in part, by two specially designed prediction branches that individually output the camera pose and the object mask. We perform evaluations by ablating these two modules in our network architecture.
Ours w/o pose prediction branch. Keeping all other modules, we remove the MLP branch in Figure 2 that predicts the 6-DoF camera pose for a given input image.
Ours w/o object mask prediction branch. Here, we only remove the MLP branch in Figure 2 that predicts a binary object mask for the interactable objects (not parts) in the input image.
### Evaluation Metrics
Mean Average Precision (mAP). Following OPD [15], we report mAP@IoU=0.5 scores for both 2D bounding box detection and 2D segmentation tasks. For brevity, in the rest of the section, we use mAP to denote mAP@IoU=0.5.
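As a reference for how the IoU threshold enters this metric, the sketch below computes mask IoU on boolean arrays; a predicted part counts as a true positive under mAP@IoU=0.5 only when this value is at least 0.5. This is a generic illustration, not the exact evaluation code of OPD [15].

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two boolean segmentation masks."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

# A predicted part is a true positive when mask_iou(pred, gt) >= 0.5;
# average precision is then computed per category and averaged to obtain mAP.
```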
Annotation time. In the active learning setup, the key quantities that determine the need for and efficiency of the human-in-the-loop framework are the number of manual annotations, the annotation time (measured in hours), and the segmentation accuracy (mAP@IoU=0.5).
Figures 3 and 4 show visual results of the different models on test images from OPDReal and our dataset, respectively.
### Quantitative results w/o active learning
We start off by comparing the performance of our model with competing methods without any active learning framework. As explained in Section 4, all these models are pre-trained on synthetic renderings and then fine-tuned using images from the two datasets, OPDReal and ours. Our primary interest is in the 2D part segmentation performance. As such, we report the segmentation mAP (@IoU=0.5) on both the OPDReal dataset and our dataset. For completeness, we also report the mAP scores for the 2D bounding boxes corresponding to interactable parts. These results are tabulated in Table 2.
We observe that our model significantly outperforms competing methods on both datasets, demonstrating its efficacy. It is interesting to note the large jump in performance for _all_ the models when testing with models fine-tuned on our dataset, compared to their respective performance when fine-tuned on the OPDReal dataset. This is mainly due to data skewness towards the Storage category in OPDReal (which makes up 91.67% of the total samples), leading to a lack of generalizability on images with other object categories. Since our dataset is relatively balanced across different categories (except for the Washer), we observe that all three models perform much better than their OPDReal counterparts, with our model achieving the best results again. These results validate the richness of our dataset over OPDReal for dynamic part segmentation tasks.
### Ablation studies w/o active learning
We perform ablations on our network architecture to better understand the need for the pose estimation MLP and the object mask prediction MLP at the output of the transformer decoder (shown in the orange-colored box on the right side of Figure 2). The segmentation performance is tabulated in Table 3. Essentially, when both the pose estimation module and the mask predictor module are removed (Row ID 0), our network reduces to the architecture of Mask2Former [7]. Also, the pose estimation module seems to play more of a role than the object mask predictor module in achieving better part segmentation, with a net gain of about 1.5% in mAP score (compare Row ID 2 and Row ID 1 in Table 3).
This is because estimating 6-DoF pose enables us to obtain regions of interactable parts for objects in the image,
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{segm mAP / bbox mAP (\(\uparrow\))} \\ \cline{2-4} & OPD-C[15] & M2F[7] & Ours \\ \cline{2-4} OPDReal & 37.380 / 44.663 & 39.380 / - & **48.168 / 54.393** \\ Ours & 85.533 / 85.994 & 93.118 / - & **96.592 / 96.699** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison against competing 2D part segmentation methods on test set images from OPDReal and our dataset. This is not using active learning.
Figure 3: Qualitative results on OPDreal test set. “Miss” represents the absence of part segmentation with \(\geq\) 75% confidence. Both OPDRCNN-C and mask2former fail to detect some/all dynamic parts for categories with fewer samples (ex. Washer and Dishwasher). Our method outperforms the two – on part edges (first and second row), for noisy GT (third row), and on multiple objects (last row).
\begin{table}
\begin{tabular}{c c c c} \hline \hline Row ID & Object Mask & Pose & segm mAP (\(\uparrow\)) \\ \hline
0 & - & - & 93.118 \\
1 & ✓ & - & 94.542 \\
2 & - & ✓ & 96.016 \\
3 & ✓ & ✓ & **96.592** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on our key components (no AL).
whereas a coarse object binary mask provides information about the object as a whole. When both modules are present (Row ID 3), the network achieves the best performance, since they provide a coarse-to-fine refinement over their individual contributions. This corresponds to our full setting.
### Quantitative results with active learning
The learning framework and data modifications per iteration in the active learning (AL) setup are described in Section 3.4. In Table 4, we present the mAP scores for the different models and contrast them with the active learning setup on our model, on the same train-test split of 50/500 real-world images. Naturally, all models without the AL framework overfit on the 50-image training set, with the performance of the _best_ model being \(\sim\)10% lower than our AL framework.
We also compare the annotation times required for labeling all images in our enhancement set (see Section 3.4) purely based on segmentation results output from OPD and Mask2Former. That is, no active learning is considered here. Table 5 presents these results and compares them against our AL framework. We clearly see that the number of images, as well as interactable parts, that need manual annotation is greatest for OPD, followed by Mask2Former. This number drops drastically when our AL model is considered. As such, our AL model requires the least time to obtain 2D segmentation masks.
Finally, we also show the AL process for all four iterations in Table 6. The key entries in this table are the last three columns, which give the number of images with "perfect" segmentations, the number of images/parts with "bad" segmentations, and the annotation time taken to fix them, respectively (see Section 3.4 for the interpretation of "perfect" and "bad"). We observe that as the iterations progress, the annotation time decreases, which can be attributed to the model's improving generalization ability with more labeled training data.
### Ablation studies with active learning
In Table 7, we show timing comparisons for annotating enhancement set images (refer to Section 3.4 for terminology) using different versions of our model. We again observe that the pose estimation module (Row ID 2) provides better segmentation masks on unseen images compared to the object mask prediction module (Row ID 1), resulting in less human intervention for rectifying the masks, as recorded by the annotation time in the last column. Our proposed model (Row ID 3) requires minimum time effort from humans, validating, yet again, our network design choices.
## 6 Conclusion
We advocate active learning as a general and effective means to obtain high-accuracy instance segmentations. It may be the most viable option to achieve close-to-error-free performance on arbitrary test sets. If properly designed, AL can significantly reduce human annotation effort for dataset preparation. In this work, we realized both goals for the specific task of instance segmentation of interactable parts from real scene images containing articulated objects.
Our contribution also includes a high-quality and diverse dataset of annotated real photographs, which we will continue to scale up to serve the vision community. We would also like to endow the annotated parts with motion parameters. On the technical side, there is much room for improvement in speeding up the correction of erroneous segmentations during AL. Additional priors beyond object poses may also be explored to facilitate dynamic part segmentation. Finally, we would like to extend our AL framework to other motion- or functionality-aware vision and annotation tasks.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Iter. & Train & Test & Perfect & Bad & Time (hr) \\ \hline
0 & 50 & 500 & - & - & - \\
1 & 340 & 210 & 260 & 30 / 97 & 0.82 \\
2 & 445 & 105 & 82 & 23 / 53 & 0.39 \\
3 & 550 & - & 75 & 30 / 89 & 0.46 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Different data and efficiency statistics over each iteration of the active learning process on enhancement set.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & AL & segm / bbox mAP (\(\uparrow\)) & Time (hr) \\ \cline{2-4} OPDRCNN-C & - & 62.190 / 63.630 & - \\ Mask2Former & - & 75.626 / - & - \\ Ours & - & 88.088 / 88.242 & - \\ Ours & ✓ & **97.780** / **97.780** & 1.675 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative comparison against competing methods without AL and our AL framework. (50train/50test)
\begin{table}
\begin{tabular}{l c c c} \hline \hline Row ID & Object Mask & Pose & Time (hr) \(\downarrow\) \\ \hline
0 & - & - & 3.882 \\
1 & ✓ & - & 2.302 \\
2 & - & ✓ & 1.854 \\
3 & ✓ & ✓ & **1.675** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Ablation study on key modules of our method with active learning on our enhancement set.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & \#Images/Parts & Time (hr) \(\downarrow\) \\ \hline OPDRCNN-C & 483 / 1640 & 7.25 \\ Mask2Former & 324 / 1102 & 5.01 \\ ours (AL) & 83 / 239 & **1.675** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of different methods in _manually_ annotating segmentation masks for images in our enhancement set.
Figure 4: Qualitative results for test-set images from our dataset. Left to right – Prediction results using OPDRCNN-C [15], Mask2Former [7], our method using only the object mask prediction branch (i.e., without the pose estimation branch), our method using only the pose estimation branch (i.e., without the object mask prediction branch), our method (i.e., with both prediction branches), and the ground truth (GT). We show results on different object categories, and observe that our method outputs better segmentation masks over interactable parts, even when multiple objects exist in the input image. Our results also show that the presence of the object mask predictor and pose estimator modules can effectively reduce segmentation errors from unwanted objects (second row) and object side surfaces (first, fifth, and last rows). More results in the supplementary material. |
2302.06365 | Evolution of SLAM: Toward the Robust-Perception of Autonomy | Simultaneous localisation and mapping (SLAM) is the problem of autonomous
robots to construct or update a map of an undetermined unstructured environment
while simultaneously estimate the pose in it. The current trend towards
self-driving vehicles has influenced the development of robust SLAM techniques
over the last 30 years. This problem is addressed by using a standard sensor or
a sensor array (Ultrasonic sensor, LIDAR, Camera, Kinect RGB-D) with sensor
fusion techniques to achieve the perception step. Sensing method is determined
by considering the specifications of the environment to extract the features.
Then the usage of classical Filter-based approaches, the global optimisation
approach which is a popular method for visual-based SLAM and convolutional
neural network-based methods such as deep learning-based SLAM are discussed
whereas considering how to overcome the localisation and mapping issues. The
robustness and scalability in long-term autonomy, performance and other new
directions in the algorithms compared with each other to sort out. This paper
is looking at the published previous work with a judgemental perspective from
sensors to algorithm development while discussing open challenges and new
research frontiers. | B. Udugama | 2023-02-13T13:51:50Z | http://arxiv.org/abs/2302.06365v1 | # Evolution of SLAM: Toward the Robust-Perception of Autonomy
###### Abstract
Simultaneous localisation and mapping (SLAM) is the problem faced by autonomous robots of constructing or updating a map of an unknown, unstructured environment while simultaneously estimating their pose within it. The current trend towards self-driving vehicles has driven the development of robust SLAM techniques over the last 30 years. The problem is addressed by using a single sensor or a sensor array (ultrasonic sensor, LIDAR, camera, Kinect RGB-D) together with sensor fusion techniques to achieve the perception step. The sensing method is chosen by considering the characteristics of the environment from which features are extracted. Classical filter-based approaches, global optimisation approaches, which are popular for visual SLAM, and convolutional neural network-based methods such as deep learning-based SLAM are then discussed with respect to how they overcome the localisation and mapping issues. Robustness and scalability in long-term autonomy, performance, and other new directions in the algorithms are compared with each other. This paper reviews previously published work with a critical perspective, from sensors to algorithm development, while discussing open challenges and new research frontiers.
SLAM, Visual-SLAM, Autonomous Robots, Localization, Mapping, Sensors, Computer vision, Perception, Deep learning, Neural networks.
## I Introduction
The capability to manoeuvre through a complicated unknown environment is one of the foremost challenges in the field of autonomous robotics [1]. The solution to this problem is divided into two frontiers: (1) Simultaneous Localization and Mapping (SLAM), which generates the map information while localising the robot in the environment, and (2) the navigation algorithm, which plans the traversal path towards the goal while avoiding obstacles [2, 3, 5]. The success of the autonomous robotics field depends heavily on the solution of the SLAM problem, with applications ranging from agriculture to oil exploration, medical applications to nuclear laboratories, and intelligent vehicles to space expeditions [4]. Research focus on this topic has increased with the backing of the automobile industry due to the current trend towards self-driving cars.
Furthermore, for robust and exact localisation one could propose the use of a Global Navigation Satellite System (GNSS), which provides autonomous geospatial information with global coverage. However, it is not a perfect solution to this problem due to its drawbacks. Even though the precision of traditional GNSS arrangements can be raised with the use of correctly aligned base stations, global accessibility to such a system remains a problem. Furthermore, GNSS is affected by spatial constraints that cannot be predicted or eliminated; in particular, signal interference and line-of-sight disturbances can block the communication, which has catastrophic consequences in the form of erroneous localisation [6, 27].
Road markings or roadway identification can be used to navigate a vehicle on a road, as in Advanced-Driver-Assistance-Systems (ADAS) technology, which includes stability control, anti-lock braking, predictive cruise control, and adaptive lane control. This approach removes the requirement for a long-distance communication system and focuses on local features to ensure optimal and safe traversal of the road [11]. A significant drawback of these systems is that they need a considerable number of identifiable characteristics, such as road signs and markings on the road. Increasingly complex conditions (mostly urban, with intersections, curved streets, and so forth) do not generally give enough data to confine a vehicle within a safe path. More precise and accurate information is therefore required to develop a framework that guarantees reliable and safe navigation. Thus, other positioning systems ought to be considered.
The basic functionality of an autonomous vehicle is to localise itself with respect to a global or a local frame prior to any other planning or sensing steps [1]. Predicting the optimal safe path while tracking other moving objects to avoid obstacles is the next question that arises after obtaining an accurate position and orientation. SLAM addresses this requirement, while remaining sufficiently general to permit the use of any sensor or estimation procedure that suits the essential task of estimating both location and map simultaneously. The mapping step is of prime interest for self-driving, as it offers the first level of perception required to make proper decisions in the navigation steps.
Solving the SLAM problem is treated as one of the major areas in autonomy and an intrinsic part of self-driving robots [27]. Numerous issues still prevent the use of SLAM on autonomous vehicles that must travel many kilometres in altogether different conditions. This highlights the major problems for SLAM applications in autonomy: pose estimation tends to drift over long distances, and maps are not feasible in almost any driving circumstance. The local and concurrent positioning estimate provided by SLAM algorithms continues to deviate from the actual trajectory with the travelled distance [23]. Therefore, without prior knowledge or exact details, maintaining proper positioning over several kilometres becomes almost impossible. This brings us to the next problem, which is to provide maps that are appropriate for the task of localisation regardless of conditions such as the season, weather, geographical area, and traffic. Several approaches have been proposed to tackle this challenge, such as creating a map with careful monitoring of distinguishing data to reuse it later, or using wireless communication infrastructures to exchange and improve the maps created by other road users [9, 12].
The SLAM framework encompasses the simultaneous estimation of the state of a robot fitted with onboard sensors and the construction of a map representation. Throughout this paper, the evolution of SLAM from sensor data extraction to the algorithmic approaches for state estimation and simultaneous mapping is discussed, along with the loop closure technique, which is used to refine the localisation by revisiting already mapped areas. However, the different benchmarks and data sets available for experimenting with algorithms are not discussed in this review.
## II Anatomy of SLAM Implementations
A SLAM system consists mainly of two parts, referred to as the front-end and back-end implementations. The perception stage, which includes sensing and modelling the data, is the front end, while the back end carries out inference based on the abstract data generated by the front end. Exploring these two areas is the focus of this section, beginning with Maximum a Posteriori (MAP) estimation [27, 29].
### _MAP Estimation - SLAM Back-End_
The current _de-facto_ standard SLAM formulation came from Lu and Milios [57], followed by Gutmann and Konolige [58]. Since then, most methods in this area have improved efficiency and robustness [7, 10, 27, 29]. All of the above methods formulate SLAM as a maximum a posteriori (MAP) estimation problem [43], typically using a factor graph to model the dependencies between the states.
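For concreteness, the MAP formulation referred to above can be written in a standard form (this is a generic statement, not a verbatim reproduction of [43]): the estimate of the robot states and landmarks \(\mathcal{X}\) given the measurements \(\mathcal{Z}\) is

\[
\mathcal{X}^{\star} = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X}\mid\mathcal{Z})
                    = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X})\prod_{k} p(z_{k}\mid\mathcal{X}_{k}),
\]

and, under the usual assumption of Gaussian noise models \(z_{k}=h_{k}(\mathcal{X}_{k})+\epsilon_{k}\), this reduces to the nonlinear least-squares problem

\[
\mathcal{X}^{\star} = \operatorname*{arg\,min}_{\mathcal{X}} \sum_{k}\big\lVert h_{k}(\mathcal{X}_{k})-z_{k}\big\rVert^{2}_{\Omega_{k}},
\]

which is exactly what the factor-graph back end depicted in Figure 2 optimises.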
### _Perception-Based SLAM Front-End_
In real robot implementations, it can be challenging to express the sensor readings directly as an analytical function of the state, as required by MAP estimation. For example, when the raw sensor data is an image, it may be difficult to express the intensity of each pixel as a function of the SLAM state; the same difficulty arises with simpler sensors [19]. In each of these scenarios, the issue stems from the fact that we cannot design a sufficiently general, yet tractable, representation of the physical world; and even with such a general representation, it would be hard to write an analytical function connecting the measurements to the state variables.
Therefore, a front-end module, which extracts relevant features from the sensor data, is typically placed before the SLAM back end. For example, in visual SLAM, the front end extracts the pixel locations of a few distinctive points in the world, which are then easily modelled in the back end. The front end is also in charge of associating each measurement with a specific landmark: this is the so-called data association [27].
Figure 1 shows a visual depiction of a typical SLAM process. The data association module at the front end includes both short-term and long-term data association. Short-term data association is responsible for associating corresponding features in consecutive sensor measurements; for example, it tracks the fact that two pixel measurements in successive frames correspond to the same 3-D point. Long-term data association, on the other hand, is responsible for associating new measurements with older landmarks (loop closure). The back end generally feeds information back to the front end, e.g. to support loop closure detection and validation.
## III SLAM for Autonomy I: Robustness
A SLAM framework may fail in several ways: malfunctions can be algorithmic or hardware-related. The former involves failure modes caused by limitations of existing SLAM algorithms, for example the difficulty of handling excessively dynamic or harsh environments. The latter may be caused by sensor degradation or actuator failure.
Fig. 1: SLAM process, with front end and back end. The back end provides feedback to the front end to track and validate the loop closure [27].
Fig. 2: _Graph representation of SLAM_: (\(x_{1},x_{2},\ldots\)) represent the successive poses of the robot, (\(l_{1},l_{2},\ldots\)) represent the landmark position estimates, and \(K\) represents the intrinsic calibration parameters. Black squares represent the factor nodes: "u" denotes control inputs, "v" denotes sensor observations, "c" denotes loop closures, and "p" denotes prior factors [27].
For long-term deployment, addressing such failure modes is critical, because simplifying assumptions about the structure of the environment can no longer be maintained based on the onboard sensors alone. This section reviews the main challenges to algorithmic robustness and discusses open problems, including hardware-related vulnerabilities [29].
Data association is one of the primary sources of algorithmic failure. As stated in the Anatomy of SLAM section, data association matches each measurement with the portion of the state to which it relates. In feature-based visual SLAM, for instance, it associates each visual feature with a specific landmark. This task is especially challenging in the presence of perceptual aliasing, in which different places produce the same sensor footprint. Under perceptual aliasing, data association creates wrong measurement-state matches (false positives), which in turn result in wrong state estimates. Conversely, if data association wrongly discards a valid measurement as spurious (false negatives), estimation accuracy suffers from the loss of information. The situation is made worse by the presence of unmodelled dynamics in the environment, which can mislead data association. Most current SLAM approaches take the rather general view that the environment remains static as the robot moves through it. The static-world assumption holds for a single mapping run in small-scale environments, provided there are no short-term dynamics (e.g. moving people and objects). For long-term operation over large-scale maps, change is unavoidable.
The resilience of SLAM in extreme environments, such as underwater, is another aspect [14, 31]. The difficulties are the continually changing conditions and the fact that conventional sensors (e.g. LIDAR) are hard to use in such environments.
### _Survey on Robustness_
Robustness problems related to incorrect data association can be addressed in the front end and/or the back end of a SLAM process. The front end has conventionally been in charge of establishing correct data associations. Short-term data association is the easier one: when the sensor sampling rate is fast compared to the robot dynamics, tracking features that correspond to the same 3-D landmark is straightforward. For instance, if a 3-D point needs to be tracked across successive images and the frame rate is sufficiently high, standard descriptor matching approaches or optical flow provide reliable tracking: the camera viewpoint does not change dramatically at a high frame rate, so the features at time t+1 remain close to those seen at time t. Long-term data association at the front end is more challenging, as it requires the detection and validation of loop closures. A brute-force approach, which compares the current measurement features (e.g. images) against all previously observed features, becomes impractical for loop closure detection at the front end. Bag-of-words models [26] avoid this intractability by quantising the feature space, enabling more efficient storage and search. Bag-of-words can be organised into hierarchical vocabulary trees [39] so that large-scale datasets can be searched efficiently.
However, such techniques cannot cope with severe illumination variations, as visual words can no longer be matched. This has led to alternative methods that explicitly account for such variations by matching sequences [13], combining different appearances of a place into a unified representation [49], or using both spatial and appearance information [17]. Lowry et al. [49] provide a detailed survey of visual place recognition. Feature-based methods are also used for laser-based loop closure detection in SLAM front ends; for example, Tipaldi et al. [50] propose descriptors for 2-D laser scans.
Loop closure validation then involves additional geometric verification steps to confirm the quality of the candidate closure. In vision-based applications, RANSAC is commonly used for geometric verification and outlier rejection. In LIDAR-based methods, a loop closure can be validated by checking how well the current laser scan matches the existing map, that is, how small the residual matching error is.
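A hedged illustration of such a residual check for a laser-based loop closure is sketched below; the candidate rigid transform (R, t) is assumed to come from the loop-closure front end (e.g. from a scan-matching routine), and the acceptance threshold is illustrative rather than taken from a specific system.

```python
import numpy as np

def loop_closure_residual(scan_xy: np.ndarray, map_xy: np.ndarray,
                          R: np.ndarray, t: np.ndarray) -> float:
    """Mean nearest-neighbour distance between a transformed scan and the map.

    scan_xy, map_xy: (N,2) and (M,2) point sets; R (2x2), t (2,): candidate
    2-D rigid transform proposed by the loop-closure front end.
    """
    transformed = scan_xy @ R.T + t                      # apply candidate pose
    d2 = ((transformed[:, None, :] - map_xy[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(axis=1)).mean())         # small value => good match

# A candidate closure is accepted only if this residual falls below a
# system-specific threshold, e.g. a few centimetres for indoor LIDAR.
```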
In dynamic environments the problem is manifold. First, changes must be detected or tracked by the SLAM system. Whereas mainstream methods attempt to discard the dynamic portion of the scene [18], some works include dynamic elements as part of the model [12], [23]. The second task is to model permanent or semi-permanent changes in the SLAM system and to understand how and when to update the map accordingly. Current SLAM systems that deal with dynamics either maintain multiple (time-dependent) representations of the same location [16] or a single representation parameterised by some time-varying quantity [14].
### _Available Challenges_
This section explores open problems and new research questions concerning robustness in SLAM.
#### Ii-B1 Error pruned SLAM with Recovery option
While progress has been made over the history of SLAM, established SLAM algorithms remain fragile in the presence of outliers. The main reason is that almost all robust SLAM techniques are based on iterative optimisation of non-convex cost functions. The ability to reject an outlier depends on the quality of the initial guess fed to the optimisation; furthermore, the system is inherently fragile: the inclusion of a single outlier degrades the estimate, which in turn degrades the ability to discern subsequent outliers. Such failures drive the optimisation to a wrong linearisation point, from which recovery, particularly in incremental setups, is not trivial. An ideal SLAM approach should be fail-safe and failure-aware, i.e. the system needs to be aware of imminent failure and provide recovery mechanisms that can re-establish proper operation. None of the existing SLAM approaches provides these capabilities. A tighter integration between the front end and the back end is one conceivable way of achieving this, but the problem is still open [42].
#### Ii-B2 Robustness to Hardware failure
Although sensor and actuator failure detection lies somewhat outside the scope of SLAM proper, hardware malfunction directly impacts the SLAM process, and SLAM can therefore play a vital role in detecting and mitigating sensor and locomotion failures. If the precision of a sensor
deteriorates because of degradation, unsuitability for the environment, or ageing, the quality of the sensor measurements no longer matches the noise model assumed by the back-end inference, which leads to poor estimates. How can the malfunction of a sensor be detected? How can the sensor fusion be adjusted accordingly? More broadly, how can conflicting information from multiple sensors be resolved? This seems critical in safety-critical applications (e.g. autonomous cars), where misinterpretation of sensor data could endanger human lives [33].
#### Iii-B3 _Relocalisation using loop closure_
Although appearance-based methods can close loops at different times of day or across seasons, the resulting loop closures are topological in nature rather than metric. Feature-based methods remain the standard for metric relocalisation, yet they lack the robustness to function properly under such appearance changes. Spatial information intrinsic to the SLAM problem, such as trajectory matching, could be used to overcome these limitations. It would also be useful to build a map with one sensor configuration and later localise in the same map with a different type of sensor [1].
#### Iii-B4 Drift in the generated Maps with time
Traditional SLAM approaches were developed under the assumption of a fixed, rigid environment; however, the real world is non-rigid, both because of motion and because of the intrinsic deformation of objects. An ideal SLAM system should be able to reason about, and produce maps of, such non-rigid scenes over extended periods of time. The computer vision community has investigated the recovery of structure from non-rigid objects since the 1980s. Recent results on non-rigid structure from motion, such as [32], [37], impose fewer restrictions, and Newcombe et al. [51] have addressed non-rigid reconstruction at small scale.
#### Iii-B5 Automated tuning of the parameters
In order to work correctly in a given scenario, SLAM systems require extensive parameter tuning. These parameters include thresholds that control feature matching, RANSAC parameters [27], and criteria that decide when new factors are added to the graph or when the loop closure search should be triggered. If SLAM has to work "out of the box" in arbitrary scenarios, methods for automatically tuning the involved parameters need to be considered.
## IV SLAM for Autonomy II: Scalability
Though the most compelling demonstrations of recent SLAM implementations have been in indoor building settings, in many applications autonomous robots need to operate for extended periods over much larger areas. Such applications include ocean inspection, non-stop service robots in continually changing cities, and large-scale precision farming. In these scenarios, with the continuous discovery of new areas and growing operation time, the size of the factor graph underlying SLAM can grow unboundedly. In practice, computation time and memory are constrained by the robot's hardware [39]. It is therefore essential to design SLAM techniques whose computational and memory complexity remains bounded.
In the worst case, successive linearisation methods based on direct solvers imply memory consumption that grows quadratically with the number of variables, whereas with iterative solvers [19] memory consumption grows linearly in the number of variables. The situation is worsened by the fact that, when a location is revisited many times, factor graph optimisation becomes less efficient, because nodes and edges are continuously added to the same spatial region, compromising the sparsity structure of the graph.
This section examines several established ways to control, or at least delay, the growth of the problem size and explores outstanding issues.
### _Survey on Scalability_
This section reviews specific ways to control the complexity of factor graph optimisation: 1) sparsification techniques, which trade off some information loss for memory and computational efficiency, and 2) methods that split the computation among multiple robots or processors.
#### Iv-A1 Feature Sparsification - nodes and edges
In this class of strategies, scaling is accomplished by reducing the number of nodes added to the graph, or by pruning less informative nodes and factors. Ila et al. [52] use an information-theoretic approach to add only highly informative nodes and measurements to the graph. Johannson et al. [53], when possible, avoid adding new nodes to the graph by inducing new constraints between existing nodes, so that the number of variables grows only with the size of the explored area and not with the duration of the mapping session. Kretzschmar et al. [54] propose an information-based criterion to decide which nodes to marginalise when optimising the graph.
Continuous-time trajectory estimation is another line of work that reduces the number of variables over time; the first SLAM method of this kind represents the robot's continuous trajectory with cubic splines [13]. In that formulation, the nodes in the factor graph are the control points of the spline. In a batch optimisation formulation, Furgale et al. [55] proposed the use of basis functions, in particular B-splines, to approximate the robot trajectory.
#### Iv-A2 Parallel SLAM
Parallel SLAM algorithms split the computation among multiple processors to share the workload of factor graph optimisation. The key idea is to partition the factor graph into several subgraphs and to optimise the overall graph by alternating local optimisation of each subgraph with a global refinement. This idea dates back to early approaches to large-scale mapping [19], which proposed submapping strategies for factor graph optimisation and organised the submaps in a tree structure; these have been referred to as submapping methods.
#### Iv-A3 Distributed SLAM
One way to map a large-scale environment is to deploy several robots performing SLAM and to split the
scene into smaller regions, each mapped by a different robot. This approach has two main variants: the centralized one, in which robots build submaps and transfer their local information to a central station that performs the inference [46, 49], and the decentralized one, in which no central data fusion is available.
### _Available Challenges_
Despite the efforts to control the complexity of factor graph optimisation, the literature still has significant shortcomings regarding specific aspects of long-term operation [27].
#### Iv-B1 Mapping in a large-scale area
The question of how the map should be stored during long-term operation is largely unexplored. Even when memory is not a tight constraint, e.g. when data is stored in the cloud, raw representations such as point clouds are wasteful in terms of memory; similarly, storing feature descriptors for vision-based SLAM quickly becomes cumbersome. Initial solutions have recently been proposed for compressing known maps for localisation [43] and for memory-efficient dense reconstruction [36].
#### Iv-B2 Sparsification
An underlying issue for long-term operation is how much information in the graph should be retained, and how to determine when this information becomes outdated and can simply be discarded. When, if ever, is it safe to forget? What can be forgotten, and what is essential to keep? Can portions of the map be "offloaded" and recalled when needed? Although the answers obviously depend on the task, no grounded solutions to these questions have been proposed in the literature [27].
#### Iv-B3 Multirobot applications towards robust mapping
Whereas solutions for outlier rejection have been proposed for the single-robot case, they are largely missing in multi-robot SLAM applications. Dealing with spurious measurements is particularly challenging for several reasons. Firstly, the robots may not share a common reference frame, making it harder to detect and discard incorrect loop closures. Secondly, the robots must detect outliers from very partial and local information in the distributed setting. An early attempt to address this problem is [48], in which robots actively verify proximity observations by means of a rendezvous strategy before data is fused.
#### Iv-B4 Platforms with Limited Resources
One largely unexplored question is how to adapt current SLAM methods to platforms with severe computational constraints. This issue becomes essential as the size of the platform shrinks, e.g. for mobile phones, micro aerial vehicles, or robotic insects [29]. Most SLAM algorithms are too expensive for these platforms, and it would be desirable to have algorithms that expose a 'knob' allowing the accuracy to be traded off against computational cost. Similar challenges arise in the context of multiple robots: how can reliable operation of multi-robot teams be guaranteed under tight bandwidth constraints and communication dropouts?
## V New Frontiers: Perception and Exploring
Intriguingly, the onboard sensor technology is the fundamental driver of each SLAM solution. Appropriate algorithms are developed according to the characteristics of the sensors, while the fusion of several sensors is considered to mitigate the fundamental errors of individual sensors. In the beginning, range sensors such as acoustic and Light Detection and Ranging (LIDAR) sensors were used to obtain distance information [23]. Such sensors provide precise depth information but are not rich in features. Later systems used mostly vision sensors, such as monocular cameras and 360-degree cameras, as the main source of information for perception. Nevertheless, they lacked a comprehensive estimate of depth. More advanced types of sensors that can measure both depth and colour were then introduced, such as RGB-D sensors and stereo cameras. They are used to generate point clouds while measuring the associated depth with respect to the same point of reference. This section not only surveys all these sensors but also reviews their efficacy in terms of energy efficiency, range, durability, maintenance, cost, precision, and spatial restrictions, which are essential for long-term deployments of autonomy.
Critical drivers for SLAM have been the introduction of new sensors and the use of new algorithmic and processing tools. This section discusses traditional and emerging sensors, as well as the challenges and opportunities they raise in the SLAM context. The following subsection then addresses the role of deep learning as a significant frontier for SLAM, analysing how this technology can improve, affect, or even reframe the SLAM problem.
### _Innovative and Conventional Sensors for SLAM_
#### V-A1 Brief survey
Traditional sensors are reviewed in this section. The solution to any kind of SLAM problem depends largely on the sensor technology used.
_a) Ultrasonic SONAR Sensors:_ Acoustic sensors were popular in early implementations of SLAM solutions. A customised Echo-SLAM was implemented using a microphone array and surrounding speakers as an omni-directional acoustic sensor system [39]. Such range-only SLAM systems, which operate together with sensor networks, can be combined with more feature-rich SLAM algorithms for the purpose of reducing pose drift over time. Predominantly, these sensors are Sound Navigation and Ranging (SONAR) sensors, which use time-of-flight strategies to compute the location of an obstacle. The ultrasonic sensor, which falls under the SONAR category, is used extensively in robotics. Ultrasonic sensors are among the most inexpensive sensors used in SLAM implementations and hence the most popular among acoustic
Fig. 3: HC-SR04 Ultrasonic Sensor [29]
sensors. They use ultrasonic waves (above 20kHz) which cannot be heard by humans.
These are appropriate for dark conditions due to their insensitivity to illumination and opacity. These sensors operate well in the presence of dirt and humidity, but heavy contamination can influence the readings [44]. Moreover, because sound waves require a medium to travel through, they do not work in a vacuum. Also, these sensors are vulnerable to reflection distortion depending on surface smoothness, since smooth surfaces tend to absorb or deflect acoustic waves rather than reflecting them back. The dimensions of acoustic sensors are generally limited to a few inches due to their compact form, and the power requirement of an acoustic sensor ranges from milliwatts to watts. Considering all of these facts, ultrasonic sensors remain affordable and robust distance-measuring equipment: their accuracy is about 1% to 3% of the maximum depth range, while the effects of ambient temperature, moisture level, and air pressure are usually handled by compensation techniques employed within the sensor itself [38].
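To make the time-of-flight principle concrete, the sketch below converts an HC-SR04-style echo time into a distance, including a simple temperature compensation of the speed of sound as mentioned above; pin handling and sensor I/O are omitted, and the compensation formula is the usual approximation rather than a vendor-specified one.

```python
def ultrasonic_distance(echo_time_s: float, temperature_c: float = 20.0) -> float:
    """Distance (m) from a round-trip echo time measured by an ultrasonic sensor.

    The speed of sound is compensated for air temperature with the usual
    approximation c ~ 331.3 + 0.606 * T (m/s); humidity and pressure are ignored.
    """
    speed_of_sound = 331.3 + 0.606 * temperature_c   # m/s
    return speed_of_sound * echo_time_s / 2.0        # halve: out-and-back path

# Example: a 5.8 ms echo at 20 degrees C corresponds to roughly 1 m.
print(round(ultrasonic_distance(0.0058), 2))
```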
_b) LIDAR:_ It is the abbreviation for Light Detection and Ranging. LIDAR's fundamental architecture has been around since the 1960s for obtaining aerial distance mapping. A LIDAR works much like an ultrasonic detector, but it uses electromagnetic waves in and around the visible-light spectrum instead of sound waves. By emitting up to 1,000,000 pulses per second, a LIDAR produces a 3D representation of its environment known as a point cloud. LIDARs can provide 360 degrees of perception and are quite precise in their depth estimates (\(\sim\)2 cm) [27].
_c) Depth-sensing cameras:_ Light-emitting range cameras are not new devices; however, with the introduction of the Xbox Kinect game console, they became mainstream equipment. They operate according to various principles, such as structured light, time of flight, diffraction gratings, or coded apertures. Structured-light cameras operate by triangulation; their precision is therefore limited by the distance between the camera and the pattern projector [12, 23]. On the other hand, the precision of Time-of-Flight (ToF) cameras depends only on the time-of-flight measurement device, thus providing the highest range accuracy. While a poor signal-to-noise ratio (SNR) and high cost marked the very first generation of ToF and structured-light cameras, they quickly became popular through computer gaming, which helped make them cheaper and improved their precision. Since these cameras carry their own light source, they also operate in dark and untextured environments, allowing impressive SLAM performance to be achieved [18].
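The triangulation argument above can be made explicit with the standard stereo/structured-light relation (a textbook approximation, not a device-specific model): for focal length \(f\) (pixels), baseline \(b\) (metres) between camera and pattern projector, and measured disparity \(d\) (pixels),

\[
Z = \frac{f\,b}{d}, \qquad \sigma_{Z} \approx \frac{Z^{2}}{f\,b}\,\sigma_{d},
\]

where \(\sigma_{d}\) is the disparity (pattern-matching) uncertainty. Depth error therefore grows roughly quadratically with range, which is why the camera-to-projector baseline limits the precision of structured-light devices, whereas ToF accuracy does not degrade with range in the same way.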
_d) Event-based cameras:_ In contrast to conventional frame-based cameras, which transmit complete frames at a fixed frame rate, event-based cameras such as the Dynamic Vision Sensor [15] or the Asynchronous Time-based Image Sensor [16] transmit only local pixel-level brightness changes, as events, at the time they occur.
These have five key benefits relative to traditional frame-based cameras: a latency of about 1 ms, an update rate of up to 1 MHz, a dynamic range of up to 140 dB (versus 60-70 dB for regular cameras), a power consumption of about 20 mW (versus 1.5 W for conventional cameras), and very low bandwidth and processing demands (because only brightness changes are transmitted). This enables a new category of SLAM methods that can operate in scenes with high-speed motion [9] and high dynamic range [12][27], in which traditional cameras fail. However, since the output is a sequence of asynchronous events, standard frame-based computer-vision techniques are not applicable. This calls for a paradigm shift away from the computer-vision strategies developed over the last five decades. Recently, event-based methods for localisation and mapping have been proposed [32], [47]. The design goal of these algorithms is to allow each incoming event to update the estimated state of the system asynchronously, thereby preserving the event-based nature of the sensor and enabling microsecond-latency algorithms to be designed [178].
#### Ii-A2 Open Problems
Effective range and interference with other ambient light sources (such as daylight) are the main
Fig. 4: Velodyne HDL-64E. Popular sensor among self-driving cars [29].
Fig. 5: Kinect XBOX 360: RGB-D Sensor [29].
Fig. 6: DVS128 Event Camera [29].
drawbacks of active range cameras; these deficiencies can be mitigated by emitting more power.
Light-field sensors have rarely been used in SLAM because of the large quantity of data they generate and the higher processing power required. Nonetheless, recent research has demonstrated that they are well suited to SLAM implementations, since they allow the motion estimation problem to be formulated as a linear optimisation and can provide more accurate motion estimates if properly designed [49].
What is the perfect SLAM sensor? An open question is which sensor systems long-term SLAM research in the coming decades should be built upon. The performance of a given sensor-algorithm pair for SLAM obviously depends on the sensor's limitations, on the algorithm, and on the environment [22]. There has not yet been a comprehensive study of how to select architectures and sensors for the best results. Research by Censi et al. [56] showed that performance for a given task also depends on the sensing power available; it also suggests that the optimal sensing architecture may involve several sensors that are automatically switched on and off according to the required quality level, or that observe the same quantity through different physical principles for redundancy [40].
### _Deep Learning approach_
A paper that aims to discuss longer-term directions in SLAM without considering deep learning would be incomplete. Deep learning has transformed computer vision and is already making significant contributions to conventional robotic systems, including SLAM implementations [22, 51].
Researchers have already demonstrated that a deep neural network can be trained to regress the inter-frame pose between two images captured by a moving robot directly from the raw image pair [53], effectively replacing the standard geometric machinery of visual odometry. Likewise, the 6-DoF camera pose can be estimated with regression forests [47] and with deep convolutional neural networks [29], and the depth of a scene can be estimated from a single frame [28], [41], [58].
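A minimal PyTorch-style sketch of such a learned inter-frame pose regressor is shown below; the architecture is purely illustrative and does not reproduce the networks of [53] or [29].

```python
import torch
import torch.nn as nn

class RelativePoseNet(nn.Module):
    """Toy CNN regressing a 6-DoF relative pose from a stacked image pair."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # two RGB frames stacked -> 6 channels
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)             # 3 translation + 3 rotation (axis-angle)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)
        return self.head(self.encoder(x).flatten(1))

# Example usage (random inputs stand in for consecutive camera frames):
# pose = RelativePoseNet()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```

Trained with a supervised pose loss against ground-truth odometry, such a regressor can replace the feature-matching front end of visual odometry, though in practice drift still needs to be corrected by a back end or loop closures.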
## VI Conclusions
In the last 30 years, tremendous progress has been made on simultaneous localization and mapping [27]. Various fundamental problems were resolved along the way, while the adoption of new technologies, new sensors, and modern computational tools has introduced numerous new and exciting questions.
Regarding the question "Is SLAM required?", the answer depends on the application, but it is often a definite yes [27]. In a wide range of real-world contexts, from autonomous cars to mobile devices, SLAM and associated technologies, such as visual-inertial odometry, are rapidly being adopted. Where satellite positioning, such as GPS, is unavailable or insufficiently precise, SLAM techniques are commonly used to deliver accurate metric positioning. With the popularity of mobile phones and positioning data, one can imagine cloud-based positioning and navigation applications coming online and maps becoming commoditised [9].
In some systems, precise localisation is achieved differently; for instance, self-driving cars often match current sensor data against a high-definition map of the environment [54]. If the a priori map is accurate, online SLAM is not essential. Nevertheless, online map updates are needed to deal with construction or significant changes in road infrastructure when operating in increasingly complex environments. The distributed updating and construction of maps generated by large fleets of autonomous vehicles is an important area for future research [51].
Different tasks are better served by distinct SLAM flavours. For example, a topological map can be used to evaluate the reachability of a particular location, but is not suited to motion planning and low-level control; a locally consistent metric map is well suited to obstacle avoidance and local interactions with the environment, but may sacrifice global accuracy; a globally consistent metric map enables the robot to plan routes over long distances [12].
One may also devise examples in which SLAM is entirely unnecessary and can be replaced by other techniques, such as visual servoing or "teach and repeat", to carry out repetitive navigation tasks. A more general way to choose the most appropriate SLAM system is to think of SLAM as a mechanism for computing a sufficient statistic that summarises all the robot's past observations; in that sense, which information to retain is strongly task-dependent.
With respect to the common question "Is SLAM solved?", this review argues that the answer depends on the combination of robot, environment, and performance requirements, and cannot be given until we reach the robust-perception age. Significant difficulties and critical concerns remain open for several applications and environments [34]. Further work in SLAM is required to achieve more reliable perception and navigation for long-term autonomous systems. As an academic endeavour with significant real-world impact, SLAM is not yet entirely solved [35].
Unsolved issues cover four main aspects: robust performance, high-level understanding, resource awareness, and task-driven inference. The design of self-tuning SLAM systems is a significant challenge for robustness, with many aspects largely uninvestigated [24]. Long-term autonomy requires a significant amount of research on strategies for building and maintaining maps, as well as policies that define when to remember, update, or forget information; similar difficulties arise for robotic platforms that are severely resource-constrained [50, 52].
In addition to reviewing the achievements and challenges ahead for the SLAM community, opportunities are explored related to the use of new sensors, new tools (e.g. convex relaxation and duality theory, or deep learning), and the role of active sensing [27]. SLAM remains an indispensable cornerstone for most robotics applications and, despite remarkable progress over the past decades, existing SLAM technologies are still far from providing insightful, actionable, and compact models of the environment, comparable to those effortlessly created and used by humans.
2306.04085 | XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages
and Meaning Representations | Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple
natural languages (NLs) into meaning representations (MRs) such as SQL, lambda
calculus, and logic forms. However, existing CLSP models are separately
proposed and evaluated on datasets of limited tasks and applications, impeding
a comprehensive and unified evaluation of CLSP on a diverse range of NLs and
MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual
semantic parsing featured with 22 natural languages and 8 meaning
representations by examining and selecting 9 existing datasets to cover 5 tasks
and 164 domains. We use XSemPLR to conduct a comprehensive benchmark study on a
wide range of multilingual language models including encoder-based models
(mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models
(Codex, BLOOM). We design 6 experiment settings covering various lingual
combinations (monolingual, multilingual, cross-lingual) and numbers of learning
samples (full dataset, few-shot, and zero-shot). Our experiments show that
encoder-decoder models (mT5) achieve the highest performance compared with
other popular models, and multilingual training can further improve the average
performance. Notably, multilingual large language models (e.g., BLOOM) are
still inadequate to perform CLSP tasks. We also find that the performance gap
between monolingual training and cross-lingual transfer learning is still
significant for multilingual models, though it can be mitigated by
cross-lingual few-shot training. Our dataset and code are available at
https://github.com/psunlpgroup/XSemPLR. | Yusen Zhang, Jun Wang, Zhiguo Wang, Rui Zhang | 2023-06-07T01:09:37Z | http://arxiv.org/abs/2306.04085v1 | # XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations
###### Abstract
Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) such as SQL, lambda calculus, and logic forms. However, existing CLSP models are separately proposed and evaluated on datasets of limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains. We use XSemPLR to conduct a comprehensive benchmark study on a wide range of multilingual language models including encoder-based models (mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models (Codex, BLOOM). We design 6 experiment settings covering various lingual combinations (monolingual, multilingual, cross-lingual) and numbers of learning samples (full dataset, few-shot, and zero-shot). Our experiments show that encoder-decoder models (mT5) achieve the highest performance compared with other popular models, and multilingual training can further improve the average performance. Notably, multilingual large language models (e.g., BLOOM) are still inadequate to perform CLSP tasks. We also find that the performance gap between monolingual training and cross-lingual transfer learning is still significant for multilingual models, though it can be mitigated by cross-lingual few-shot training. Our dataset and code are available at [https://github.com/psunlpgroup/XSemPLR](https://github.com/psunlpgroup/XSemPLR).
## 1 Introduction
Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) (Li et al., 2020; Xu et al., 2020; Dou et al., 2022; Sherborne and Lapata, 2021, 2022). As demonstrated in Figure 1, Cross-Lingual Semantic Parsing covers natural languages for geographically diverse users and various meaning representations, empowering applications such as natural language interfaces to databases, question answering over knowledge graphs, virtual assistants, smart home device control, human-robot interaction, and code generation.
However, current research on CLSP has three drawbacks. First, most existing research focuses on semantic parsing in English (Zelle and Mooney, 1996; Wang et al., 2015; Yu et al., 2018), limiting the development of multilingual information access systems for users in other languages. Second, current datasets have a poor coverage of NLs and MRs. Although there are encouraging efforts in developing CLSP models (Li et al., 2020; Dou et al., 2022; Sherborne and Lapata, 2022), their experiments only cover a few NLs and MRs, impeding comprehensive and unified evaluation on a diverse range of tasks. Third, due to the lack of a comprehensive CLSP benchmark, the performance of multilingual language models on CLSP is understudied. Some pretrained language models are proposed to solve cross-lingual tasks such as XLM-R (Conneau et al., 2019) and mT5 (Xue et al., 2020), while other large language models are designed for code generation such as Codex (Chen et al., 2021) and BLOOM (Scao et al., 2022). However, little research has focused on evaluating models on CLSP.
In this paper, we propose XSemPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations as summarized in Table 1. In order to cover a large variety of languages and meaning representations, we first select 9 high-quality CLSP datasets and then clean and format them in a unified manner. Then, we conduct a comprehensive benchmarking study on three categories of multilingual language models including pretrained encoder-based models augmented with pointer generator (mBERT, XLM-R), pretrained encoder-decoder models (mBART, mT5), and decoder-based large language models (Codex, BLOOM). To evaluate these models, we design 6 experiment settings covering various lingual combinations and learning sample scales, including Monolingual (and Monolingual Few-shot), Multilingual, and Cross-lingual Zero-Shot/Few-Shot Transfer.
Our results show that the encoder-decoder model (mT5) yields the best performance on monolingual evaluation compared with other models. Then, we pick two models with the best monolingual performance (i.e., mT5 and XLM-R) to conduct few-shot and zero-shot cross-lingual transfer learning from English to other low-resource languages. Results show a significant performance gap between monolingual training (Target NL -> Target NL1) and cross-lingual transfer learning (En -> Target NL). Furthermore, we find that this gap can be significantly reduced by few-shot learning on the target NL. We further train these two models in a multilingual setting and find that such training can boost the performance in some of the languages, although it usually hurts the performance in English. Finally, we test two large language models, Codex Chen et al. (2021) and BLOOM Scao et al. (2022). We find that the performance gap of cross-lingual transfer learning is significant for these two models as well.
Footnote 1: We use A -> B to denote the model finetuned on NL A and tested on NL B.
Our contributions are summarized as follows: (1) We propose XSemPLR to unify and benchmark 9 datasets covering 5 tasks, 22 natural languages, and 8 meaning representations for cross-lingual semantic parsing; (2) We perform a holistic evaluation of 3 groups of state-of-the-art multilingual language models on XSemPLR, demonstrating noticeable performance gaps of cross-lingual transfer models comparing English and other languages; (3) We show two effective strategies for boosting performance in low-resource languages: multilingual training and cross-lingual transfer learning.
## 2 XSemPLR Benchmark
Figure 2 shows the construction pipeline of XSemPLR. We first select 9 CLSP datasets according to our design principles. Then, we collect other NLs of the selected datasets. Finally, we clean the datasets by removing outliers and performing alignment between different languages.
### Design Principles
We carefully pick 9 datasets from all available semantic parsing datasets to construct XSemPLR according to two principles. First, the picked datasets need to have **high quality**, which means they are either annotated by humans or augmented with careful crafting (Moradshahi et al., 2020), and the translations of user inputs are provided by humans instead of machine translation models. Second, XSemPLR needs to be **comprehensive** (Hu et al., 2020), which means including diverse NLs and MRs for a broad range of tasks and applications.
### Data Collection
Table 1 summarizes the characteristics and statistics of different datasets in XSemPLR.
**Multilingual ATIS (MATIS)** contains user questions for a flight-booking task.
Figure 1: Overview of Cross-Lingual Semantic Parsing over various natural languages and meaning representations.
Figure 2: Construction pipeline of XSemPLR.
We collect the original English questions from ATIS (Price, 1990; Dahl et al., 1994) and add the translations from Xu et al. (2020b). For MRs, we focus on the task of Natural Language Interface (NLI) to databases and thus collect SQL from Iyer et al. (2017) and Finegan-Dollak et al. (2018).
**Multilingual GeoQuery (MGeoQuery)** contains user questions about US geography. We collect original English questions from GeoQuery (Zelle and Mooney, 1996) and add other translations (Lu and Ng, 2011; Jones et al., 2012; Susanto and Lu, 2017b). GeoQuery has several MRs available. We collect Prolog and Lambda Calculus from Guo et al. (2020), FunQL from Susanto and Lu (2017b), and SQL from Finegan-Dollak et al. (2018)2.
Footnote 2: We report averaged scores of 4 MRs in the tables, unless otherwise specified.
**Multilingual Spider (MSpider)** is a human-annotated, complex, and cross-domain text-to-SQL dataset. We collect Spider (Yu et al., 2018) with English questions and add other NLs from Min et al. (2019) and Nguyen et al. (2020).
**Multilingual NLmaps (MNLmaps)** is a Natural Language Interface to query the OpenStreetMap database. We collect NLMaps (Lawrence and Riezler, 2016) in English, and add translations in German (Haas and Riezler, 2016).
**Multilingual Overnight (MOvernight)** is a multi-domain semantic parsing dataset in lambda DCS. We include English Overnight (Wang et al., 2015) and add translations from Sherborne et al. (2020).
**Multilingual Schema2QA (MSchema2QA)** is a question answering dataset over schema.org web data in ThingTalk Query Language. We include training examples with all 11 available languages and pair them with the MR in the corresponding language following Moradshahi et al. (2020) and Xu et al. (2020a). To make the dataset size comparable to others, we include 5% of the training set.
**MCWQ** is a multilingual knowledge-based question answering dataset grounded in Wikidata (Cui et al., 2021). We collect all questions in MCWQ in 4 languages. The split follows maximum compound divergence (MCD) (Keysers et al., 2020) so that the test set contains novel compounds to evaluate compositionality generalization ability.
**MTOP** is a multilingual semantic parsing dataset for task-oriented dialogs with meaning representations of hierarchical intent and slot annotations (Gupta et al., 2018; Li et al., 2020). We include examples with all 6 languages and pair the translations with the compositional decoupled representation in the corresponding language.
**MCoNaLa** is a multilingual code generation benchmark for Python by extending English CoNaLa (Yin et al., 2018; Wang et al., 2022). We include all 4 languages.
### Data Alignment and Unification
We perform data alignment and unification over 9 datasets to construct a unified high-quality benchmark. To be specific, for the first 6 datasets introduced in Section 2.2, because each of them has multiple parts proposed in different work, we merge these parts by aligning the same user question in different languages into the same meaning representation. For the other 3 datasets, we directly use the entire samples since no other parts need to be merged. We also try to unify the language of MRs (e.g., adopting a single form of SQL queries; keeping only one English MR when there is more than one in MTOP). We also remove a few samples in MATIS and MGeoQuery with no MRs. We provide more details in the Appendix, including examples of each dataset (Table 5), data construction (Appendix A), natural languages (Appendix A), and meaning representations (Appendix A).

| Task | Dataset | Meaning Representation | Language | Executable | Domain | Train | Dev | Test |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NLI for Databases | MATIS | SQL | 7 | ✓ | 1 | 4303 | 481 | 444 |
| NLI for Databases | MGeoQuery | SQL, Lambda, FunQL, Prolog | 8 | ✓ | 1 | 548 | 49 | 277 |
| NLI for Databases | MSpider | SQL | 3 | ✓ | 138 | 8095 | 1034 | – |
| NLI for Databases | MNLmaps | Functional Query Language | 2 | ✓ | 1 | 1500 | – | 880 |
| QA on Knowledge Graph | MOvernight | Lambda Calculus | 3 | ✓ | 8 | 8754 | 2188 | 2740 |
| QA on Knowledge Graph | MCWQ | SPARQL | 4 | ✓ | 1 | 4006 | 733 | 648 |
| QA on Web | MSchema2QA | ThingTalk Query Language | 11 | ✓ | 2 | 8932 | – | 971 |
| Task-Oriented DST | MTOP | Hierarchical Intent and Slot | 6 | ✗ | 11 | 5446 | 863 | 1245 |
| Code Generation | MCoNaLa | Python | 4 | ✓ | 1 | 1903 | 476 | 896 |

Table 1: Datasets in XSemPLR. We assemble 9 datasets in various domains for 5 semantic parsing tasks. It covers 8 meaning representations. The questions cover 22 languages in 15 language families. Train/Dev/Test columns indicate the number of MRs each paired with multiple NLs.
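As a rough sketch of this alignment step, translations released in separate efforts can be merged by keying on the shared English question; the field names below are hypothetical and do not reflect the actual XSemPLR schema.

```python
def align_translations(english_split, translated_splits):
    """Merge per-language releases of a dataset into multilingual examples.

    english_split     : list of dicts with keys "question" (English NL) and "mr".
    translated_splits : dict mapping a language code to a list of dicts with keys
                        "en_question" (English source) and "question" (translation).
    Returns one record per example: {"mr": ..., "question": {"en": ..., "de": ..., ...}}.
    """
    merged = {ex["question"]: {"mr": ex["mr"], "question": {"en": ex["question"]}}
              for ex in english_split}
    for lang, split in translated_splits.items():
        for ex in split:
            record = merged.get(ex["en_question"])
            if record is not None:  # drop translations that have no English anchor
                record["question"][lang] = ex["question"]
    return list(merged.values())

english = [{"question": "what is the capital of texas ?",
            "mr": "answer(capital(stateid('texas')))"}]
german = [{"en_question": "what is the capital of texas ?",
           "question": "was ist die hauptstadt von texas ?"}]
print(align_translations(english, {"de": german}))
```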
### Evaluation Metrics
We evaluate the predicted results using various automatic metrics. For the Spider dataset, we follow Yu et al. (2018) and use their proposed tool for evaluation 3. For the other datasets, we simply use exact matching, i.e., token-by-token string comparison, to see if the prediction is the same as the ground truth label. For a fair comparison with state-of-the-art models, we also use the metrics proposed in their models, including Execution Score, Denotation Accuracy, and Code BLEU (Section 4.2).
Footnote 3: All numbers reported in the paper are “Exact Set Match without Values” in [https://yale-lily.github.io/spider](https://yale-lily.github.io/spider).
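A minimal sketch of the exact-match metric described above is given below; whitespace tokenization is an assumption, and for MSpider we rely on the official evaluation tool instead.

```python
def tokens(mr: str) -> list:
    """Split a meaning representation into whitespace tokens."""
    return mr.split()

def exact_match(prediction: str, reference: str) -> bool:
    """Token-by-token comparison of the predicted and gold MRs."""
    return tokens(prediction) == tokens(reference)

def exact_match_accuracy(predictions, references) -> float:
    """Fraction of examples whose predicted MR matches the gold MR exactly."""
    assert len(predictions) == len(references)
    hits = sum(exact_match(p, r) for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["answer ( capital ( stateid ( 'texas' ) ) )"]
golds = ["answer ( capital ( stateid ( 'texas' ) ) )"]
print(exact_match_accuracy(preds, golds))  # 1.0
```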
### Data Analysis
**Natural Languages.** XSemPLR contains diverse and abundant natural languages in both high-resource and low-resource groups, including 22 languages belonging to 15 language families (Appendix A). Most state-of-the-art performances are achieved in English and a few other high-resource languages. However, the scarcity of data in low-resource languages leaves open questions about model generalization. Therefore, both types of languages are included in XSemPLR to form a unified cross-lingual dataset for semantic parsing. Among these 22 languages, English is the most resourced, with many popular semantic parsing datasets. Some languages spoken in Western Europe, such as German and Spanish, are also relatively high-resource. We also include many low-resource languages, such as Vietnamese and Thai.
**Meaning Representations.** XSemPLR includes 8 meaning representations for different applications: Prolog, Lambda Calculus, Functional Query Language (FunQL), SQL, ThingTalk Query Language, SPARQL, Python, and Hierarchical Intent and Slot. All of them can be executed against underlying databases or knowledge graphs, except for the last one, which is designed for complex compositional requests in task-oriented dialogues. The first four are domain-specific because they contain predicates defined for a given domain, while the last four are considered open-domain and open-ontology (Guo et al., 2020). It is also worth noting that these MRs are not equivalent in expressiveness. For example, the ThingTalk Query Language is a subset of SQL in expressiveness (Moradshahi et al., 2020), and FunQL is less expressive than Lambda Calculus, partially due to the lack of variables and quantifiers.
## 3 Experiment Setup
We describe our evaluation settings and models for a comprehensive benchmark study on XSemPLR.
### Evaluation Settings
We consider the following 6 settings for training and testing.
**Translate-Test.** We train a model on the English training data and translate target NL test data to English using the public Google NMT system (Wu et al., 2016). This setting uses one semantic parsing model trained on English but also relies on available machine translation models for other languages. This serves as a strong yet practical baseline for other settings.
**Monolingual.** We train a monolingual model on the training data of each target NL. This setting creates one model per target NL. In addition to benchmarking them, we design this setting for two reasons: (1) it helps the comparison between monolingual and cross-lingual performance; (2) we pick the best models from this setting to further conduct cross-lingual and few-shot/zero-shot experiments. Additionally, since target NL training data can be expensive to obtain, we also test a **Monolingual Few-shot** setting by training monolingual models with only 10% of the training data.
**Multilingual.** Thanks to the progress in multilingual embeddings and pretrained multilingual language models, we can train one multilingual model on all NL training data. This setting uses only one model to serve all NLs.
**Cross-lingual Zero-shot Transfer.** Models are trained only on English NL data and then tested on a target-NL test set. This setting uses one model for all target NLs and evaluates the cross-lingual transfer ability without any target-NL training data. Besides, to test the value of additional target-NL training data, we finetune the model on 10% of the target-NL training data. This **Cross-lingual Few-shot Transfer** setting creates one model per target NL. We use these two settings to evaluate the capability
of the model to transfer from a fine-tuned model of high-resource NL to a low-resource test set.
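The six settings differ mainly in which languages are seen during fine-tuning and how much target-NL data is used; the sketch below summarizes them as plain data. It is a simplified view that omits dataset-specific details, and the keys are our own shorthand rather than names used elsewhere in the paper.

```python
# "target" stands for the evaluated natural language; "target_fraction" is the share
# of the target-NL training set used (the English set, when listed, is used in full).
SETTINGS = {
    "translate-test":           {"train_langs": ["en"],            "target_fraction": 0.0,
                                 "note": "target-NL test questions machine-translated to English"},
    "monolingual":              {"train_langs": ["target"],        "target_fraction": 1.0},
    "monolingual-few-shot":     {"train_langs": ["target"],        "target_fraction": 0.1},
    "multilingual":             {"train_langs": ["all"],           "target_fraction": 1.0},
    "cross-lingual-zero-shot":  {"train_langs": ["en"],            "target_fraction": 0.0},
    "cross-lingual-few-shot":   {"train_langs": ["en", "target"],  "target_fraction": 0.1},
}

for name, cfg in SETTINGS.items():
    print(f"{name:25s} trained on {cfg['train_langs']} "
          f"using {cfg['target_fraction']:.0%} of the target-NL training data")
```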
### Models
We evaluate three different groups of multilingual language models on XSemPLR.
**Multilingual Pretrained Encoders with Pointer-based Decoders (Enc-PTR).** The first group is multilingual pretrained encoders with decoders augmented with pointers. Both encoders and decoders use Transformers Vaswani et al. (2017). The decoder uses pointers to copy entities from natural language inputs to generate meaning representations Rongali et al. (2020); Prakash et al. (2020). We use two types of multilingual pretrained encoders, mBERT Devlin et al. (2018) and XLM-R Conneau et al. (2019), and both are trained on web data covering over 100 languages.
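The pointer mechanism can be summarized as mixing a distribution over the output vocabulary with a copy distribution over input tokens. The toy NumPy sketch below illustrates the final output distribution for one decoding step; it is a schematic illustration of the general pointer-generator idea, not the exact architecture of Rongali et al. (2020).

```python
import numpy as np

def pointer_generator_step(p_vocab, attention, src_token_ids, p_gen, vocab_size):
    """Combine generation and copying into one output distribution.

    p_vocab       : (vocab_size,) softmax over output-vocabulary (MR) tokens.
    attention     : (src_len,) attention weights over the input NL tokens.
    src_token_ids : (src_len,) vocabulary ids of the input tokens (entities to copy).
    p_gen         : scalar in [0, 1], probability of generating rather than copying.
    """
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, src_token_ids, attention)      # scatter attention mass onto source ids
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy  # mixture of the two distributions

# Tiny example: a 6-symbol vocabulary and a 3-token source question.
p_vocab = np.array([0.1, 0.5, 0.1, 0.1, 0.1, 0.1])
attention = np.array([0.7, 0.2, 0.1])
src_token_ids = np.array([4, 2, 2])                  # e.g. an entity token maps to id 4
print(pointer_generator_step(p_vocab, attention, src_token_ids, p_gen=0.6, vocab_size=6))
```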
**Multilingual Pretrained Encoder-Decoder Models (Enc-Dec).** The second group uses pretrained encoder-decoder models, including mBART (Liu et al., 2020) and mT5 (Xue et al., 2020), which are pretrained on multilingual corpora with denoising and text-to-text objectives.
**Multilingual Large Language Models (LLMs).** The third group is multilingual large language models based on GPT (Brown et al., 2020), including Codex (Chen et al., 2021) and BLOOM (Scao et al., 2022). Codex is fine-tuned on publicly available code from GitHub. While it is not trained on a multilingual corpus, it has shown cross-lingual semantic parsing capabilities (Shi et al., 2022). BLOOM is a 176B-parameter multilingual language model pretrained on 46 natural and 13 programming languages from the ROOTS corpus (Laurencon et al., 2022). We mainly use these models to evaluate few-shot in-context learning without any further finetuning. Specifically, we append 8 samples and the test query to predict the MR. For Monolingual Few-shot, the samples and the query are in the same NL, while for Cross-lingual Zero-shot Transfer, the samples are in English and the query is in the target NL.
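As a rough illustration of this in-context learning setup, the sketch below assembles an 8-shot prompt from (question, MR) pairs; the prompt template and field labels are assumptions rather than the exact format used in our experiments.

```python
def build_prompt(demonstrations, query_question, n_shots=8):
    """Concatenate n_shots (question, MR) demonstrations followed by the test query.

    For the monolingual few-shot setting the demonstrations share the query's language;
    for cross-lingual zero-shot transfer they are in English while the query is in the target NL.
    """
    lines = []
    for question, mr in demonstrations[:n_shots]:
        lines.append(f"Question: {question}")
        lines.append(f"MR: {mr}")
    lines.append(f"Question: {query_question}")
    lines.append("MR:")  # the model is expected to continue with the meaning representation
    return "\n".join(lines)

demos = [("which state has the largest population ?",
          "answer ( largest_one ( population_1 ( state ( all ) ) ) )")] * 8
print(build_prompt(demos, "welcher staat hat die meisten einwohner ?"))
```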
## 4 Results and Analysis
Table 2 shows the performance of all 6 models on 6 settings. Our results and analysis aim to answer the following research questions:
| Model | MATIS | MGeoQuery | MSpider | MNLmaps | MOvernight | MCWQ | MSchema2QA | MTOP | MCoNaLa‡ | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _Translate-Test_ | | | | | | | | | | |
| mT5 | 44.50 | 53.88 | 45.26 | 66.36 | 59.69 | 19.85 | 3.18★ | 29.78★ | 8.13 | 36.74 |
| _Monolingual_ | | | | | | | | | | |
| mBERT+PTR | 30.63 | 72.18 | 40.40 | 83.82 | 57.47 | 23.46 | 52.53 | 75.41 | 5.87 | 49.09 |
| XLM-R+PTR | 31.31 | 71.41 | 47.30 | 85.17 | 59.10 | 23.53 | 62.37 | 80.36 | 7.69 | 52.03 |
| mBART | 41.93 | 62.29 | 33.31 | 83.19 | 59.60 | 30.02 | 50.35 | 75.76 | 6.78 | 49.25 |
| mT5 | **53.15** | **74.26** | **50.73** | **91.65** | **66.29** | **30.15** | **65.16** | **81.83** | **10.29** | **58.16** |
| _Monolingual Few-Shot_ | | | | | | | | | | |
| XLM-R+PTR | 23.44 | 17.91 | 36.04 | 19.77 | 40.74 | 5.64 | **49.00** | 60.42 | 0.38 | 28.15 |
| mT5 | **24.85** | 25.48 | **38.10** | 26.93 | **53.59** | **7.68** | 33.27 | **61.90** | 1.05 | **30.32** |
| Codex† | 18.02 | **31.93** | 30.66 | **34.26** | 3.43 | 2.93 | 21.62 | 10.08 | **13.87** | 18.53 |
| BLOOM† | 0.00 | 17.84 | 2.13 | 12.16 | 0.62 | 0.00 | 5.21 | 5.16 | 8.40 | 5.72 |
| _Multilingual_ | | | | | | | | | | |
| XLM-R+PTR | 39.72 | 71.35 | **40.20** | 85.91 | 61.03 | **30.79** | **61.82** | 81.68 | – | 59.06 |
| mT5 | **54.45** | **76.57** | 32.30 | **91.31** | **67.55** | 28.51 | 60.92 | **82.95** | – | **61.82** |
| _Cross-lingual Zero-Shot Transfer_ | | | | | | | | | | |
| XLM-R+PTR | 6.05 | **39.85** | 18.53 | **60.23** | 36.77 | **4.27** | 20.22 | 51.46 | 0.12 | 26.39 |
| mT5 | **31.85** | 27.35 | **41.93** | 34.89 | **52.68** | 4.06 | **44.04** | **50.18** | 0.77 | **31.97** |
| Codex† | 16.31 | 28.53 | 27.56 | 32.05 | 2.99 | 2.16 | 19.57 | 14.08 | **8.35** | 16.84 |
| BLOOM† | 0.00 | 11.29 | 1.70 | 7.05 | 0.38 | 0.00 | 3.93 | 1.67 | 6.16 | 3.58 |
| _Cross-lingual Few-Shot Transfer_ | | | | | | | | | | |
| XLM-R+PTR | 15.71 | 51.08 | 43.68 | 64.89 | 52.03 | 20.16 | 53.51 | 72.79 | – | 46.73 |
| mT5 | **49.57** | **57.31** | **49.42** | **71.70** | **62.53** | **24.85** | **59.24** | **74.83** | – | **56.18** |

Table 2: Results on XSemPLR. We consider 6 settings including 2 Monolingual, 1 Multilingual, and 2 Cross-lingual settings, and one Translate-Test setting. Each number is averaged across different languages in that dataset. † Codex/BLOOM are evaluated on only two settings as we apply 8-shot in-context learning without finetuning the model parameters. ‡ Two settings are not applicable to MCoNaLa because it has no training set on NLs other than English. ★ Translate-Test performances on MSchema2QA and MTOP are especially low because the MR of these data also contains tokens in target languages.
* RQ 1: What is the best model and training strategy for performance, and how does it compare with previous state-of-the-art? (Section 4.1, 4.2)
* RQ 2: How capable are the current multilingual LLMs on the task of CLSP? (Section 4.3)
* RQ 3: What is the effect of few-shot learning? (Section 4.4)
* RQ 4: What is the effect of multilingual learning? (Section 4.5)
* RQ 5: What is the effect of cross-lingual transfer learning? (Section 4.6)
* RQ 6: How does performance vary across different natural languages and meaning representations? (Section 4.7, 4.8)
### Analysis of Monolingual
We obtain the following main findings on Monolingual setting:
Enc-Dec (mT5) obtains the best performance. Among the two transformer-based pointer generators, XLM-R+Transformer (XLM-R+PTR) (52.034) performs slightly better than mBERT+Transformer (mBERT+PTR) (49.09). Among mBART and mT5, mT5 (58.16) outperforms mBART (49.25) by a large margin. Besides, although mT5 outperforms XLM-R by 6.13, XLM-R is still able to outperform mBART by 2.78. Thus, we pick mT5 among mT5/mBART, and XLM-R among XLM-R/mBERT to conduct the experiments on the other settings.
Footnote 4: If not specified, the numbers in this section are the averaged exact matching scores across all NLs.
Next, we evaluate the mT5 model on the Translate-Test setting. As shown in the table, mT5 in the Monolingual setting outperforms Translate-Test by a large margin (58.16 vs. 36.74). This shows that multilingual language models are more effective than Translate-Test methods. In other words, it is necessary to train a multilingual model even when a high-quality translation system is available.
### Comparison with SOTA
Table 3 lists the performance of mT5 in the Monolingual setting alongside the previous state-of-the-art. Some previous works use denotation accuracy and execution accuracy, which differ from the exact match we use. To make our results comparable with previous work, we apply the evaluation tools of previous work to XSemPLR. As shown in the table, Enc-Dec (mT5) outperforms previous work on all NLs of the MSchema2QA, MCWQ, MNLmaps, and MATIS datasets and obtains comparable results on the others.
### Analysis of Codex and BLOOM
We evaluate Codex and BLOOM to test the in-context learning performance of large language models. As shown in Table 2, the LLMs (Codex and BLOOM) are outperformed by the mT5 model by a large margin in both the Few-shot (11.79/24.60) and Zero-shot (15.13/28.39) settings. This suggests that multilingual LLMs are still inadequate for cross-lingual semantic parsing tasks.
### Comparison between Few-shot Settings
We also test the Enc-Dec (mT5) and Enc-PTR (XLM-R) models on two types of few-shot experiments, including Monolingual and Cross-lingual Few-Shot.
As can be seen, mT5 in the cross-lingual few-shot setting outperforms the monolingual few-shot setting by a large margin of 22.21 exact match points (excluding MCoNaLa), while XLM-R has a smaller gain of 15.12. We can summarize two observations: 1) pretraining on the English NL can significantly boost few-shot performance on target NLs (En + Target Few-shot -> Target NL), and 2) the model with higher cross-lingual capability gains more improvement, e.g., mT5 gains more than XLM-R. Both observations demonstrate the capability of cross-lingual models to transfer knowledge from the source to the target NLs.
### Analysis of Multilingual Training
We compare the performance of Monolingual and Multilingual settings.
Figure 3: Effect of multilingual training with mT5 on different NLs. The x-axis shows NLs that are included in at least two datasets. The y-axis shows the number of datasets in which the performance of that NL increases/decreases after multilingual training. The performance of English (a high-resource NL) is more likely to drop under multilingual training.
| Dataset | Language | SOTA (Source) | XSemPLR | Metric |
| --- | --- | --- | --- | --- |
| MSpider | English | 77.10 (Li et al., 2023) | 67.60 | Exact Match |
| | English | 81.00 (Li et al., 2023) | 69.10 | Execution |
| | Vietnamese | 69.00 (Shi et al., 2022a) | 43.00 | Exact Match |
| | Vietnamese | 64.50 (Shi et al., 2022a) | 42.00 | Execution |
| | Chinese | 66.1★ (Shi et al., 2022a) | 39.90 | Exact Match |
| MSchema2QA | Arabic | 29.17 (Moradshahi et al., 2020) | 53.55 | Exact Match |
| | German | 51.84 (Moradshahi et al., 2020) | 72.19 | Exact Match |
| | Spanish | 56.01 (Moradshahi et al., 2020) | 68.69 | Exact Match |
| | Farsi | 54.88 (Moradshahi et al., 2020) | 60.25 | Exact Match |
| | Finnish | 52.43 (Moradshahi et al., 2020) | 68.28 | Exact Match |
| | Italian | 54.87 (Moradshahi et al., 2020) | 67.97 | Exact Match |
| | Japanese | 46.27 (Moradshahi et al., 2020) | 62.41 | Exact Match |
| | Polish | 49.69 (Moradshahi et al., 2020) | 60.87 | Exact Match |
| | Turkish | 56.84 (Moradshahi et al., 2020) | 70.03 | Exact Match |
| | Chinese | 36.60 (Moradshahi et al., 2020) | 56.54 | Exact Match |
| MCWQ | English | 27.70 (Cui et al., 2022) | 39.29 | Exact Match |
| | Hebrew | 16.60 (Cui et al., 2022) | 33.02 | Exact Match |
| | Kannada | 16.60 (Cui et al., 2022) | 23.74 | Exact Match |
| | Chinese | 23.00 (Cui et al., 2022) | 24.56 | Exact Match |
| MNLMaps | English | 85.70 (Duong et al., 2017) | 92.73 | Exact Match |
| | German | 83.00 (Duong et al., 2017) | 90.57 | Exact Match |
| MATIS | English | 77.20 (Sherborne and Lapata, 2023) | 83.78 | Denotation accuracy |
| | Farsi | 67.80 (Sherborne and Lapata, 2023) | 80.59 | Denotation accuracy |
| | Portuguese | 66.10 (Sherborne and Lapata, 2023) | 78.60 | Denotation accuracy |
| | Spanish | 64.10 (Sherborne and Lapata, 2023) | 76.58 | Denotation accuracy |
| | German | 66.60 (Sherborne and Lapata, 2023) | 80.63 | Denotation accuracy |
| | Chinese | 64.90 (Sherborne and Lapata, 2023) | 78.38 | Denotation accuracy |
| MGeoQuery† | English | 90.00 (Zou and Lu, 2018) | 79.06 | Denotation accuracy |
| | Thai | 86.10 (Zou and Lu, 2018) | 72.56 | Denotation accuracy |
| | German | 76.80 (Zou and Lu, 2018) | 73.29 | Denotation accuracy |
| | Greek | 83.20 (Zou and Lu, 2018) | 76.90 | Denotation accuracy |
| | Chinese | 82.10 (Zou and Lu, 2018) | 75.81 | Denotation accuracy |
| | Indonesian | 83.90 (Zou and Lu, 2018) | 80.14 | Denotation accuracy |
| | Swedish | 83.90 (Zou and Lu, 2018) | 79.78 | Denotation accuracy |
| | Farsi | 76.80 (Zou and Lu, 2018) | 69.68 | Denotation accuracy |
| MOvernight | English | 81.90 (Sherborne and Lapata, 2021) | 69.38‡ | Denotation accuracy |
| | German | 66.20 (Sherborne and Lapata, 2021) | 66.90‡ | Denotation accuracy |
| | Chinese | 66.00 (Sherborne and Lapata, 2021) | 62.59‡ | Denotation accuracy |
| MCoNaLa | Russian | 9.56 (Wang et al., 2022) | 6.38 | Code BLEU-4 |
| | Spanish | 2.64 (Wang et al., 2022) | 2.55 | Code BLEU-4 |
| | Japanese | 9.90 (Wang et al., 2022) | 7.66 | Code BLEU-4 |

Table 3: Comparison between mT5 monolingual and state-of-the-art models, except that the MCoNaLa dataset uses cross-lingual zero-shot settings because the dataset only contains English training samples. mT5 obtains better or comparable performance on all datasets. ★ Previous SOTA model only contains exact match scores for Chinese. † The SOTA model of MGeoQuery uses Lambda as MR while XSemPLR uses SQL. ‡ The SOTA model of MOvernight uses denotation accuracy and XSemPLR uses exact match.
As can be seen in Table 2, mT5 improves by 2.31 on MGeoQuery, and XLM-R improves by 8.41 on the MATIS dataset. This demonstrates that Enc-Dec/Enc-PTR (mT5/XLM-R) can be improved by training on a mixture of various languages. However, not all datasets benefit from such training; the average change for mT5/XLM-R is around -2/+2 points.
We further explore the reason for the performance drop in multilingual training. As shown in Figure 3, most of the major NLs obtain performance gains, except that English performance drops in 7 datasets and gains in 3 datasets. This is known as the "Curse of Multilinguality" (Pfeiffer et al., 2022). Similarly, in CLSP the performance of English (a high-resource NL) is more likely to drop under multilingual training.
### Cross-lingual Performance Gap
To examine the transfer ability of the cross-lingual models, we investigate the performance difference between the Monolingual and Cross-lingual Few/Zero-shot settings for each dataset using mT5. As shown in Figure 4, by examining the distance between the green and orange lines, we find that for the zero-shot setting the cross-lingual transfer performance gap is significant, exceeding 50% on the NLmaps dataset, demonstrating the limitation of current cross-lingual models. However, by examining the difference between the orange and blue lines, we also find that using even 10% of the target-NL training samples narrows the transfer gap rapidly. The few-shot gap usually shrinks to around half of the zero-shot gap, e.g., on the Schema2QA dataset. For MATIS, the gap even shrinks to around 5 points, which is very close to the performance of the monolingual setting.
### Analysis over Natural Languages
We pick the best model, mT5, and analyze its performance in the zero-shot setting in Figure 5. Results show that the performance gap between Chinese transfer learning (En -> Zh) and English monolingual training (En -> En) is usually the largest compared with transfer learning to other NLs. On the other hand, German usually has the smallest transfer performance loss. There are probably two reasons. First, the mT5 pretraining corpus contains less Chinese data than German data. Second, English is closer to German (IE: Germanic) than to Chinese (Sino-Tibetan) in terms of language family. This phenomenon is discussed in Hu et al. (2020), and we find that this conclusion also holds for cross-lingual semantic parsing tasks.
### Analysis over Meaning Representations
Table 4 shows the performance of mT5 on various MRs in MGeoQuery. In almost all languages, FunQL outperforms the other three meaning representations, and SQL obtains the worst performance. This is consistent with the observation of Guo et al. (2020). We speculate that there are two possible reasons: (1) the grammar of SQL is more complex than the others, and FunQL enjoys a much simpler grammar (Li et al., 2022), and (2) FunQL contains a number of brackets that provide structural information to the models (Shu et al., 2021).
Figure 4: The performance of cross-lingual Few/Zero-shot (mT5) on different datasets and languages. MGeoQuery/* indicates a single MR; MGeoQuery is the averaged score across 4 MRs. Each neighbor grey circle has a 10 score difference, and the center of the circle indicates a 0 score. The cross-lingual transfer performance gap is significant for the zero-shot setting. However, few-shot training shrinks this gap greatly.
Figure 5: Left vertical axis: The performance of cross-lingual zero-shot mT5 models on different datasets over different languages. Larger dots indicate higher accuracy. Right vertical axis: Red line indicates the percentage of different languages in the mT5 training data. Chinese/German has the largest/smallest performance loss for transfer learning. Additionally, performance and pretraining data size have no evident correlation.
## 5 Related Work
**Cross-lingual Semantic Parsing.** Most semantic parsing datasets are originally in English, such as GeoQuery (Zelle and Mooney, 1996), ATIS (Finegan-Dollak et al., 2018), Overnight (Wang et al., 2015), and Spider (Yu et al., 2018). Cross-lingual Semantic Parsing datasets are usually constructed by translating the English user questions into other languages (Dou et al., 2022; Athiwaratkun et al., 2022). For example, Lu and Ng (2011) translate the English GeoQuery queries to create a Chinese version. Min et al. (2019) and Nguyen et al. (2020) create the Chinese and Vietnamese translations of Spider. However, existing CLSP datasets follow different formats and are independently studied as separate efforts. We aim to provide a unified benchmark and modeling framework to facilitate systematic evaluation and generalizable methodology.
**Multilingual Language Models.** There has been significant progress in multilingual language models. MUSE (Conneau et al., 2017) aligns monolingual word embeddings in an unsupervised way without using any parallel corpora. XLM (Lample and Conneau, 2019) is a pretrained language model based on RoBERTa (Liu et al., 2019) which offers cross-lingual contextualized word representations. Similarly, mBERT is developed as the multilingual version of BERT (Devlin et al., 2018). XLM-R (Conneau et al., 2019) outperforms mBERT and XLM in sequence labeling, classification, and question answering. Focusing on sequence-to-sequence tasks such as machine translation, mBART (Liu et al., 2020) extends BART by introducing multilingual denoising pretraining. mT5 (Xue et al., 2020) extends T5 by pretraining on the multilingual dataset mC4. Multilingual large language models such as BLOOM (Scao et al., 2022) and XGLM (Lin et al., 2022) have also been proposed. From multilingual embeddings to multilingual large language models, there have been more effective representations as well as more languages covered (Srivastava et al., 2022). We aim to systematically evaluate these models on CLSP, which is understudied by existing work.
**Cross-lingual NLP Benchmarks.** Cross-lingual benchmarks have been established for many NLP tasks. XNLI is a large-scale corpus that aims to provide a standardized evaluation set (Conneau et al., 2018). Hu et al. (2020) developed XTREME to evaluate how well multilingual representations in 40 languages can generalize. XGLUE is another dataset used for evaluation on various cross-lingual tasks (Liang et al., 2020). MLQA (Lewis et al., 2019), XQuAD (Artetxe et al., 2019), and XOR QA (Asai et al., 2020) are three evaluation frameworks for cross-lingual question answering. Sun and Duh (2020) introduce CLIRMatrix by collecting multilingual datasets from Wikipedia for cross-lingual information retrieval (Zbib et al., 2019; Oard et al., 2019; Zhang et al., 2019; Shi et al., 2021; Chen et al., 2021). For cross-lingual summarization, NLCS was built by Zhu et al. (2019) to tackle the problem of performing summarization and translation as divided steps. Nonetheless, there is no unified benchmark for CLSP, and thus we are unable to calibrate the performance of multilingual language models on CLSP.
## 6 Conclusion
We build XSemPLR, a unified benchmark for cross-lingual semantic parsing with multiple natural languages and meaning representations. We conduct a comprehensive benchmark study on three representative types of multilingual language models. Our results show that mT5 with monolingual training yields the best performance, while notably multilingual LLMs are still inadequate to perform cross-lingual semantic parsing tasks. Moreover, the performance gap between monolingual training and cross-lingual transfer learning is still significant. These findings call for both improved semantic parsing capabilities of multilingual LLMs and stronger cross-lingual transfer learning techniques for semantic parsing.
| Language | SQL | Prolog | Lambda | FunQL |
| --- | --- | --- | --- | --- |
| English | 76.50 | 81.59 | 76.50 | **89.89** |
| German | 68.23 | 64.26 | **72.20** | 71.83 |
| Thai | 68.59 | 63.90 | 70.04 | **76.17** |
| Chinese | 70.04 | 63.18 | 74.37 | **77.62** |
| Farsi | 64.98 | 61.73 | 64.62 | **75.45** |
| Greek | 71.84 | 75.81 | 78.70 | **85.92** |
| Indonesian | 75.09 | 75.09 | 78.34 | **87.00** |
| Swedish | 75.45 | 77.26 | 79.78 | **84.48** |
| Average | 71.34 | 70.35 | 74.32 | **81.04** |

Table 4: Monolingual performance of mT5 on MGeoQuery. FunQL/SQL obtains the best/worst performance.
### Limitations
While we cover a wide range of different factors of cross-lingual semantic parsing (e.g., tasks, datasets, natural languages, meaning representations, domains), we cannot include all possible dimensions along with these aspects. Furthermore, we focus on the linguistic generalization ability for semantic parsing because the questions are translated from the English datasets. In the future, we will explore questions raised by native speakers in each language to study the model ability under variations in cultural backgrounds and information-seeking needs.
## Acknowledgment
We thank Victoria Lin, Bailin Wang, Robin Jia, Ice Pasupat, Tianze Shi, Bing Xiang, Luke Zettlemoyer for their early feedback and discussions. We thank Peng Shi, Yucheng Nie, Junru Liu, Tom Sherborne, Harsh Maniar, Xiangyu Dong, Chen Wang, Songlin Hou, Haoran Zhang, Nan Zhang, and Sarkar Das for their valuable help and comments.
|
2307.02905 | Decomposing the Origin of TeV-PeV Emission from the Galactic Plane:
Implications of Multi-messenger Observations | High-energy neutrino and $\gamma$-ray emission has been observed from the
Galactic plane, which may come from individual sources and/or diffuse cosmic
rays. We evaluate the contribution of these two components through the
multimessenger connection between neutrinos and $\gamma$ rays in hadronic
interactions. We derive maximum fluxes of neutrino emission from the Galactic
plane using $\gamma$-ray catalogs, including 4FGL, HGPS, 3HWC, and 1LHAASO, and
measurements of the Galactic diffuse emission by Tibet AS$\gamma$ and LHAASO.
We find that the IceCube Galactic neutrino flux is larger than the contribution
from all resolved sources when excluding promising leptonic sources such as
pulsars, pulsar wind nebulae, and TeV halos. Our result indicates that the
Galactic neutrino emission is likely dominated by the diffuse emission by the
cosmic-ray sea and unresolved hadronic $\gamma$-ray sources. In addition, the
IceCube flux is comparable to the sum of the flux of non-pulsar sources and the
LHAASO diffuse emission especially above 30 TeV. This implies that the LHAASO
diffuse emission may dominantly originate from hadronic interactions, either as
the truly diffuse emission or unresolved hadronic emitters. Future observations
of neutrino telescopes and air-shower $\gamma$-ray experiments in the Southern
hemisphere are needed to accurately disentangle the source and diffuse emission
of the Milky Way. | Ke Fang, Kohta Murase | 2023-07-06T10:37:24Z | http://arxiv.org/abs/2307.02905v2 | # Decomposing the Origin of TeV-PeV Emission from the Galactic Plane:
###### Abstract
High-energy neutrino and \(\gamma\)-ray emission has been observed from the Galactic plane, which may come from individual sources and/or diffuse cosmic rays. We evaluate the contribution of these two components through the multi-messenger connection between neutrinos and \(\gamma\) rays in hadronic interactions. We derive maximum fluxes of neutrino emission from the Galactic plane using \(\gamma\)-ray catalogs, including 4FGL, HGPS, 3HWC, and 1LHAASO, and measurements of the Galactic diffuse emission by Tibet AS\(\gamma\) and LHAASO. We find that depending on model templates, the diffuse emission is brighter than the sum of resolved sources when excluding promising leptonic sources such as pulsars, pulsar wind nebulae, and TeV halos. Our result indicates that the Galactic neutrino emission observed by the IceCube Collaboration may be dominated by the Galactic diffuse emission or unresolved \(\gamma\)-ray sources. Future observations of neutrino telescopes and air-shower \(\gamma\)-ray experiments in the Southern hemisphere are needed to accurately disentangle the source and diffuse emission of the Milky Way.
## 1 Introduction
High-energy neutrinos from the Galactic plane (GP) may come from two components of the Galaxy: the cosmic-ray sea and individual sources. The cosmic-ray sea is a smooth and steady distribution of cosmic rays that emerge from accelerators and propagate in the Galactic magnetic field. Protons and nuclei at TeV to PeV energies may be confined in the Galactic magnetic field for 0.1 to a few million years and lose their initial directions. They collide with gas in the interstellar medium (ISM) and produce charged and neutral pions, which decay into neutrinos and \(\gamma\) rays, respectively. These secondary particles form the Galactic diffuse emission (GDE). In addition to hadronic cosmic rays, a lower flux of cosmic-ray electrons may also up-scatter the interstellar radiation field and the cosmic microwave background (CMB) to \(\gamma\) rays. Above 10 TeV, electrons have a cooling time of \(t_{e}\sim 64\,(E_{e}/10\,{\rm TeV})^{-1}\,{\rm kyr}\) due to the inverse Compton radiation, and propagate for a distance \(d\sim(D\,t_{e})^{1/2}=0.3\,(E_{e}/10\,{\rm TeV})^{-0.33}\,{\rm kpc}\), where \(D\approx 3\times 10^{28}\,(R/3\,{\rm GV})^{1/3}\,{\rm cm}^{2}\,{\rm s}^{-1}\) is the diffusion coefficient assuming Kolmogorov turbulence and \(R\equiv E/Ze\) is the rigidity of a particle with energy \(E\) and charge number \(Z\). Therefore, electrons above tens of TeV cannot travel too far away from the sources where they were produced.
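For reference, the scalings quoted above can be checked numerically; the short sketch below reproduces the \(d\approx 0.3\) kpc estimate at 10 TeV using standard unit conversions. It is illustrative only and not part of the analysis.

```python
import numpy as np

KYR_S = 3.156e10    # seconds per kyr
KPC_CM = 3.086e21   # centimeters per kpc

def cooling_time_kyr(E_e_TeV):
    """Inverse-Compton cooling time, t_e ~ 64 (E_e / 10 TeV)^-1 kyr."""
    return 64.0 * (E_e_TeV / 10.0) ** -1

def diffusion_coefficient_cm2s(rigidity_GV):
    """Kolmogorov-like diffusion coefficient, D ~ 3e28 (R / 3 GV)^(1/3) cm^2 s^-1."""
    return 3e28 * (rigidity_GV / 3.0) ** (1.0 / 3.0)

def propagation_distance_kpc(E_e_TeV):
    """d ~ sqrt(D * t_e); for an electron, rigidity in GV is 1000 * E_e[TeV]."""
    D = diffusion_coefficient_cm2s(1e3 * E_e_TeV)
    t_e = cooling_time_kyr(E_e_TeV) * KYR_S
    return np.sqrt(D * t_e) / KPC_CM

print(f"{propagation_distance_kpc(10.0):.2f} kpc")   # ~0.31 kpc, matching the text
print(f"{propagation_distance_kpc(100.0):.2f} kpc")  # shorter at higher energy, ~E^-1/3
```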
GDE in \(\gamma\) rays has been measured by the _Fermi_ Large Area Telescope (LAT) between 100 MeV and 1 TeV over the full sky (Ackermann et al., 2012; Abdollahi et al., 2022). Above 1 TeV, the GDE from several regions in the Northern sky has been measured by air shower \(\gamma\)-ray experiments, including ARGO-YBJ at 0.35-2 TeV (Bartoli et al., 2015), Tibet AS\(\gamma\) Observatory at 100-1000 TeV (Amenomori et al., 2021), HAWC Observatory at 0.3-100 TeV (Abeysekara et al., 2021), and the Large High Altitude Air Shower Observatory (LHAASO) at 10-1000 TeV (Cao et al., 2023).
High-energy neutrinos and \(\gamma\) rays may also be produced by individual sources harbored in the Milky Way. About two hundred Galactic \(\gamma\)-ray sources have been observed above 1 TeV 1. Which sources among them are hadronic emitters, and hence neutrino sources, remains a major question (Sudoh and Beacom, 2023). One of the challenges arises from the fact that the pion decay
and inverse Compton radiation may yield similar spectra. Only a handful of sources show promising features of hadronic \(\gamma\)-ray emission, such as the star formation region at the Galactic center (HESS Collaboration et al., 2016) and the supernova remnant G\(106.3+2.7\) (Fang et al., 2022). To date, no Galactic neutrino sources have been identified.
In addition to resolved sources, unresolved sources may also contribute to emission from the GP. These unresolved sources may be counted toward the GDE in measurements even though they are not truly diffuse. The luminosity function of TeV sources is poorly known due to the limited number of sources and the complications related to TeV catalog creation. Based on 32 sources with flux above 10% of the Crab flux from the H.E.S.S. Galactic plane survey (HGPS), the cumulative \(\log N-\log S\) distribution of integral flux above 1 TeV is derived to follow a power law with a slope of \(-1.3\pm 0.2\) (Abdalla et al., 2018). The distribution is flatter below 10% of the Crab flux, although the measurement is limited by the completeness of the sample.
The detection of Galactic neutrinos has been anticipated for decades (Stecker, 1979). Whether the Galactic contribution dominates the full-sky neutrino flux was first debated at the time of IceCube's discovery of high-energy cosmic neutrinos (IceCube Collaboration, 2013). Using the multi-messenger connection and diffuse TeV \(\gamma\)-ray data mainly from CASA-MIA and KASCADE, Ahlers and Murase (2014) showed that the all-sky neutrino flux mostly originates from extragalactic sources. Fang and Murase (2021) derived the upper limit on the Galactic neutrino flux based on the GP observation by Tibet AS\(\gamma\), and argued that the 100 TeV emission may come from either the GDE or the sum of discrete sources. Recently, the IceCube Collaboration reported evidence for neutrinos from the GP (IceCube Collaboration, 2023). The observed flux level is consistent with the prediction of Fang and Murase (2021).
An important task in understanding the GP is to disentangle the contribution of individual sources from the truly diffuse emission. This is crucial to understanding the PeVatrons in the Milky Way and the leptonic contribution to the TeV-PeV \(\gamma\)-ray sky. While detecting individual Galactic neutrino sources would be the ultimate solution to this problem, in this paper we take a first step in understanding the source contribution to the neutrino GDE via a multi-messenger approach. Specifically, we constrain the neutrino flux of individual sources using \(\gamma\)-ray catalogs and compare it to the GDE measured by IceCube or derived from \(\gamma\)-ray observations. Unlike extragalactic neutrino sources, Galactic neutrino sources are likely optically thin to TeV \(\gamma\)-rays given their relatively low infrared fluxes. \(\gamma\)-ray emission can be produced either by electrons or by protons and nuclei, whereas high-energy neutrinos can only come from the latter. The \(\gamma\)-ray flux of Galactic sources therefore provides an upper limit on the neutrino flux from individual sources.
We describe the TeV-PeV \(\gamma\)-ray observations of the GP in Section 2, including the source catalogs and GDE observations in Section 2.1 and 2.2, respectively. By converting the differential \(\gamma\)-ray flux to neutrino flux assuming that they are simultaneously produced by protons and nuclei, we constrain the high-energy neutrino emission by sources and compare that to the GDE in Section 3. We conclude and discuss the caveats of the work in Section 4.
## 2 TeV-PeV Gamma-Ray Observations
In this section, we describe the \(\gamma\)-ray catalogs and GDE observations to be used for the derivation of high-energy neutrino fluxes. Figure 1 summarizes the sky regions observed by various experiments. We overlay the neutral hydrogen (HI) emission from the HI 4-PI Survey (HI4PI Collaboration et al., 2016), since the pionic GDE is dominated by cosmic-ray interaction with the HI gas.
### Source Catalogs
We summarize the sky regions and energy ranges of various \(\gamma\)-ray source catalogs in Table 1 in Appendix A. Below we describe the usage of each of them.
**HGPS:** 78 sources are reported by the H.E.S.S. Galactic plane survey (HGPS), which is a decade-long observation of the H.E.S.S. telescope with nearly 2700 h of data covering the inner GP (Abdalla et al., 2018). One source, HESS J1943+213, is likely an extragalactic object and is removed from our analysis. For each of the remaining sources, we use the flux at the pivot energy and spectral index reported by the catalog found by assuming a power-law spectral model to derive the differential flux between 1 and 30 TeV. The right end of the energy range is chosen based on the lower limit of the maximum energy of the sources. The 77 Galactic sources include 12 pulsar wind nebulae (PWN), 8 shell-type supernova remnant (SNR), 8 composite SNR (where the emission can come from either the shell or the interior nebula), 3 \(\gamma\)-ray binaries, and 47 sources without firmly identified associations, including 35 with possible associations in source catalogs and 11 with no associations. We account for a systematic uncertainty of 30% for the flux. A systematic uncertainty for the spectral index, which is estimated to be an absolute value of 0.2, is not included.
**3HWC:** 65 sources are reported by the Third HAWC Catalog (3HWC) based on blind searches across
HAWC's FOV using 1523 days of data (Albert et al., 2020). Two of them, Mrk 421 and Mrk 501, are extragalactic and removed from the list, yielding a total of 63 Galactic sources. Based on the spectral index and differential flux at a pivot energy of 7 TeV, we calculate the flux of the sources in 3HWC between 1 and 49 TeV. For most sources, this range lies within the energy range that contributes 75% of the observed significance. The differential flux of 3HWC is obtained by assuming a pointlike morphology. An extended source may be associated with multiple point sources. The inaccuracy in the source extension barely impacts this work since the sum of the fluxes of the point sources reasonably estimates the flux of an extended source. Our calculation includes the systematic uncertainties of the spectral models of the 3HWC sources, which are at the level of 30%.
**1LHAASO:** 90 sources with extension \(<2^{\circ}\) are reported by the first LHAASO catalog (1LHAASO), including 43 sources that are detected at \(>4\,\sigma\) above 100 TeV (Cao et al., 2023). We exclude the following sources that are likely of extragalactic origin: 1LHAASO J1104+3810, 1LHAASO J1219+2915, 1LHAASO J1653+3943, 1LHAASO J1727+5016, and 1LHAASO J2346+5138. For the remaining sources that are detected, we compute the spectrum following a power law \(dN/dE=N_{0}(E/E_{0})^{-\Gamma}\) between \(E_{\rm min}\) and \(E_{\rm max}\), with \(E_{0}=3\) TeV, \(E_{\rm min}=1\) TeV, \(E_{\rm max}=25\) TeV for WCDA and \(E_{0}=50\) TeV, \(E_{\rm min}=25\) TeV, \(E_{\rm max}=200\) TeV for KM2A. We include systematic uncertainty of 7% on KM2A flux and \({}^{+8\%}_{-24\%}\) on WCDA flux. An absolute uncertainty of 0.02 on spectral index of KM2A measurement is not included. Sources that only have upper limits on flux are not included.
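Since each catalog source is described by a power law anchored at a pivot energy, the differential and integrated energy fluxes follow directly from \(N_{0}\), \(\Gamma\), and \(E_{0}\). A minimal sketch is given below; the parameter values are made up for illustration and are not actual catalog entries.

```python
import math

def dnde(E, N0, gamma, E0):
    """Differential flux dN/dE = N0 (E/E0)^-gamma, in the units of N0 (e.g. TeV^-1 cm^-2 s^-1)."""
    return N0 * (E / E0) ** (-gamma)

def energy_flux(N0, gamma, E0, Emin, Emax):
    """Analytic integral of E * dN/dE between Emin and Emax (e.g. TeV cm^-2 s^-1)."""
    if abs(gamma - 2.0) < 1e-9:
        return N0 * E0**2 * math.log(Emax / Emin)
    return N0 * E0**gamma * (Emax**(2.0 - gamma) - Emin**(2.0 - gamma)) / (2.0 - gamma)

# Hypothetical sources with a WCDA-like pivot (E0 = 3 TeV, 1-25 TeV)
# and a KM2A-like pivot (E0 = 50 TeV, 25-200 TeV).
print(energy_flux(N0=1e-13, gamma=2.5, E0=3.0, Emin=1.0, Emax=25.0))
print(energy_flux(N0=1e-16, gamma=3.2, E0=50.0, Emin=25.0, Emax=200.0))
```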
Figure 1: Summary of the sky regions observed by various \(\gamma\)-ray experiments, including H.E.S.S. Telescope for the GP survey (red rectangle; Abdalla et al., 2018), Tibet AS\(\gamma\) Observatory for the GDE observation (yellow rectangle for region A and dashed cyan rectangle for region B; Amenomori et al., 2021), LHAASO Observatory for the GDE measurement (purple rectangle for outer Galaxy; Cao et al., 2023), HAWC Observatory for the Third HAWC Catalog of Very-high-energy Gamma-ray Sources (3HWC; sky blue curves Albert et al., 2020) and LHAASO for the First LHAASO Catalog of Gamma-Ray Sources (1LHAASO; pink curves; Cao et al., 2021). _Fermi_-LAT and IceCube observe the full sky and are not shown in this plot. Details of the observations are summarized in Table 1 and 2. For reference, the neutral hydrogen (21 cm) emission from HI 4-PI Survey (HI4PI Collaboration et al., 2016) is shown with the column density indicated by the color bar. Plot is in Galactic coordinate.
**4FGL:** Between 50 MeV and 1 TeV, the fourth _Fermi_ Large Area Telescope catalog (4FGL) reports 6659 sources based on 12 years of _Fermi_-LAT data (Abdollahi et al., 2022). We count both "identified" and "associated" source classes, yielding a total of 539 Galactic sources that can be decomposed into the following groups with corresponding designators: 1) 257 pulsars, including 137 young ('PSR' and 'psr') and 120 millisecond pulsars ('MSP'), 2) 20 PWNe ('PWN' and 'pwn'), 3) 43 SNRs ('SNR' and 'snr'), 4) composite SNRs ('spp'), 5) 5 star-forming regions ('SFR' and 'sfr'), 6) 26 binaries ('HMB', 'hmb', 'LMB', 'lmb', 'BIN', 'bin'), 7) 4 novae ('NOV'), 8) 35 globular clusters ('glc'), and 9) the Galactic center ('GC'). For each source, we evaluate the differential flux between 0.1 and 1 TeV based on the parameters for the reported SpectrumType, which can be a power law, a log-parabola, or a power law with a super-exponential cutoff. The errors of the fluxes include systematic uncertainties associated with the detector effective area and the Galactic interstellar emission model.
### Galactic Diffuse Emission
The GDE measurements by various air shower \(\gamma\)-ray observatories are summarized in Table 2 and described below.
**ARGO-YBJ** measured the GDE by subtracting a background map from the event map (Bartoli et al., 2015). Known sources from the TeVCat were excluded using a \(4^{\circ}\times 4^{\circ}/\cos(b)\) mask, where \(b\) is the latitude. Faint sources were not masked but expected to contribute to 2.5%.
**Tibet AS\(\gamma\)** detected the GDE at 5.9 \(\sigma\) by comparing the number of \(\gamma\)-ray-like events from the on region, defined as \(|b|<10^{\circ}\), and the off region, \(|b|>20^{\circ}\). By identifying \(\gamma\)-ray-like events within \(0.5^{\circ}\) of TeVCat sources, Amenomori et al. (2021) concludes that the fractional source contribution to the diffuse component within \(|b|<5^{\circ}\) is 13% above 100 TeV. The events above 398 TeV are likely of a diffuse origin since they neither have accompanying signal at lower energies nor come from directions within \(\sim 0.5^{\circ}\) of known sources. The error bars in the top panels of Figure 2 correspond to \(1\,\sigma\) statistical error. In addition, a systematic error of 30% is expected due to the uncertainty of absolute energy scale (Amenomori et al., 2021).
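A schematic version of this kind of source masking, where events (or map pixels) within a fixed angular radius of catalog positions are discarded, is sketched below. The great-circle separation uses the standard haversine formula; the actual Tibet AS\(\gamma\) and LHAASO procedures are more involved.

```python
import numpy as np

def angular_separation_deg(l1, b1, l2, b2):
    """Great-circle separation (deg) between Galactic coordinates (l, b) given in degrees."""
    l1, b1, l2, b2 = map(np.radians, (l1, b1, l2, b2))
    dl, db = l2 - l1, b2 - b1
    a = np.sin(db / 2) ** 2 + np.cos(b1) * np.cos(b2) * np.sin(dl / 2) ** 2
    return np.degrees(2 * np.arcsin(np.sqrt(a)))

def keep_away_from_sources(event_l, event_b, src_l, src_b, radius_deg=0.5):
    """Boolean array: True for events farther than radius_deg from every catalog source."""
    sep = angular_separation_deg(event_l[:, None], event_b[:, None],
                                 src_l[None, :], src_b[None, :])
    return np.all(sep > radius_deg, axis=1)

# Toy example: two events and one source at (l, b) = (80.0, 1.0) deg.
events_l = np.array([80.2, 100.0])
events_b = np.array([1.1, -2.0])
print(keep_away_from_sources(events_l, events_b,
                             np.array([80.0]), np.array([1.0])))  # [False  True]
```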
**LHAASO** detected the GDE from the inner and outer GP at \(29.1\,\sigma\) and \(12.7\,\sigma\) (Cao et al., 2023). Sources detected by KM2A and additional known sources in TeVCat are masked with a Gaussian width that is 2.5 times the quadratic sum of the point spread function (PSF) of the detector and the source extension. The contribution from the remaining resolved sources is estimated to be \(<10\%\). The GDE flux of the inner Galaxy measured by LHAASO is lower than that of Tibet AS\(\gamma\) as a result of their more numerous and larger source masks. In addition, the innermost Galactic disk at \(15^{\circ}\lesssim l\lesssim 90^{\circ}\) and \(|b|\lesssim 1.5^{\circ}\) is mostly masked in the study of Cao et al. (2023), which could have caused an underestimate of the average GDE in that region.
Figure 2: Comparison of intensities of \(\gamma\) rays from resolved sources (cold colors) and GDE (warm colors) in three sky regions including (1) Tibet Regions A, (2) Tibet Region B, and (3) LHAASO Outer Galaxy region. The source emissivity is evaluated based on a) 3HWC catalog (Albert et al., 2020), which includes 38, 32, and 10 sources, b) 4FGL catalog (Abdollahi et al., 2022), which includes 81, 73, and 25 sources, c) 1LHAASO catalog (Cao et al., 2023), which includes 37, 34, and 9 sources detected by WCDA, and 40, 37, and 10 sources detected by KM2A in the three sky regions, respectively. The total source flux is averaged over the solid angle of the corresponding sky regions. For the GDE, the error bars of Tibet AS\(\gamma\) observations correspond to \(1\,\sigma\) statistical errors and those of the LHAASO flux points correspond to the quadratic sum of the statistical and systematic errors. In the last energy bin of the Tibet AS\(\gamma\) GDE flux, the fainter data points indicate the residual intensity after removing the events relevant to Cygnus Cocoon (40%). In the Tibet Region A plot, the LHAASO flux points correspond to a similar but larger sky region, the LHAASO inner Galaxy region defined as \(15^{\circ}<l<125^{\circ}\) and \(|b|<5^{\circ}\).
Cao et al. (2023) found that the flux of the GDE of the inner Galaxy (\(15^{\circ}<l<125^{\circ}\) and \(|b|\lesssim 5^{\circ}\)) would increase by 61% when no masking is applied.
_Fermi_**-LAT**: We use the Galactic interstellar emission model (GIEM) for the 4FGL catalog analysis (Abdollahi et al., 2022) to evaluate the GDE flux 2. We note that the GDE is contributed by both the interstellar emission and unresolved sources, though the fraction of the latter is at percentage level above 10 GeV (Acero et al., 2016). The GIEM is a linear combination of emission components including the \(\pi^{0}\) decay from hadronic cosmic rays interacting with HI gas and molecular hydrogen traced by the CO emission, as well as dark gas, inverse Compton on the interstellar radiation field, and large structures such as the _Fermi_ Bubbles. The parameters of the model were obtained by fitting to the Pass 8 data. We approximate the model uncertainty with the systematic uncertainty of the Pass 8 data on the effective area 3.
Footnote 2: [https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html)
Footnote 3: [https://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html](https://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html)
De La Torre Luque et al. (2023) compared \(\gamma\)-ray emission models to the _Fermi_-LAT data from the two sky regions observed by Tibet AS\(\gamma\). They conclude that the total flux is dominated by the \(\pi^{0}\) decay of the diffuse cosmic rays at 100-300 GeV, with \(<10\%\) contributed by resolved and unresolved sources, inverse Compton and bremsstrahlung radiation from cosmic-ray electrons, and the isotropic \(\gamma\)-ray background. We therefore use the total flux of the _Fermi_-LAT data from De La Torre Luque et al. (2023) as an approximation of the GDE flux in these two regions.
### GDE vs Source Emission in the \(\gamma\)-Ray Sky
Figure 2 contrasts the intensities of the \(\gamma\)-ray emission by resolved sources and the GDE from three sky regions, from inner Galaxy to outer Galaxy: (1) Tibet region A, \(25^{\circ}<l<100^{\circ}\), \(|b|<5^{\circ}\); (2) Tibet region B, \(50^{\circ}<l<200^{\circ}\), \(|b|<5^{\circ}\); (3) LHAASO outer Galaxy, \(125^{\circ}<l<235^{\circ}\), \(|b|<5^{\circ}\). The shaded bands correspond to the sum of sources in the corresponding sky regions. When summing the sources, we add up the flux linearly and the uncertainties in quadrature for error propagation. For the total flux computed using sources from HGPS, 3HWC, and 1LHAASO catalogs, systematic errors are added with the statistical errors of the flux sum in quadrature, respectively.
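The summation described above amounts to adding fluxes linearly and propagating uncertainties in quadrature; a minimal sketch with made-up numbers follows.

```python
import numpy as np

def sum_source_fluxes(fluxes, stat_errors, sys_frac=0.0):
    """Sum per-source fluxes linearly and propagate errors in quadrature.

    fluxes, stat_errors : per-source (differential) fluxes and their 1-sigma statistical errors.
    sys_frac            : fractional systematic uncertainty applied to the summed flux
                          (e.g. 0.3 for the ~30% HGPS flux systematics).
    """
    fluxes = np.asarray(fluxes)
    stat_errors = np.asarray(stat_errors)
    total = np.sum(fluxes)
    stat = np.sqrt(np.sum(stat_errors**2))               # statistical errors in quadrature
    err = np.sqrt(stat**2 + (sys_frac * total) ** 2)     # add systematic term in quadrature
    return total, err

# Made-up example: three sources in one energy bin (units e.g. TeV cm^-2 s^-1).
print(sum_source_fluxes([2.0e-12, 5.0e-13, 1.2e-12],
                        [3.0e-13, 1.0e-13, 2.5e-13], sys_frac=0.3))
```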
Figure 2 suggests that the GDE is comparable to source emission in the inner Galaxy but may dominate over the source emission in the outer Galaxy.
## 3 Neutrino Emission
Based on the \(\gamma\)-ray observations in Section 2, we derive the upper limit on the Galactic neutrino flux expected from resolved sources and GDE. The connection between \(\gamma\)-ray and neutrino emission through hadronic processes in the Galaxy is studied in Ahlers and Murase (2014); Fang and Murase (2021) and summarized in Appendix B. Since none of the TeV \(\gamma\)-ray experiments covers the full sky, we can only estimate the neutrino emission from the GP using the portion of the plane measured by the \(\gamma\)-ray detectors, under the assumption that the unobserved region has a similar emissivity distribution as the observed region. Details regarding this derivation are described in Appendix C. The neutrino flux expected from all resolved Galactic \(\gamma\)-ray sources and the GDE is shown in Figure 5 in the Appendix.
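As a rough illustration of the \(\gamma\)-ray-to-neutrino conversion (the full treatment is summarized in Appendix B), the sketch below applies the standard pionic bookkeeping for pp interactions, \(E_{\nu}\approx E_{\gamma}/2\) and \(E_{\nu}^{2}\Phi_{\nu}\approx(3/2)\,E_{\gamma}^{2}\Phi_{\gamma}\) for the all-flavor flux, neglecting \(\gamma\)-ray absorption and assuming a purely hadronic origin; the input value is illustrative only.

```python
import numpy as np

def nu_from_gamma(E_gamma_TeV, E2F_gamma):
    """All-flavor neutrino flux implied by a purely hadronic (pp) gamma-ray flux:
    E_nu ~ E_gamma / 2 and E_nu^2 F_nu ~ (3/2) E_gamma^2 F_gamma, ignoring
    gamma-ray absorption and any leptonic contribution."""
    E_nu = np.asarray(E_gamma_TeV) / 2.0
    E2F_nu = 1.5 * np.asarray(E2F_gamma)
    return E_nu, E2F_nu

# illustrative input: a GDE point of 1e-8 GeV cm^-2 s^-1 sr^-1 at E_gamma = 10 TeV
print(nu_from_gamma(10.0, 1e-8))
```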
Some classes of \(\gamma\)-ray sources show clear signatures of leptonic emission. For example, the broadband spectral
Figure 3: All-flavor flux of neutrinos expected from resolved Galactic sources (cold colors, unhatched) and GDE (warm colors, hatched) averaged over the full sky. The source emission is an upper limit based on the assumption that all \(\gamma\)-ray sources not associated with pulsars are hadronic emitters. The source flux is calculated using the measurements of 227 sources from 4FGL (Abdollahi et al., 2022), 65 sources from HGPS (Abdalla et al., 2018), 51 sources from 3HWC (Albert et al., 2020), 36 WCDA sources and 43 KM2A sources from 1LHAASO (Cao et al., 2023). The GDE intensity is converted from _Fermi_-LAT’s Galactic interstellar emission model (Abdollahi et al., 2022), LHAASO (Cao et al., 2023) and Tibet AS\(\gamma\)’s GDE observations (Amenomori et al., 2021; Fang and Murase, 2021). The hatched grey band is the IceCube measurement of the GP using the \(\pi^{0}\) template (IceCube Collaboration, 2023).
energy distribution of the Crab nebula can be well described by the synchrotron and inverse Compton emission of relativistic electrons (H. E. S. S. Collaboration, 2020; Lhaaso Collaboration et al., 2021). A systematic study of the population of pulsar wind nebulae (PWNe) in the HGPS catalog suggests that TeV emission by the population can be consistently explained by energetic leptons (H. E. S. S. Collaboration et al., 2018). TeV halos around middle-aged pulsars are a new phenomenon found by air shower detectors (Abeysekara et al., 2017). They are much more extended than PWNe, where the electron-positron plasma is confined by the ambient medium. The sizes of TeV halos can usually be explained by the cooling of electrons in the CMB, suggesting that they are also likely of leptonic origin.
Motivated by these facts, we exclude sources in 4FGL and HGPS that are classified as pulsars or PWNe. We exclude 3HWC sources that are coincident with these TeV halo candidate pulsars (in Table 4 of Albert et al., 2020). For the 1LHAASO catalog, we remove the sources associated with pulsars (in Table 3 of Cao et al., 2023). In addition, we exclude 1LHAASO J1831\(-\)1007u\({}^{*}\) and 1LHAASO J0703+1405, which are TeV halo candidates that are removed from the 3HWC. Figure 3 presents the neutrino flux of resolved \(\gamma\)-ray sources that are not associated with pulsars.
The neutrino GDE flux is derived using the \(\gamma\)-ray GDE observations listed in Section 2.2. The red band in Figure 3 indicates the full-sky GDE derived using the LHAASO observations in both inner and outer Galaxy by assuming that cosmic-ray density follows the SNR distribution described by equation C3. We also overlay the prediction of Fang and Murase (2021) based on the Tibet AS\(\gamma\) measurement. The grey band presents the IceCube measurement of the GDE using the \(\pi^{0}\) template (IceCube Collaboration, 2023).
Figure 3 shows that in an optimistic scenario where all non-pulsar sources are hadronic emitters, the neutrino emission by the sources could be comparable to the GDE at 1-10 TeV. Above \(\sim\)30 TeV, the neutrino emission from the GP is dominated by the truly diffuse component or unresolved sources that have not been detected by any of the \(\gamma\)-ray observations. Given that a significant fraction of the remaining sources are still promising leptonic emitters, such as composite SNRs (e.g., Cristafari, 2021) and \(\gamma\)-ray binaries/microquasars (e.g., Abeysekara et al., 2018), the neutrino emission of the GP is likely dominated by the emission of diffuse cosmic rays or unresolved sources that are not accounted for. In this sense, it is also intriguing to see that the sum of unresolved hypernova remnants (HNRs) (Ahlers and Murase, 2014) can match the Galactic neutrino flux allowed by the Tibet AS\(\gamma\) measurement.
Figures 3 and 5 suggest that the spectrum of neutrino emission from the GP due to resolved sources is slightly harder than that arising from the GDE, if the \(\pi^{0}\) template is correct. This highlights the importance of discriminating between the model templates used in Galactic neutrino searches (IceCube Collaboration, 2023). Figure 4 further compares theoretical models with the derived and measured neutrino GDE. Measuring both the neutrino spectrum and flux of the GP at 1-10 TeV can help separate these two components.
At 10 TeV, the source flux derived from the HGPS catalog is a few times higher than that from the 1LHAASO and 3HWC catalogs. The sensitivities of the HGPS and 3HWC are comparable (Abdalla et al., 2018; Albert et al., 2020). Comparison of the GP observed by H.E.S.S. and HAWC at \(10^{\circ}<l<60^{\circ}\) found similar integrated fluxes above 1 TeV (Abdalla et al., 2021). As the HGPS covers only a small range of latitudes (\(|b|<3^{\circ}\)), the relatively high neutrino flux derived from the HGPS catalog is probably due to the fact that the SNR model (equation C3) used for the conversion does not sufficiently describe the clustering of \(\gamma\)-ray sources in the inner Galaxy. Furthermore, more than half of the HGPS region is in the Southern sky, which is not accessible to LHAASO and HAWC (see Figure 1). Future air shower \(\gamma\)-ray facilities in the Southern sky are needed to fully understand the difference.
Figure 4: Measured and derived all-flavor neutrino flux from GDE averaged over the full sky (warm colors, unhatched) compared with models, including the KRA models (Gaggero et al., 2015), the CRINGE models (Schwefer et al., 2023), and the unresolved hypernova remnant model (Ahlers and Murase, 2014).
## 4 Discussion and Conclusions
We evaluated the GDE and high-energy neutrino flux from astrophysical sources of the Milky Way based on the latest \(\gamma\)-ray observations. Since the TeV-PeV \(\gamma\)-ray observations are ground-based and partial-sky, the maximum flux of neutrino emission from the entire GP is derived based on models of the source distribution in the Galaxy. When calculating the neutrino emission by sources, we removed sources classified as pulsars, PWNe, and TeV halos which are promising leptonic sources. We found that the contribution from known \(\gamma\)-ray sources is likely lower than the GDE by at least a factor of \(\sim\)2 in the neutrino sky.
The identification and measurement of Galactic neutrino or \(\gamma\)-ray sources involve a separation of the GDE component. A small fraction of the source flux could arise from the GDE and the isotropic emission (Cao et al., 2023). This would further lower the source contribution and support our conclusion.
We have assumed that the \(\gamma\)-ray emission of pulsars, PWNe, and TeV halos mostly comes from relativistic electrons and positrons. High-energy neutrinos could be emitted by fast-spinning newborn pulsars, although the birth rate of such sources in the local Universe is relatively low (Bednarek & Protheroe, 1997; Murase et al., 2009; Fang, 2015).
Our results confirmed the previous findings that the Galactic contribution is subdominant in the all-sky neutrino flux (Ahlers & Murase, 2014; Fang & Murase, 2021). Although our conclusion does not directly apply to quasi-isotropic emission, this has also been constrained by not only _Fermi_-LAT but also TeV-PeV \(\gamma\)-ray observations (Murase et al., 2013; Ahlers & Murase, 2014; Murase et al., 2016).
Upcoming neutrino telescopes such as KM3NeT, Baikal-GVD and IceCube-Gen2 (The IceCube-Gen2 Collaboration et al., 2020) may resolve individual Galactic sources and disentangle the source emission and GDE. Future air shower \(\gamma\)-ray experiments in the Southern hemisphere such as the Southern Wide-field Gamma-ray Observatory (Albert et al., 2019) are also crucial to understanding the emission of the entire GP.
The work of K.F. is supported by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin-Madison with funding from the Wisconsin Alumni Research Foundation. K.F. acknowledges support from National Science Foundation (PHY-2110821, PHY-2238916) and from NASA through the Fermi Guest Investigator Program (80NSSC22K1584). The work of K.M. is supported by the NSF grants No. AST-1908689, No. AST-2108466 and No. AST-2108467, and KAKENHI No. 20H01901 and No. 20H05852.
|
2305.15433 | Ultralong 100 ns Spin Relaxation Time in Graphite at Room Temperature | Graphite has been intensively studied, yet its electron spin dynamics
remains an unresolved problem even 70 years after the first experiments. The
central quantities, the longitudinal ($T_1$) and transverse ($T_2$) relaxation
times were postulated to be equal, mirroring standard metals, but $T_1$ has
never been measured for graphite. Here, based on a detailed band structure
calculation including spin-orbit coupling, we predict an unexpected behavior of
the relaxation times. We find, based on saturation ESR measurements, that $T_1$
is markedly different from $T_2$. Spins injected with perpendicular
polarization with respect to the graphene plane have an extraordinarily long
lifetime of $100$ ns at room temperature. This is ten times more than in the
best graphene samples. The spin diffusion length across graphite planes is thus
expected to be ultralong, on the scale of $\sim 70~\mu$m, suggesting that thin
films of graphite -- or multilayer AB graphene stacks -- can be excellent
platforms for spintronics applications compatible with 2D van der Waals
technologies. Finally, we provide a qualitative account of the observed spin
relaxation based on the anisotropic spin admixture of the Bloch states in
graphite obtained from density functional theory calculations. | B. G. Márkus, M. Gmitra, B. Dóra, G. Csősz, T. Fehér, P. Szirmai, B. Náfrádi, V. Zólyomi, L. Forró, J. Fabian, F. Simon | 2023-05-22T21:15:16Z | http://arxiv.org/abs/2305.15433v1 | # Ultralong 100 ns Spin Relaxation Time in Graphite at Room Temperature
###### Abstract
Graphite has been intensively studied, yet its electron spin dynamics remains an unresolved problem even 70 years after the first experiments. The central quantities, the longitudinal (\(T_{1}\)) and transverse (\(T_{2}\)) relaxation times, were postulated to be equal, mirroring standard metals, but \(T_{1}\) has never been measured for graphite. Here, based on a detailed band structure calculation including spin-orbit coupling, we predict an unexpected behavior of the relaxation times. We find, based on saturation ESR measurements, that \(T_{1}\) is markedly different from \(T_{2}\). Spins injected with perpendicular polarization with respect to the graphene plane have an extraordinarily long lifetime of \(100\) ns at room temperature. This is ten times more than in the best graphene samples. The spin diffusion length across graphite planes is thus expected to be ultralong, on the scale of \(\sim 70\)\(\mu\)m, suggesting that thin films of graphite -- or multilayer AB graphene stacks -- can be excellent platforms for spintronics applications compatible with 2D van der Waals technologies. Finally, we provide a qualitative account of the observed spin relaxation based on the anisotropic spin admixture of the Bloch states in graphite obtained from density functional theory calculations.
Footnote †: Corresponding author: [email protected]
Spintronic devices require materials with a suitably long spin-relaxation time, \(\tau_{\rm s}\). Carbon nanomaterials, such as graphite intercalated compounds[1], graphene[2], fullerenes[3], and carbon nanotubes[4], have been considered[5, 6, 7] for spintronics[8, 9, 10], as small spin-orbit coupling (SOC) systems with low concentration of magnetic \({}^{13}\)C nuclei which contribute to a long \(\tau_{\rm s}\). However, experimental data and the theory of spin-relaxation in carbon based materials face critical open questions. Chiefly, the absolute value of \(\tau_{\rm s}\) in graphene is debated with values ranging from \(100\) ps to \(12\) ns[11, 12, 13, 14, 15, 16], and theoretical investigations suggest an extrinsic origin of the measured short \(\tau_{\rm s}\) values[17].
Contemporary studies, in order to introduce functionality into spintronic devices[18, 19, 20, 21, 22, 23, 24, 25, 26], focus on tailoring the SOC in two-dimensional heterostructures with the help of proximity effect[27, 28, 29, 30]. Theory predicted a giant spin-relaxation anisotropy in graphene when in contact with a large-SOC material[31] that was subsequently observed in mono- and bilayer graphene[32, 33, 34, 35, 36, 37]. This is in contrast with graphene on a SOC-free substrate having a nearly isotropic spin-relaxation[38, 39, 40, 41]. It would be even better to have materials with an intrinsic spin-relaxation time anisotropy, which would enable efficient control over the spin transport and thereby boost the development of spintronic devices.
A remarkably simple example of an anisotropic carbon-based material is graphite, which, while being one of the most extensively studied crystalline materials, still holds several puzzles. Specifically, the spin-relaxation, its anisotropy, and the \(g\)-factor are not yet understood in graphite, and this represents a 70-year-old challenge. As early as 1953, the first spin spectroscopic study of graphite[42, 43] used conduction electron spin resonance (CESR). The CESR linewidth, \(\Delta B\), yields directly the spin-decoherence time: \(T_{2}=(\gamma\Delta B)^{-1}\), where \(\gamma/2\pi\approx 28\) GHz/T is the electron gyromagnetic ratio (which is related to the \(g\)-factor as \(|\gamma|=g\mu_{\rm B}/\hbar\)). Magnetic resonance is characterized by two distinct relaxation times, \(T_{1}\) and \(T_{2}\), which denote the relaxation of the components parallel and perpendicular to the external magnetic field, respectively[44]. In zero magnetic field, \(T_{1}=T_{2}=\tau_{\rm s}\) holds and the latter parameter is measured in spin-injected transport studies.
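For orientation, the conversion implied by \(T_{2}=(\gamma\Delta B)^{-1}\) is easy to evaluate; the linewidths used below are arbitrary illustrative numbers, not measured values from this work.

```python
import numpy as np

gamma = 2 * np.pi * 28e9                  # electron gyromagnetic ratio, rad s^-1 T^-1
for dB_mT in (0.1, 1.0):                  # illustrative CESR linewidths (assumed)
    T2 = 1.0 / (gamma * dB_mT * 1e-3)     # T2 = 1 / (gamma * dB)
    print(f"dB = {dB_mT} mT  ->  T2 = {T2 * 1e9:.1f} ns")
```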
The ESR linewidth and \(g\)-factor have a peculiar anisotropy in graphite. \(\Delta B\) is about twice as large for a magnetic field |
2303.09394 | Dynamics of Voltage Driven Self-Sustained Oscillations in NdNiO$_3$
Neuristors | Active memristor elements, also called neuristors, are self-oscillating
devices that are very good approximations to biological neuronal functionality
and are crucial to the development of low-power neuromorphic hardware.
Materials that show conduction mechanisms that depend superlinearly with
temperature can lead to negative differential resistance (NDR) regimes, which
may further be engineered as self-oscillators. Thermal runaway, insulator to
metal phase transitions (IMT) can lead to such superlinearity and are being
extensively studied in systems such as TaO$_x$, NbO$_x$ and VO$_2$. However,
ReNiO$_3$ systems that offer large tunability in metal-insulator transition
temperatures are less explored so far. Here we demonstrate all-or-nothing
neuron-like self-oscillations at MHz frequency and low temperatures on thin
films of NdNiO$_3$, a model charge transfer insulator, and their frequency
coding behavior. We study the temperature dependence of NDR and show that it
vanishes even at temperatures below the IMT temperature. We also show that the
threshold voltages scale with device size and that a simple electrothermal
device model captures all these salient features. In contrast to existing
models, our model correctly predicts the independence of oscillation amplitude
with the applied voltage, offering crucial insights about the nature of fixed
points in the NDR region, and the dynamics of non-linear oscillations about
them. KEYWORDS: NDR, oscillations, thermal model. | Upanya Khandelwal, Qikai Guo, Beatriz Noheda, Pavan Nukala, Saurabh Chandorkar | 2023-03-15T07:37:57Z | http://arxiv.org/abs/2303.09394v1 | # Dynamics of Voltage Driven Self-Sustained Oscillations in NdNiO\({}_{3}\) Neuristors
###### Abstract
Active memristor elements, also called neuristors, are self-oscillating devices that are very good approximations to biological neuronal functionality and are crucial to the development of low-power neuromorphic hardware. Materials that show conduction mechanisms that depend superlinearly with temperature can lead to negative differential resistance (NDR) regimes, which may further be engineered as self-oscillators. Thermal runaway, insulator to metal phase transitions (IMT) can lead to such superlinearity and are being extensively studied in systems such as TaOx, NbOx and VO\({}_{2}\). However, ReNiO\({}_{3}\) systems that offer large tunability in metal-insulator transition temperatures are less explored so far. Here we demonstrate all-or-nothing neuron-like self-oscillations at MHz frequency and low temperatures on thin films of NdNiO\({}_{3}\), a model charge transfer insulator, and their frequency coding behavior. We study the temperature dependence of NDR and show that it vanishes even at temperatures below the IMT temperature. We also show that the threshold voltages scale with device size and that a simple electrothermal device model captures all these salient features. In contrast to existing models, our model correctly predicts the independence of oscillation amplitude with the applied voltage, offering crucial insights about the nature of fixed points in the NDR region, and the dynamics of non-linear oscillations about them.
NDR, oscillations, thermal model.
Current-controlled negative differential resistance (CC-NDR) in strongly correlated oxide-based devices, such as niobium dioxide [2, 3, 4, 5] and vanadium dioxide [6], has recently gained a lot of attention in the context of neuromorphic computing [7, 8]. CC-NDR is associated with conduction mechanisms that show a superlinear dependence on temperature [9], typically driven either by thermal runaway effects or by an insulator-to-metal transition (IMT) triggered by Joule heating [10, 11]. Quasistatic NDR results from device instability and local activity [12, 13], and under appropriate conditions can lead to electrical self-oscillations [14, 15, 16, 17]. Devices displaying an NDR regime are able to amplify electrical signals, within a given parameter range, and are referred to as locally active [18]. Utilizing such locally active memristors as neuronal elements in a neuromorphic architecture enables low power computing [19, 20, 21, 22].
NDR is most often reported in VO2 and NbO2 devices. The IMT transition temperature of VO2 is 340 K [23], while that of NbO2 occurs at very high temperatures (~1080 K) [24]. It has recently been put forward that reaching the IMT in a device setting in NbO2 is unlikely and the observed oscillations are most likely due to thermal run-away effects [25, 26, 27] that occur at lower temperatures [7, 28]. Perovskite rare-earth nickelates (ReNiO3, Re = Pr, Nd, Sm, Eu, ..., Lu) are charge-transfer insulators showing rich correlated electron physics, and exhibit an IMT owing to charge disproportionation [29]. Suitable choice of the Re element (or combinations of Re elements) allows for tunability in the transition temperatures between 100 and 800 K [29]. As a result, these systems are suitable playgrounds to independently assess the CC-NDR driven oscillatory behavior originating from IMT, thermal runaway and their coupled effects.
NDR arising from IMT has been recently shown in thin films of NdNiO3 and SmNiO3 [30]. In another work, H-doped NdNiO3 (NNO) was proposed as a potential candidate for memristor based neural network with applications in artificial intelligence, although the relevant effects arise owing to the behavior of hydrogen dopants and not the IMT behavior [31]. However, studies that explore electrical self-oscillations and their dynamics from ReNiO3 based systems are lacking or very exploratory.
In this work, we systematically study the volatile switching behavior, and for the first time determine characteristics of self-sustained electrical oscillations on a model system consisting of NNO films epitaxially grown on LaAlO3 (LAO) substrates. We fabricate lateral two-terminal devices of various dimensions and investigate the temperature and channel length
dependence of the CC-NDR behavior. We demonstrate non-linear current oscillations at ~MHz frequencies (and higher harmonics) taking place about a fixed point in the NDR region. The operating point of the dynamical system is set by a suitable choice of biasing resistor and external voltage. We show that the oscillation frequency can be tuned with external voltage, and we gain an accurate and clear understanding of all these salient features through a simple coupled electro-thermal modeling.
## 2 Results and Discussions
### Film growth and quality checks:
Epitaxial thin films of NNO (5 nm) were grown on LAO substrates using pulsed laser deposition, with conditions reported elsewhere [32]. The defect density in these films has been systematically controlled [33] and the films used in the present work are very high-quality with low defect densities, as reflected in the structural and electrical property measurements reported in Ref. [32]. The resistance-temperature measurements reveal a hysteretic first-order IMT phase transition, with an IMT temperature \(T_{\text{IMT}}\) = 120 K during the heating cycle.
### Transport characteristics of devices:
On these films we fabricate two-terminal lateral devices, with channel lengths[1] of 200-800 nm, using e-beam lithography followed by metallization with Pt electrodes (see Fig. 1a, Methods). Voltage-controlled (VC) current-voltage (I-V) sweeps (Fig. 1b) on these two-terminal devices show threshold-switching behavior similar to what is reported in [30]. The threshold voltage, defined as the voltage at which the insulating state transforms to a metallic state, decreases
with increasing ambient temperature (Fig. 1c), as expected (less power is required to Joule-heat the device to \(T_{\text{IMT}}\) and beyond). We also note that the hysteresis associated with this phase transition reduces with increasing ambient temperature (Fig. 1c). Current-controlled sweeps allow us to access the NDR regions (Fig. 1b) during the transition from the insulating to the metallic phase. The device voltage at the onset of NDR (\(V_{\text{N}}\)), which corresponds to the threshold voltage in the VC I-V measurements, scales with the channel length (Fig. 1d).
### Characteristics and salient features of self-oscillations:
To study the self-oscillations, we added a biasing resistor (10 kΩ) in series with the device, and systematically increased the magnitude of voltage pulses (20 \(\upmu\)s) from 5 V to 15 V while simultaneously measuring the current response. The data on a representative device (channel length = 760 nm) are shown in Fig. 2. For applied voltages below 9 V, we observe a regular charging and discharging behavior of the RC circuit element (Fig. 2a). However, from 9 V to 13 V, the device shows periodic non-linear (multiple harmonics), asymmetric current oscillations (Fig. 2b,2d). The amplitude of current oscillations remains constant at ~3.5 mA, independent of applied voltage, and this aligns with the all-or-none law in neurons [34, 35].
From the load line analysis, we show that oscillations occur only when the device operating point falls in the NDR region (Fig. 3a, applied voltages: 9 V to 12 V). Importantly, we note that the frequency of oscillations increases with the applied voltage, and that its rate of increase decreases with the resistance of the external resistor. In other words, the frequency of
oscillations in our devices encodes information about the applied external stimulus, a behavior known as frequency coding in the nervous system [36]. Furthermore, we note that at a particular voltage, the frequency of oscillations decreases with an increase in resistance of the external resistor (Fig.3b), as also demonstrated in other neuristor systems [37].
### Electrothermal device modeling:
To understand the various features of the observed CC-NDR and self-sustained oscillatory behavior, we carried out electrothermal device modeling using LT Spice. The coupled electrical and heat transport problem is represented by the circuits shown in Figs. S1-S3.
**Fig.3**: Load-line analysis and frequency coding. (a) Load-line analysis for a 10 kΩ resistor. (b) Frequency of current oscillations with voltage for different biasing resistors.
The electrical model for our device consists of resistors R\({}_{\rm x}\), R\({}_{\rm d}\) and a device capacitance C\({}_{\rm d}\) in a parallel configuration. An appropriate source can be applied across the device to match the measurement protocols and a source and line capacitance has also been included to accurately model the physics of the system. It has been shown in other works [38, 39] that only a part of the channel participates in the phase transition, which is modeled here as a non-linear resistor (R\({}_{\rm d}\)) that changes with device temperature as per the conductivity vs temperature characteristics [32]. R\({}_{\rm x}\), on the other hand is the background resistance of the non-active region in the channel and is assumed to be relatively independent of temperature as substantiated by our thermal model. Incorporation of a separate R\({}_{\rm x}\) allows us to correctly predict the order of magnitude of the values of currents in self-oscillations. The output power of the electrical circuit is then used as the heat source for the thermal circuit. Heat can be dissipated vertically across various layers to the substrate, or laterally along the length of the device and along the width outside the channel. Various heat dissipation channels are lumped as thermal resistors, with equivalent thermal resistance (R\({}_{\rm t}\)) given in Eq.1. Equivalent thermal capacitances (C\({}_{\rm t}\)) of various layers are given in Eq.1.
The thermal resistance (R\({}_{\rm t}\)) and thermal capacitance (C\({}_{\rm t}\)) are estimated from the material parameters and geometry as follows:
\[R_{t}=\frac{L}{kA},\qquad C_{t}=\rho CV\] (Eq. 1)
where, L: length of the heating element, k: thermal conductivity of material, A: cross-sectional area, \(\rho\): mass density, C: heat capacity and V: volume. (Details in supplementary).
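As a rough numerical illustration of Eq. 1 (not the calibrated parameters of the model, which are listed in the supplementary), one can insert order-of-magnitude material and geometry values for a thin NNO channel; every number in the snippet below is an assumption.

```python
# Lumped thermal resistance and capacitance from Eq. 1: R_t = L/(k*A), C_t = rho*C*V.
# All values below are assumed, order-of-magnitude stand-ins for a thin NNO channel,
# not the fitted parameters of the electrothermal model.
L_th   = 100e-9          # m, heat-flow path length (assumed)
k_th   = 3.0             # W m^-1 K^-1, thermal conductivity (assumed)
A      = 500e-9 * 5e-9   # m^2, cross-section: channel width x 5 nm film (assumed)
rho    = 7.2e3           # kg m^-3, mass density (assumed)
c_heat = 450.0           # J kg^-1 K^-1, specific heat (assumed)
V      = A * L_th        # m^3, heated volume

R_t = L_th / (k_th * A)
C_t = rho * c_heat * V
print(f"R_t ~ {R_t:.2e} K/W, C_t ~ {C_t:.2e} J/K, tau ~ {R_t * C_t * 1e9:.2f} ns")
```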
Thermal and electrical conductivities of the device as a function of temperature across the phase transition in NNO are modeled as a sigmoidal function, as shown in equations 2 and 3 (also see Supplementary table for exact values used in the model).
\[\sigma=\sigma_{ins}+(\sigma_{m}-\sigma_{ins})\,\frac{1}{1+\exp\!\left(\frac{T_{s}-T}{\alpha}\right)}\] (Eq. 2) \[\kappa=\kappa_{ins}+(\kappa_{m}-\kappa_{ins})\,\frac{1}{1+\exp\!\left(\frac{T_{s}-T}{\alpha}\right)}\] (Eq. 3)
Here, \(\sigma_{ins},\sigma_{m}\) and \(\kappa_{ins},\kappa_{m}\) are the insulating and metallic electrical and thermal conductivities, respectively. \(T_{s}\) and \(\alpha\) are the transition temperature and the spread about the transition
temperature, respectively. The various results of our electrothermal simulations are shown in Fig. 5.
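To make the mechanism transparent, the following minimal Python sketch integrates a simplified version of the lumped model (one temperature node, the sigmoidal conductance of Eq. 2, the biasing resistor, and a lumped device/line capacitance). It is not the LTspice model itself, and all numbers are assumed, order-of-magnitude stand-ins chosen so that neither the purely insulating nor the purely metallic static solution is self-consistent.

```python
import numpy as np

# Toy lumped electrothermal oscillator: series source + biasing resistor, device
# conductance G(T) with a sigmoidal insulator-to-metal crossover, and a single
# thermal node with resistance R_t and capacitance C_t.  All values are assumed.
V0, R_s, C_d = 10.0, 10e3, 10e-12        # source (V), bias resistor, capacitance
R_t, C_t, T_amb = 1e5, 1e-12, 10.0       # thermal resistance, capacitance, bath (K)
T_s, alpha = 120.0, 5.0                  # transition temperature and spread (K)
G_ins, G_met = 1 / 20e3, 1 / 300         # insulating / metallic conductance (S)

def G(T):                                # sigmoidal conductance, cf. Eq. 2
    return G_ins + (G_met - G_ins) / (1.0 + np.exp((T_s - T) / alpha))

dt, n = 2e-11, 200_000                   # 4 us of simulated time
V, T = 0.0, T_amb
current = np.empty(n)
for i in range(n):
    I_dev = V * G(T)                     # current through the active region
    P = V * I_dev                        # Joule power dissipated in the device
    dV = ((V0 - V) / R_s - I_dev) / C_d  # charge balance on the capacitance
    dT = (P - (T - T_amb) / R_t) / C_t   # lumped heat balance
    V += dt * dV
    T += dt * dT
    current[i] = I_dev

late = current[n // 2:]
print(f"late-time device current: {late.min()*1e3:.2f} - {late.max()*1e3:.2f} mA")
```

A large late-time peak-to-peak current printed by the script indicates self-sustained relaxation oscillations; with a stable operating point the same script would instead settle to an essentially constant current.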
Our results accurately reflect the CC-NDR behavior and the reduction in \(V_{\text{N}}\) with reduction in channel length (Fig. 5a). This is a result of the larger heat dissipation in longer channels, which must be compensated by a larger power input (or larger \(V_{\text{N}}\)) to heat the device to \(T_{\text{IMT}}\). The decrease of threshold voltage with increasing ambient temperature is also nicely captured (Fig. 5b), confirming the predominant role of Joule heating in the IMT. Furthermore, our modeling predicts the weakening of NDR with increased ambient temperature (Fig. 5c), and the complete disappearance of the same at an ambient temperature of ~60 K, which is
much below T\({}_{\text{IMT}}\) (120 K). Indeed, we experimentally verify that devices operating at 80K do not exhibit a threshold voltage of transition (Fig.1c) or associated self-sustained oscillations. We also model the dependence of frequency on biasing resistor and applied voltage (Fig.5f). These curves show a non-linear dependence of frequency with voltage. Although our experiments did not reflect this non-linearity clearly, it was observed experimentally in other threshold memristor systems [40].
Next, using the same parameters for the model as those used for simulating the I-V characteristics, we simulated the current oscillations for a specific device (l = 760 nm) at an ambient temperature of 10 K, with a biasing resistor (10 k\(\Omega\)) in series and a voltage \(V_{0}\) applied across the circuit (Fig. S2). For \(V_{0}\) in the range 8 V to 18 V, the device local temperature oscillates from 22 K to 214 K (Fig. 5d), independent of the value of \(V_{0}\). This corresponds to current fluctuations of a few mA (Fig. 5e), also independent of \(V_{0}\), as observed experimentally. A representative limit cycle for the oscillation of the device is shown in Fig. S4.
The independence of the current oscillation amplitude from the applied voltage was also observed experimentally by us, as well as by other researchers on different Mott-insulating systems [39]. However, unlike previous models, our simple electrothermal model precisely captures this behavior. Finally, our model also captures the frequency coding of the external stimulus observed experimentally (Fig. 3b). This is consistent with a larger input voltage requiring less time for the capacitor to charge to the threshold voltage, thereby increasing the oscillation frequency [41].
## 3 Conclusion:
We systematically explored the volatile switching behavior and characteristics of self-sustained electrical oscillations in the lateral, two-terminal NdNiO\({}_{3}\) (NNO) devices that occur about a fixed point in the NDR region of the device. Our devices emulate an all-or-nothing neuronal oscillatory behavior [34, 35], and also encode the external stimulus in the frequency of the oscillations (frequency coding). Our electrothermal model accurately reproduces all the observed behavior and shows that a superlinear dependence of conductivity on device temperature at the phase transition causes NDR and coupled current oscillations. More importantly, for the first time, our model correctly shows the independence of the current fluctuation amplitude from the external stimulus. Although this feature was observed in several other neuristors, it could not be captured through models [38, 39]. Furthermore, our model
seamlessly captures the I-V characteristics of devices with different lengths as well as their dependence on the ambient temperature. Our work opens up a framework to explore the family of rare-earth nickelates, which offer tunable transition temperatures and hence a platform to study the interplay between multiple mechanisms contributing to NDR, such as the coupling of thermal-runaway-based and IMT-based neuristor behaviors, and their possible power advantages.
## Acknowledgements:
This work was partly carried out at Micro and Nano Characterization Facility(MNCF), and National Nanofabrication Centre(NNFC) located at CeNSE, IISc, Bengaluru, and benefitted from all the help and support from the staff. P.N. acknowledges Start-up grant from IISc, Infosys Young Researcher award, and DST-starting research grant SRG/2021/000285. The authors acknowledge funding support from the Ministry of Education (MoE) from Ministry of Electronics and Information Technology (MeitY) and Department of Science and Technology (DST) through NNetRA. BN, QG acknowledge the financial support of the CogniGron research center and the Ubbo Emmius Funds (University of Groningen). SC acknowledges Indian Space Research Organization, Government of India (Gol) under Grant DS_2B13012(2)/41/2018-Sec.2, by the Ministry of Electronics and Information Technology, Gol under 25(2)/2020_ESDA DT.28.05.2020 and Ministry of Human Resource and Development, Gol Grant SR/MHRD_18_0017. SC also acknowledges DRDO JATP: DFTM/02/3125/M/12/MNSST-03.
|
2302.00927 | Out-of-Time-Order Correlation as a Witness for Topological Phase
Transitions | We propose a physical witness for dynamically detecting topological phase
transitions (TPTs) via an experimentally observable out-of-time-order
correlation (OTOC). The distinguishable OTOC dynamics appears in the
topological trivial and non-trivial phases due to the topological locality. In
the long-time limit, the OTOC undergoes a {\it zero-to-finite-value transition}
at the critical point of the TPTs. This transition is robust to the choices of
the initial state of the system and the used operators in OTOC. The proposed
OTOC witness can be applied into the systems with and without chiral symmetry,
e.g., the lattices described by the SSH model, Creutz model, and Haldane model.
Moreover, our proposal, as a physical witness in real space, is still valid
even in the presence of disorder. Our work fundamentally offers a new prospect
of exploring topological physics with quantum correlations. | Qian Bin, Liang-Liang Wan, Franco Nori, Ying Wu, Xin-You Lü | 2023-02-02T07:57:22Z | http://arxiv.org/abs/2302.00927v2 | # Out-of-Time-Order Correlation as a Witness for Topological Phase Transitions
###### Abstract
We propose a physical witness for dynamically detecting topological phase transitions (TPTs) via an experimentally observable out-of-time-order correlation (OTOC). The distinguishable OTOC dynamics appears in the topological trivial and non-trivial phases due to the topological locality. In the long-time limit, the OTOC undergoes a _zero-to-finite-value transition_ at the critical point of the TPTs. This transition is robust to the choices of the initial state of the system and the operators used in the OTOC. The proposed OTOC witness can be applied to systems with and without chiral symmetry, e.g., the lattices described by the SSH model, Creutz model, and Haldane model. Moreover, our proposal, as a physical witness in real space, is still valid even in the presence of disorder. Our work fundamentally offers a new prospect of exploring topological physics with quantum correlations.
Topological phase transitions (TPTs) are fundamentally interesting in modern physics because they go beyond the paradigm of traditional phase transitions associated with symmetry breaking [1]. This offers a non-trivial paradigm for the classification of matter phases, and thus is attracting enormous attention in condensed matter physics [2; 3; 4; 5], optics [6], and non-Hermitian physics [7]. The occurrence of TPTs involves the closing and reopening of a band gap (a change of the system topology) while the symmetry is preserved. According to the extended bulk-boundary correspondence, the \(n\)th-order TPT in a \(d\)-dimensional (dD) system leads to the appearance of a \((d-n)\)-dimensional gapless boundary state in the topological non-trivial phase [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. This symmetry-protected boundary state has strong robustness to disorder [20; 21; 22] and defects [23]. It can be used to realize topological lasers exhibiting robust transport [23; 24; 25; 26; 27], topologically protected quantum coherence [28; 29], and quantum state transfer [30]. Thus, the detection of TPTs is key to exploring topological physics. To quantitatively distinguish the topological trivial and non-trivial phases, normally one calculates topological invariants (e.g., winding number and Chern number) in momentum space [31]. However, identifying TPTs with those commonly used topological invariants is not suited for disordered systems, where it is difficult to give the Hamiltonian in momentum space. Then, it becomes a significant task to identify TPTs via an alternative physical witness in real space that is robust to disorder.
The OTOC, defined as \(\mathcal{O}(t)=\langle W^{\dagger}(t)V^{\dagger}W(t)V\rangle\) with \(W(t)=e^{iHt}We^{-iHt}\), was proposed in investigating the holographic duality between a strongly interacting quantum system and a gravitational system [32; 33; 34; 35; 36; 37]. Here \(W\) and \(V\) are initially commuting operators [38]. Different from the normal time-order correlation function characterizing classical and quantum statistics [39; 40; 41; 42; 43; 44], the OTOC can quantify the temporal and spatial correlations throughout many-body quantum systems, which is closely related to information scrambling. Thus, it is a widely used tool for diagnosing chaotic behavior [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63], many-body localization [64; 65; 66; 67; 68; 69; 70; 71], entanglement [72; 73; 74; 75; 76], and quantum phase transitions [77; 78; 79; 80; 81; 82; 83; 84]. Here, many-body localization is a nonequilibrium many-body phenomenon caused by many-body interactions. This is essentially different from TPTs, which describe the change of the topological structure of systems. Within the framework of band topology, TPTs normally occur in systems without many-body interactions. Recently, the dynamical detection of TPTs with OTOCs has been proposed [81], where the infinite-temperature OTOCs can directly probe zero-temperature quantum phases via detecting the presence of Majorana zero modes. This constructs a relation between infinite-temperature information scrambling and zero-temperature \(\mathbb{Z}_{2}\) topological order. Moreover, the OTOC can also be implemented experimentally [85; 86; 87; 88; 89] by connecting the time reversal to the Loschmidt echo technique [90; 91; 92]. Further exploiting OTOC dynamics in topological systems may open a door for completing the challenging problem of identifying TPTs in the presence of disorder. Until now, the relation between zero-temperature OTOCs and TPTs has remained largely unexplored; establishing it may substantially advance the fields of quantum correlation and topological physics.
Here we propose a zero-temperature OTOC witness for dynamically detecting \(\mathbb{Z}\)-type TPTs in lattice systems. As shown in Fig. 1(a), the constructed OTOC becomes an experimentally observable fidelity [85] of a final state \(\rho_{f}\) projected onto an initial state \(\rho_{0}\) by defining
\(V=\rho_{0}=|\psi_{0}\rangle\langle\psi_{0}|\), i.e.,
\[\mathcal{O}(t)=\mathrm{tr}[\rho_{0}e^{iHt}W^{\dagger}e^{-iHt}\rho_{0}e^{iHt}We^{- iHt}]=F(t). \tag{1}\]
Due to the topological locality, the long-time limit of the OTOC \(\mathcal{O}(t\rightarrow\infty)\) undergoes a _zero-to-finite-value transition_ as the system enters the non-trivial phase from the trivial phase. This sudden change is not limited by the choices of the operators \(V\) (corresponding to the initial state of the system) and \(W\). In comparison with previous methods of detecting TPTs [5], the proposed OTOC, as a witness in real space, can be applied in _disordered systems_. Moreover, it is not only suitable for the systems with chiral symmetry described by the nearest-neighbor (NN) Su-Schrieffer-Heeger (SSH) model, next-next-nearest-neighbor (NNNN) SSH model and Creutz model, but can also be applied to systems without chiral symmetry, such as 2D lattices described by the Haldane model and Qi-Wu-Zhang model. We also demonstrate the validity of the OTOC witness for detecting second-order TPTs.
_Detecting TPTs in the systems with chiral symmetry._--Without loss of generality, we choose the 1D SSH model and Creutz model depicted in Figs. 1(b,c) as examples for demonstrating the validity of detecting TPTs with OTOC in the systems with chiral symmetry. The corresponding system Hamiltonians can be written as [31, 93, 94, 95]
\[H_{\mathrm{s}}\!\!=\!\!\!\sum_{n}\!\{\nu_{n}a_{n}^{\dagger}\sigma_{1}a_{n}\! \!+\!\![(\omega_{n}a_{n+1}^{\dagger}\!\!+\!\epsilon\eta a_{n+2}^{\dagger}) \frac{\sigma_{1}\!\!+\!i\sigma_{2}}{2}a_{n}\!\!+\!\mathrm{h.c.}]\}, \tag{2a}\] \[H_{\mathrm{cr}}\!\!=\!\!\sum_{n}\!\{\eta_{0}a_{n}^{\dagger}\sigma_ {1}a_{n}+\eta_{0}^{\prime}[a_{n+1}^{\dagger}\frac{\sigma_{1}\!-\!i\sigma_{3}} {2}a_{n}+\mathrm{h.c.}]\}, \tag{2b}\]
where the number of cells is \(N\), \(\sigma_{j}\) (\(j=0,1,2,3\)) is a Pauli operator, and \(a_{n}^{\dagger}=(a_{n,A}^{\dagger},a_{n,B}^{\dagger})\) collects the creation operators of the unit cell \(n\) with sublattices \(A\), \(B\). For the SSH model with Hamiltonian \(H_{\mathrm{s}}\), \(\omega_{n}=\epsilon(1+d_{1}r_{n})\) [or \(\nu_{n}=\epsilon(\nu+d_{2}r_{n}^{\prime})\)] is the intercell (or intracell) hopping strength. Disorder with the dimensionless strengths \(d_{1}\), \(d_{2}\) has been included here, and \(r_{n}\), \(r_{n}^{\prime}\) are independent random real numbers chosen from the uniform distribution \([-0.5,0.5]\). Physically, \(\epsilon\) is the characteristic intercell strength, \(\nu\) is the ratio of intra- to inter-cell hopping in the clean system, and \(\epsilon\eta\) is the NNNN hopping strength. Here, \(H_{\mathrm{s}}\) is reduced to a standard Hamiltonian of the NN SSH model when \(\eta=0\). For the Creutz model with Hamiltonian \(H_{\mathrm{cr}}\), the arrows indicate the sign of the hopping phase, and \(\eta_{0}\) (\(\eta_{0}^{\prime}\)) is the vertical (horizontal and diagonal) hopping strength. The above models possess a chiral symmetry with a well-defined chiral operator \(\mathcal{C}_{\mathrm{1d}}\), which can reverse the energy of the system, i.e., \(\mathcal{C}_{\mathrm{1d}}H\mathcal{C}_{\mathrm{1d}}^{-1}=-H\) (\(H=H_{\mathrm{s}},H_{\mathrm{cr}}\)), where \(\mathcal{C}_{\mathrm{1d}}=\sum_{n=1}^{N}a_{n}^{\dagger}\sigma_{3}a_{n}\) for the SSH model and \(\mathcal{C}_{\mathrm{1d}}=\sum_{n=1}^{N}a_{n}^{\dagger}\sigma_{2}a_{n}\) for the Creutz model.
Let's first consider the case of no disorder, i.e., \(d_{1}=d_{2}=0\): the NN (and NNNN) SSH model and Creutz model feature TPTs at \(\nu=1\) (and \(\eta=0,1\)) and \(\eta_{0}=\eta_{0}^{\prime}\), respectively [31, 93, 94, 95]. To identify the topological non-trivial and trivial phases in real space, in Fig. 2, we numerically calculate the OTOC dynamics with Eq. (1), which involves the backward evolution. Note that Fig. 2 includes the results for choosing different OTOC operators \(V\) and \(W\). It clearly shows that, both for the SSH model and Creutz model, the distinguishable OTOC dynamics appears in the non-trivial and trivial phases. Specifically, the OTOC evolves to a finite value and almost zero in the topological non-trivial and trivial phases, respectively [see the insets of Figs. 2(b,d,f)]. This relates to the physical mechanism that the information does scramble in the trivial phase, while this scrambling is suppressed immensely in the non-trivial phase. There exists a _zero-to-finite-value transition_ in the long-time limit of the OTOC, when the system enters into the non-trivial phase from the trivial phase. This distinguishable OTOC dynamics is robust to the initial state of the system (i.e., the operator \(V\)), which
Figure 1: (a) A schematic illustration of implementing the OTOC, which is equal to the fidelity \(F(t)=\mathrm{tr}[\rho_{0}\rho_{f}]\)[74, 85]. First, the initial state \(\rho_{0}\) evolves to the state \(\rho_{1}(t)\) under \(T_{-}=e^{-iHt}\). Second, the system changes from \(\rho_{1}(t)\) to \(\rho_{2}(t)\) after the operation of \(W\). Lastly, the system evolves backward to get the final state \(\rho_{f}\) under \(T_{+}=e^{iHt}\). (b, c) Schemes of the 1D SSH model and Creutz model, which describe the lattice systems with chiral symmetry. (d, e) Phase diagrams of the NN SSH model: the OTOC versus \(\epsilon t\) and \(\nu\) for (d) \(W=a_{1,A}^{\dagger}a_{1,A}\) and (e) \(W=\sum_{n=1}^{N-1}a_{n}^{\dagger}\sigma_{3}a_{n}\), where \(N=200\), \(|\psi_{0}\rangle=|1,A\rangle\), and \(d_{1}=d_{2}=0\). The topological non-trivial and trivial phases are denoted as TNP and TTP, respectively.
could be a single-site occupation or multi-site occupation state. Moreover, the averaged OTOC becomes discrete at the critical point, when the initial state is the eigenstate of the system whose eigenvalue has the lowest absolute value [96]. Figure 2 also shows that the OTOC witness is not limited by the choice of the operator \(W\). In our proposal, the operator \(W\) can be either a few-site (including single-site) operation on sublattice \(A\) (e.g., \(W=\sum_{l=1}^{L}a_{l,A}^{\dagger}a_{l,A}\), \(L=1,2,3\)) or a multi-site operation on sublattices \(A\) and \(B\) (e.g., \(W=\sum_{n=1}^{N-1}a_{n}^{\dagger}\sigma_{j}a_{n}\), \(j=2,3\)), and the chosen operators \(W\) neither commute nor anti-commute with the system Hamiltonian, i.e., \([W,H]_{\pm}\neq 0\).
To fully show the dependence of the OTOC witness on system parameters, we also calculate the analytical solution of \(\mathcal{O}(t)\) under the condition of \(N\gg 1\). Let's consider the NN SSH model as an example, and choose \(|\psi_{0}\rangle=\sum_{m=1}^{M}\frac{(-1)^{m-1}}{\sqrt{M}}|m,A\rangle\), where \(M=1\) corresponds to the case of single-site occupation state, i.e., \(|\psi_{0}\rangle=|1,A\rangle\). Here, \(m\) and \(A/B\) in state \(|m,A/B\rangle\) represent the \(m\)th cell and sublattice \(A/B\), respectively. Corresponding to \(W=\sum_{l=1}^{L}a_{l,A}^{\dagger}a_{l,A}\) and \(W=\sum_{n=1}^{N-1}a_{n}^{\dagger}\sigma_{3}a_{n}\), we respectively obtain [96]
\[\mathcal{O}(t)\!\approx\![1/\!\sum_{n=0}^{N}\nu^{2n}\!+\!\sum_{k=1}^{N}\!\frac {2\epsilon^{2}\nu^{2}\cos(\lambda_{+}^{(k)}t)}{(N+1)(\lambda_{\pm}^{(k)})^{2} }\sin^{2}(\frac{k\pi}{N+1})]^{4} \tag{3}\]
and
\[\mathcal{O}(t)\!\approx\![1/\!\sum_{n=0}^{N}\!\nu^{2n}\!+\!\sum_{k=1}^{N}\! \frac{2\epsilon^{2}\nu^{2}\cos(2\lambda_{+}^{(k)}t)}{(N+1)(\lambda_{\pm}^{(k) })^{2}}\sin^{2}(\frac{k\pi}{N+1})]^{2} \tag{4}\]
for \(L,M=1\). Here \(\lambda_{\pm}^{(k)}=\pm\epsilon[1+\nu^{2}+2\nu\cos(\frac{k\pi}{N+1})]^{1/2}\) and \(k=1,2,\ldots,N\). Note that the above equations require \(\nu\neq 0\), and \(\nu=0\) means that the hopping cannot occur in the intracells, corresponding to \(\mathcal{O}(t)=1\). The similar analytical results for \(L,M>1\) are shown in the supplementary material [96]. As shown in Figs. 1(a,b), the analytical solutions also present a _zero-to-finite-value transition_ of OTOC at the critical point of TPTs. This conclusion is valid for both the cases of choosing \(W\) as a single-site operation and a multi-site operation. Figures 2(a,b) show a very good agreement between the analytical solutions and the fully numerical simulations, which demonstrates the validity of our solutions.
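These results can be cross-checked with a compact numerical evaluation of Eq. (1) in the single-particle sector of the clean NN SSH chain: for \(|\psi_{0}\rangle=|1,A\rangle\) and \(W=a_{1,A}^{\dagger}a_{1,A}\), Eq. (1) reduces to \(\mathcal{O}(t)=|\langle 1,A|e^{-iHt}|1,A\rangle|^{4}\). The sketch below (with assumed parameters \(\epsilon=1\), an open chain, and \(N=200\) cells) evaluates this by exact diagonalization; it is a minimal illustration rather than the full calculation behind Fig. 2.

```python
import numpy as np

def ssh_hamiltonian(N, nu, eps=1.0):
    """Single-particle NN SSH chain with N cells (2N sites), open boundaries.
    Intracell hopping eps*nu, intercell hopping eps (clean limit of Eq. (2a))."""
    dim = 2 * N
    H = np.zeros((dim, dim))
    for n in range(N):
        A, B = 2 * n, 2 * n + 1
        H[A, B] = H[B, A] = eps * nu         # intracell bond
        if n < N - 1:
            H[B, A + 2] = H[A + 2, B] = eps  # intercell bond
    return H

def otoc(H, times, site=0):
    """O(t) = |<site| exp(-iHt) |site>|^4, i.e. Eq. (1) for rho0 = |site><site|
    and W the occupation of that site, evaluated in the one-particle sector."""
    vals, vecs = np.linalg.eigh(H)
    weights = np.abs(vecs[site, :]) ** 2      # |<site|k>|^2
    amp = np.exp(-1j * np.outer(times, vals)) @ weights
    return np.abs(amp) ** 4

times = np.linspace(0.0, 200.0, 2001)
for nu in (0.5, 1.5):        # nu < 1: topological non-trivial, nu > 1: trivial
    Ot = otoc(ssh_hamiltonian(200, nu), times)
    print(f"nu = {nu}: long-time OTOC ~ {Ot[len(Ot)//2:].mean():.3f}")
```

The printed long-time averages are expected to be finite for \(\nu<1\) (topological non-trivial phase) and close to zero for \(\nu>1\) (trivial phase), in line with the zero-to-finite-value transition and the \(N\gg 1\) limit of Eq. (3).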
Now let's discuss the influence of disorder on our proposal by choosing the NN SSH model as an example. The proposed OTOC witness for identifying the TPTs is also suitable for _disordered systems_. As shown in Figs. 3(a,b), \(\mathcal{O}(t\rightarrow\infty)\) still undergoes the _zero-to-finite-value transition_ along with the occurrence of the TPTs, even when weak disorder is introduced into the system. In terms of information, this transition originally comes from the topological locality in the non-trivial phase. Specifically, the information scrambling occurs in the trivial phase, and is suppressed immensely in the non-trivial phase. Similar as the case of no disorder, this result is robust to the choices of the operator \(W\). Figures 3(a,b) also show that the above distinguishability of the OTOC dynamics disappears in the strong disorder regime (e.g., \(d>4\)). Physically, this is because the TPTs, together with the symmetry-protected boundary state, will disappear as the disorder is too large. Figures 3(c,d) further demonstrate the vanishing of the topological non-trivial phase induced by strong disorder. Moreover, the proposed OTOC witness can also be considered as an order parameter of the topological phase diagram, and predict topological Anderson insulator physics [96]. It is consistent with previous works in Refs. [20; 22], which further verify the validity of our OTOC witness.
_Detecting TPTs in the systems without chiral symmetry.--_The proposed OTOC witness for identifying the TPT is not limited to the above systems with chiral symmetry, but is applicable for the systems without chiral
symmetry, such as 2D lattice systems described by the Haldane model and Qi-Wu-Zhang model. As shown in Fig. 4(a), the Haldane model on the honeycomb lattice has Hamiltonian [108; 109]
\[H_{\rm ha} = \eta_{1}\sum_{\langle j,j^{\prime}\rangle}c_{j}^{\dagger}c_{j^{ \prime}}+\eta_{2}\sum_{\langle\langle j,j^{\prime}\rangle\rangle}e^{is_{jj^{ \prime}\phi}}c_{j}^{\dagger}c_{j^{\prime}}+\mu s^{\prime}\sum_{j}c_{j}^{\dagger }c_{j}, \tag{5}\]
where \(c_{j}^{\dagger}\) (\(c_{j}\)) is the creation (annihilation) operator of the \(j\)th site, and the summation indices cover all sites. The symbol \(\mu\) in the last term denotes the sublattice potential, where \(s^{\prime}=+1\) and \(s^{\prime}=-1\) correspond to sublattices \(A\) and \(B\), respectively. Here, \(\eta_{1}\) and \(\eta_{2}\) are the real-valued nearest- and next-nearest-neighbor hopping amplitudes, respectively. The next-nearest-neighbor hopping contains the phases \(s_{jj^{\prime}}\phi\) with \(s_{jj^{\prime}}=\pm 1\), which can break the time-reversal symmetry. The system has no chiral symmetry and is a paradigmatic example of a 2D lattice featuring TPTs. For example, for \(\phi=\pi/2\) the parameter range \(|\mu/\eta_{2}|<3\sqrt{3}\) corresponds to the topological non-trivial phase, while other values of \(\mu/\eta_{2}\) give the trivial phase. Similar to the procedure used in 1D systems with chiral symmetry, we numerically calculate the OTOC dynamics with Eq. (1) to identify the occurrence of TPTs in real space. As shown in Fig. 4(b), the _zero-to-finite-value transition_ of \(\mathcal{O}(t\rightarrow\infty)\) can still be observed when the system enters into the topological nontrivial phase from the trivial phase. Similar results can also be obtained in the system described by the Qi-Wu-Zhang model [96].
_Application to the second-order TPTs._--Higher-order topological insulators, as an extension of the topological insulators, have recently attracted extensive attention [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. High-order TPTs usually can be identified by detecting the boundary states in real space. For example, the topological protected corner states have been used to identify the second-order TPT in a 2D system [110; 111; 112; 113]. Here, our proposed OTOC witness is also applicable for detecting second-order TPTs. As shown in Fig. 4(c), we take the extended 2D SSH model with non-zero gauge flux as an example, and its Hamiltonian reads [113]
\[H_{\rm 2s}(\mathbf{k})= (\nu^{\prime}+w\cos k_{y})\tau_{0}\otimes\sigma_{1}-w\sin k_{y} \tau_{3}\otimes\sigma_{2}\] \[-(\nu^{\prime}+w\cos k_{x})\tau_{2}\otimes\sigma_{2}-w\sin k_{x} \tau_{1}\otimes\sigma_{2}, \tag{6}\]
where \(\mathbf{k}=\{k_{x},k_{y}\}\) is the wave vector, and \(\pm\nu^{\prime}\) (\(\pm w\)) is the intracell (intercell) hopping strength. This system features a second-order TPT as the value of \(\nu^{\prime}/w\) increases, with \(\nu^{\prime}<w\) and \(\nu^{\prime}>w\) corresponding to the topological non-trivial and trivial phases, respectively. To identify the occurrence of second-order TPTs, in Fig. 4(d), we numerically calculate the OTOC in the lattice system with \(20\times 20\) cells for different choices of the OTOC operator \(W\). Figure 4(d) clearly shows the distinguishable OTOC dynamics in the topological non-trivial and trivial phases. Both for \(W=a_{1,1}^{\dagger}a_{1,1}\) and \(W=a_{1,1}^{\dagger}a_{1,1}+a_{1,3}^{\dagger}a_{1,3}+a_{3,1}^{\dagger}a_{3,1}\), the _zero-to-finite-value transition_ of \(\mathcal{O}(t\rightarrow\infty)\) appears at the critical point of
Figure 3: (a,b) The dependence of \(\mathcal{O}(t\rightarrow\infty)\) on \(\nu\) for different disorder strengths \(d\) when (a) \(W=a_{1,A}^{\dagger}a_{1,A}\) and (b) \(W=\sum_{n=1}^{N-1}a_{n}^{\dagger}a_{n}a_{n}\). (c) The value of \(\mathcal{O}(t\rightarrow\infty)\) versus \(d\) for different choices of the operator \(W\) when \(\nu=0.2\). (d) The evolution of the OTOC for different \(d\) indicated by the circles in (c). Here all data are averaged over 30 independent disorder configurations, and we have chosen \(N=200\), \(d_{2}=2d_{1}=d\), and \(|\psi_{0}\rangle=|1,A\rangle\). The TNPs and TTPs are indicated by the gray shadings and write areas, respectively.
the second-order TPT. Moreover, the system is initially prepared in the corner site \((1,1)\) (i.e., \(|\psi_{0}\rangle=|1,1\rangle\)), which is experimentally feasible. Here \((x,y)\) represents a lattice point in the square lattice, and \(|x,y\rangle\) denotes the state occupying the site \((x,y)\). The creation (annihilation) operator of the site \((x,y)\) is denoted by \(a_{x,y}^{\dagger}\) (\(a_{x,y}\)).
_Experimental implementation and conclusions._--Regarding experimental implementations, the trapped ion [114, 115, 116, 117, 117] is an ideal candidate for our proposal. We consider a set of \(2N\) trapped ions with excited and ground states arranged along a 1D chain as the SSH model. First, the system is initialized in \(\rho_{0}=|1,A\rangle\langle 1,A|\) by applying a \(\pi\) pulse to excite the first ion in the chain into its excited state [115, 116, 117]. Then, one should make the system evolve under the Hamiltonian for a time \(t\) to the state \(\rho_{1}(t)=e^{-iHt}\rho_{0}e^{iHt}\). Subsequently, applying the operator \(W\) to get \(\rho_{2}(t)=W^{\dagger}\rho_{1}(t)W\). When the operator \(W\) is a single-site operator on sublattice \(A\), it can be achieved by removing the polarizations of the ions except for that of the first ions by using selective pulses [115, 116, 117, 85]. Next, inverting the sign of \(H\) by the spin echo technique (i.e., applying a \(\pi\) pulse to reverse the polarization of one of the ions) [90] and making the system evolve again for \(t\) to obtain the final state \(\rho_{f}=e^{iHt}\rho_{2}(t)e^{-iHt}\)[91, 92]. Finally, the OTOC can be obtained by measuring the overlap of the final state with respect to the initial state via a fluorescence detection [85, 117], similar as the many-body Loschmidt echo technique. For 2D lattice systems, the OTOC measurement is similar to that of the 1D lattice systems except for the construction of the model. Note that our proposal is not limited to this particular architecture, and could be implemented or adapted in a variety of platforms that have full local quantum control [86, 87, 88, 118, 119, 120, 121, 122, 123], such as a nuclear magnetic resonance quantum simulator [86, 87, 88] and superconducting qubit [118, 119, 120].
In conclusion, we have proposed a zero-temperature OTOC witness in real space for identifying \(\mathbb{Z}\)-type TPTs in general lattice systems with or without chiral symmetry. Our proposal is robust to the choices of the initial state of the system and the used operators in OTOC. It is also suitable for _disordered systems_, and can predict topological Anderson insulator physics in the strong disorder regime. Moreover, the proposed OTOC witness can be used to detect not only first-order TPTs, but also second-order TPTs. Applying it into non-Hermitian systems [96], the TPTs can be identified without implementing the transition from non-Bloch to Bloch theory. The generality of our proposal leads to that the proposed OTOC witness has predictive power in detecting TPTs. For example, we could construct the OTOC witness by preparing the system initially being in the first site and choosing a single-site operation as the \(W\) operator, even in a situation where we don't already understand the structure of a 1D lattice.
We thank Prof. T. Liu and Prof. J.-H. Gao for helpful discussions. This work is supported by the National Key Research and Development Program of China grant 2021YFA1400700, the National Science Foundation of China (Grants No. 11974125, No. 12205109, No. 12147143), and the China Postdoctoral Science Foundation No. 2021M701323. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], the Japan Society for the Promotion of Science (JSPS) [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134], the Army Research Office (ARO) (Grant No. W911NF-18-1-0358), the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06.
|
2304.12075 | Chebotarëv's nonvanishing minors for eigenvectors of random matrices
and graphs | For a matrix $\mathbf{M} \in \mathbb{K}^{n \times n}$ we establish a
condition on the Galois group of the characteristic polynomial
$\varphi_\mathbf{M}$ that induces nonvanishing of the minors of the eigenvector
matrix of $\mathbf{M}$. For $\mathbb{K}=\mathbb{Z}$ recent results by Eberhard
show that, conditionally on the extended Riemann hypothesis, this condition is
satisfied with high probability and hence with high probability the minors of
eigenvector matrices of random integer matrices are nonzero. For random graphs
this yields a novel uncertainty principle, related to Chebotar\"ev's theorem on
the roots of unity and results from Tao and Meshulam. We also show the
application in graph signal processing and the connection to the rank of the
walk matrix. | Tarek Emmrich | 2023-04-24T13:12:33Z | http://arxiv.org/abs/2304.12075v3 | # Chebotarev's nonvanishing minors for eigenvectors of random matrices and graphs
###### Abstract
For a matrix \(M\in K^{n\times n}\) we establish a condition on the Galois group of the characteristic polynomial \(\varphi\) that induces nonvanishing of the minors of the eigenvector matrix of \(M\). For \(K=\mathbb{Z}\), recent results by Eberhard show that, conditionally on the extended Riemann hypothesis, this condition is satisfied with high probability1 and hence with high probability the minors of eigenvector matrices of random integer matrices are nonzero. For random graphs this yields a novel uncertainty principle, related to Chebotarev's theorem on the roots of unity and results from Tao and Meshulam.
Footnote 1: We say ”with high probability” for probability \(1-o(1)\) as \(n\to\infty\).
## 1 Introduction
For the Fourier matrix there is Chebotarev's famous theorem on the nonvanishing of the minors, see [9] for a survey.
**Theorem 1**.: _Let \(p\) be a prime and \(F\) be the Fourier matrix of order \(p\), i.e. \(F_{i,j}=\omega^{(i-1)(j-1)}\) for \(\omega=e^{\frac{2\pi i}{p}}\). Then all minors of \(F\) are nonzero._
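As a quick illustration of Theorem 1, the following sketch evaluates all minors of the Fourier matrix for a small prime in floating-point arithmetic (an exact verification would use symbolic arithmetic instead); the choice \(p=5\) is arbitrary.

```python
from itertools import combinations
import numpy as np

p = 5
omega = np.exp(2j * np.pi / p)
F = np.array([[omega ** (i * j) for j in range(p)] for i in range(p)])

smallest = np.inf
for m in range(1, p + 1):
    for rows in combinations(range(p), m):
        for cols in combinations(range(p), m):
            smallest = min(smallest, abs(np.linalg.det(F[np.ix_(rows, cols)])))

print(f"smallest |minor| for p = {p}: {smallest:.6f}")   # strictly positive by Theorem 1
```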
From the viewpoint of spectral graph theory, the matrix \(F\) is a possible eigenvector matrix of the circle graph and this yields the uncertainty principle
\[\|f\|_{0}+\|\hat{f}\|_{0}\geq p+1\]
for any function \(f\) on the circle graph and its Graph Fourier transform \(\hat{f}=F^{*}f\), see Tao [10] and Meshulam [7] for uncertainty principles. We will prove the following criterion on nonvanishing of the minors of the eigenvector matrix of any matrix \(M\in K^{n\times n}\). Let \(\varphi\) be the characteristic polynomial of \(M\) and \(L\) its splitting field. We denote by \(\operatorname{Gal}(\varphi)\) the Galois group of the field extension \(K\subseteq L\). Furthermore, if \(M\) is diagonalizable, let \(U\) be the matrix with columns \(u_{i}\) that are eigenvectors corresponding to the eigenvalue \(\lambda_{i}\).
**Theorem 2**.: _If \(\operatorname{Gal}(\varphi)\geq A_{n}\), then all minors of \(U\) are nonzero._
Note that for \(\operatorname{Gal}(\varphi)\geq A_{n}\), \(M\) is always diagonalizable. The two widely studied operators on graphs are the adjacency matrix and the Laplacian matrix. For directed graphs the directed adjacency matrix seems to be the most interesting.
For a fixed finitely supported measure \(\mu\) on \(\mathbb{Z}\), let \(M\) be a random matrix with \(m_{i,j}\sim\mu\). In this case Eberhard [4] proved that, conditionally on the extended Riemann hypothesis, with high probability \(\varphi\) is irreducible and \(\operatorname{Gal}(\varphi)\geq A_{n}\). This directly leads to the following corollary, which induces an uncertainty principle for directed graphs and their adjacency matrices.
**Corollary 3**.: _Assume the extended Riemann hypothesis. For a random matrix \(M\), with \(m_{i,j}\sim\mu\), with high probability all minors of \(U\) are nonzero._
Following Eberhard's results, Ferber, Jain, Sah and Sawhney [5] showed that the Galois group of a random symmetric matrix is almost surely transitive, which, as we will see, is not sufficient for minors of arbitrary size to be nonzero, but is sufficient for minors of size one.
**Corollary 4**.: _Let \(M\) be the adjacency matrix of a random Erdos-Renyi graph \(G\sim G(n,p)\) and assume the extended Riemann hypothesis. Then with high probability all eigenvectors of \(M\) do not contain a zero._
Working with the Laplacian matrix of a random graph has the difficulty that the rank of the Laplacian matrix of a connected graph is \(n-1\) by default. We can overcome this problem by using the factorization \(\varphi=\lambda\cdot P(\lambda),\) but still known results for random matrices do not hold. For applications, the most interesting case is an uncertainty principle that follows directly from our results.
**Theorem 5**.: _If \(\operatorname{Gal}(P(\lambda))\geq A_{n-1}\), then all minors of \(U\) are nonzero, i.e._
\[\|f\|_{0}+\|\hat{f}\|_{0}\geq n+1\]
_for any signal \(f\neq 0\) on the graph \(G\). Here the graph Fourier transform \(\hat{f}\) is given by \(\hat{f}=U^{*}f\)._
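The sketch below illustrates the conclusion of Theorem 5 on a single random graph: it computes an orthonormal eigenvector matrix \(U\) of the Laplacian and reports the smallest minor in absolute value. The Erdos-Renyi parameters are illustrative, the Galois-group hypothesis is not verified here, and for graphs with symmetries or repeated Laplacian eigenvalues some minors can legitimately vanish, which the check would reveal.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n, prob = 7, 0.5
A = np.triu((rng.random((n, n)) < prob).astype(float), k=1)
A = A + A.T                                   # adjacency matrix of G(n, prob)
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian

_, U = np.linalg.eigh(L)                      # columns of U are orthonormal eigenvectors
smallest = np.inf
for m in range(1, n + 1):
    for rows in combinations(range(n), m):
        for cols in combinations(range(n), m):
            smallest = min(smallest, abs(np.linalg.det(U[np.ix_(rows, cols)])))

print(f"smallest |minor| of U: {smallest:.3e}")
# If all minors are nonzero, ||f||_0 + ||U^* f||_0 >= n + 1 for every signal f != 0 on this graph.
```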
The Fourier matrix can be seen as the eigenvector matrix of the shift matrix. Its characteristic polynomial is
\[\lambda^{p}-1=(\lambda-1)\cdot\sum_{k=0}^{p-1}\lambda^{k}=:(\lambda-1)\cdot P(\lambda)\]
with splitting field \(L=\mathbb{Q}[\lambda]/P(\lambda)\) and Galois group \((\mathbb{Z}/p\mathbb{Z})^{\times},\) which is surprisingly not sufficient for our results to hold.
The eigenvectors of random matrices with continuous distribution have been widely studied in the past, for a survey see [8], but as far as we know the minors have not been studied in the discrete case. In the discrete setting the most common applications are graphs. Brooks and Lindenstrauss [2] proved a non-localization result for large regular graphs, i.e. small subsets of vertices only contain a small part of the mass of an eigenvector. Luh and O'Rourke [6] proved that the adjacency matrix is almost surely controllable.
**Outline.** After introducing the necessary background on Galois theory in Section 2, we will establish our main results in Section 3. Indeed we will show a slightly more general theorem. We discuss in Section 4 why this might be useful for Laplacian matrices, where stochastic results are not known yet. We end by giving examples.
## 2 Background
In this section we will shortly introduce the background of Galois theory. For a polynomial \(\varphi\in\mathbb{Q}[\lambda]\) of degree \(n\) the fundamental theorem of algebra tells us that \(\varphi\) has exactly \(n\) roots \(\lambda_{i},\)\(i=1,\ldots,n,\) over the complex numbers \(\mathbb{C}\). Instead of working over the complex numbers, we will work over the _splitting field_\(L=\mathbb{Q}(\lambda_{i}\colon i\in[n])\) of \(\varphi,\) which is the smallest field extension of \(\mathbb{Q}\) that contains all roots of \(\varphi.\) It can also be constructed inductively by the step \(\mathbb{Q}\subseteq K_{1}:=\mathbb{Q}[\lambda]/R(\lambda)\) that adds at least one root of \(\varphi,\) where \(R(\lambda)\) is an irreducible factor of \(\varphi\). After at most \(n\) steps all roots of \(\varphi\) are contained in the resulting field. The structure of this adding process is encoded in the Galois group
\[\operatorname{Gal}(\varphi):=\operatorname{Aut}_{\mathbb{Q}}(L)\subseteq \operatorname{Sym}(\{\lambda_{1},\ldots,\lambda_{n}\})\cong S_{n}.\]
The same construction can also be applied to any field \(K\) and any \(\varphi\in K[\lambda],\) e.g. the Galois group of the field extension \(\mathbb{F}_{p}\subseteq\mathbb{F}_{p^{e}}\) is the cyclic group of order \(e\) generated by the Frobenius homomorphism \(x\mapsto x^{p}.\)
In the generic case over the rational numbers, Hilbert's irreducibility theorem tells us that
\[\operatorname{Gal}(\varphi)\cong S_{n}.\]
Two algebraic numbers \(\mu_{1},\mu_{2}\) are called _conjugate_, if they have the same minimal polynomial \(\varphi\) over \(\mathbb{Q}\). For two conjugate algebraic numbers \(\mu_{1},\mu_{2}\), there is a \(g\in\operatorname{Gal}(\varphi)\) with \(g(\mu_{1})=\mu_{2}\). We call a group \(H\subseteq S_{n}\)_\(m\)-transitive_, if for all \(I,I^{\prime}\in\binom{[n]}{m}\) there exists a \(g\in H\) such that \(g(I)=I^{\prime}\). The prior fact tells us that the Galois group of an irreducible polynomial is \(1\)-transitive (or just transitive). Under the assumption of the classification of finite simple groups, Cameron [3] proved already in 1981 that for \(m\geq 6\) there is no \(m\)-transitive group other than \(A_{n}\) and \(S_{n}\).
## 3 Minors and Galois groups
Let \(M\in K^{n\times n}\) be a diagonalizable matrix, let \(\varphi\) be its characteristic polynomial with splitting field \(L\). We denote the eigenvalues and eigenvectors by \(\lambda_{1},\ldots,\lambda_{n}\in L\) and \(u_{1},\ldots,u_{n}\in L^{n}\). The eigenvector
matrix \(U\) is the matrix with columns \(u_{i}\). For any two conjugate elements \(\lambda_{i},\lambda_{j}\) there is a \(g\in\operatorname{Gal}(\varphi)\) such that \(g(\lambda_{i})=\lambda_{j}\). Applying this \(g\) to the eigenvector equation \(Mu_{i}=\lambda_{i}u_{i}\) we get
\[M^{g}u_{i}^{g}=\lambda_{j}u_{i}^{g}. \tag{1}\]
Since \(M\in K^{n\times n}\) and \(g_{|K}\) is the identity, we have \(M^{g}=M\) and hence equation (1) tells us \(u_{i}^{g}=u_{j}\). We denote by \(u_{i}^{J}\) the restriction of the \(i\)-th eigenvector to the rows with indices in \(J\). We are now ready to prove Theorem 2.
_Proof of Theorem 2._ Assume there are sets \(I,J\in\binom{[n]}{m}\) for some \(m\in[n]\) such that \(\det(U_{J,I})=0\). Let \(a_{1},\ldots,a_{m}\in L\) be coefficients, not all zero, such that
\[\sum_{i\in I}a_{i}u_{i}^{J}=0^{J}.\]
Since \(\operatorname{Gal}(\varphi)\) is \(m\)-transitive, for every \(I^{\prime}\in\binom{[n]}{m}\) there exists \(g\in\operatorname{Gal}(\varphi)\) with \(g(I)=I^{\prime}\). This yields
\[\sum_{i\in I^{\prime}}a_{i}^{g}u_{i}^{J}=0^{J}\]
for all \(I^{\prime}\in\binom{[n]}{m}\) and hence \(\operatorname{rank}(U_{J,[n]})=m-1\), a contradiction to the rows of \(U\) being linearly independent. \(\square\)
Corollary 3 now follows directly from Eberhard [4, Theorem 1.3] and Corollary 4 follows from the fact that 1-transitivity is equivalent to irreducibility and from Ferber, Jain, Sah and Sawhney [5, Theorem 1.1].
For the Laplacian matrix \(L=D-A\) some things slightly change. If \(c\) is the number of connected components of \(G\), then \(\operatorname{rank}(L)=n-c\). Before working with the Laplacian matrix, we state the more general and technical version of Theorem 2 for symmetric matrices. For \(\mathcal{S}\subseteq\binom{[n]}{m}\) we define \(\mathcal{T}_{m}(\mathcal{S})\) by the equivalence \(S\in\mathcal{T}_{m}(\mathcal{S})\) if and only if one of the following condition holds
1. \(S\in\mathcal{S}\),
2. \(S=(S^{\prime}\cap S^{\prime\prime})\cup\{j\}\) for \(S^{\prime},S^{\prime\prime}\in\mathcal{S}\), \(j\notin S^{\prime}\cup S^{\prime\prime}\) and \(\#(S^{\prime}\cap S^{\prime\prime})=m-1\), or
3. \(S\in\binom{S^{\prime}\cup S^{\prime\prime}}{m}\) for \(S^{\prime},S^{\prime\prime}\in\mathcal{S}\) and \(\#(S^{\prime}\cap S^{\prime\prime})=m-1\).
For \(\mathcal{S}\subseteq\binom{[n]}{m}\) we define \(\Gamma(\mathcal{S})\) by
\[\Gamma(\mathcal{S})=\left\{\gamma(S):S\in\mathcal{S}\text{ and }\gamma\in \operatorname{Gal}(\varphi)\right\}.\]
For any \(\mathcal{S}\subseteq\binom{[n]}{m}\) we call \(\bar{\mathcal{S}}\) the closure of \(S\) under \(\mathcal{T}_{m}\) and \(\Gamma\), i.e. the stable set of the chain arising from alternatingly applying \(\mathcal{T}_{m}\) and \(\Gamma\). This is well defined since we have the finite maximal element \(\binom{[n]}{m}\). We furthermore define the following.
**Definition 6**.: For \(1\leq m\leq\lfloor\frac{n}{2}\rfloor\) we call a group \(H\subseteq S_{n}\)_dependence permuting_ of order \(m\), if for any \(S\in\binom{[n]}{m}\) the closure \(\bar{\mathcal{S}}\) of \(\mathcal{S}=\{S\}\) equals \(\binom{[n]}{m}\).
Note that the operation \(\mathcal{T}_{m}\) is just the usual inheriting of linear independence on sets \(S^{\prime},S^{\prime\prime}\) of size \(m\), where the second rule comes from a symmetry argument that we can use for orthogonal matrices. Let from now on \(U\in L^{n\times n}\) be an orthogonal2 matrix, e.g. the eigenvector matrix of a symmetric matrix \(M\in K^{n\times n}\) for a number field \(K\).
Footnote 2: The matrix could also be chosen orthonormal, but \(\|u\|_{2}\) does not need to be an element of \(L\).
**Lemma 7**.: _If there are sets \(W,S\in\binom{[n]}{k}\) such that the vectors \((u_{i}^{W})_{i\in S}\) are linearly dependent, then the vectors \((u_{i}^{[n]\setminus W})_{i\in[n]\setminus S}\) are also linearly dependent._
_Proof._ We know that
\[\sum_{i\in S}a_{i}u_{i}^{W}=0^{W}.\]
Since for \(j\notin S\) we have \(\langle u_{j},u_{i}\rangle=0\) for all \(i\in S\) it follows that
\[0 =\langle u_{j},\sum_{i\in S}a_{i}u_{i}\rangle\] \[=\langle u_{j}^{W},\sum_{i\in S}a_{i}u_{i}^{W}\rangle+\langle u_{j}^{\bar{W}},\sum_{i\in S}a_{i}u_{i}^{\bar{W}}\rangle\] \[=0+\langle u_{j}^{\bar{W}},\sum_{i\in S}a_{i}u_{i}^{\bar{W}}\rangle.\]
Note that \(\sum_{i\in S}a_{i}u_{i}^{\bar{W}}\neq 0^{\bar{W}}\), since otherwise the full vectors \((u_{i})_{i\in S}\) would be linearly dependent, and that the vectors \(u_{j}^{\bar{W}}\), \(j\notin S\), lie in the hyperplane orthogonal to \(\sum_{i\in S}a_{i}u_{i}^{\bar{W}}\) and hence must be linearly dependent.
**Lemma 8**.: _Let \(\mathrm{Gal}(\varphi)\) be dependence permuting of order \(m\) and \(W,S\in\binom{[n]}{m}\) be such that_
\[\det\left((u_{i}^{W})_{i\in S}\right)=0.\]
_Then_
\[\mathrm{rank}\left((u_{i}^{W})_{i\in[n]}\right)=m-1.\]
Proof.: Let \(a_{i}\in L\), \(i\in S\), be coefficients, not all zero, such that
\[\sum_{i\in S}a_{i}u_{i}^{W}=0^{W}.\]
For each \(\gamma\in\mathrm{Gal}(\varphi)\) we immediately get
\[\sum_{i\in\gamma(S)}a_{i}^{\gamma}u_{i}^{W}=0^{W}.\]
and hence we know that the eigenvectors of \(\gamma(S)\) are also linearly dependent on \(W\). If there are two sets \(S^{\prime},S^{\prime\prime}\) with \(\#(S^{\prime}\cap S^{\prime\prime})=m-1\) such that the vectors \((u_{i}^{W})_{i\in S^{\prime}}\) and \((u_{i}^{W})_{i\in S^{\prime\prime}}\) are each linearly dependent we can immediately find a linear dependence on each set \(S\in\binom{S^{\prime}\cup S^{\prime\prime}}{m}\). Using Lemma 7 we also find a linear dependence on each
\[S\in\binom{[n]\setminus(S^{\prime}\cap S^{\prime\prime})}{n-m}\]
on the vertex set \(\bar{W}\). These sets can also be described as
\[S=([n]\setminus(S^{\prime}\cap S^{\prime\prime}))\setminus\{j\}\]
for \(j\notin(S^{\prime}\cup S^{\prime\prime})\). Again using Lemma 7 these dependencies yield a dependence on \((S^{\prime}\cap S^{\prime\prime})\cup\{j\}\) for \(j\in[n]\setminus(S^{\prime}\cap S^{\prime\prime})\). These two rules are exactly the operations \(\mathcal{T}_{m}\) and \(\Gamma\), hence we have a linear dependence for every \(\tilde{S}\in\binom{[n]}{m}\) and thus
\[\mathrm{rank}\left((u_{i}^{W})_{i\in[n]}\right)=m-1.\]
**Theorem 9**.: _If \(\mathrm{Gal}(\varphi)\) is dependence permuting of order \(m\), then we have_
\[\det\left((u_{i}^{W})_{i\in S}\right)\neq 0\]
_for all \(W,S\in\binom{[n]}{m}\)._
Proof.: If \(\det\left((u_{i}^{W})_{i\in S}\right)=0\) for some \(W,S\), we know by Lemma 8 that \(\mathrm{rank}\left((u_{i}^{W})_{i\in[n]}\right)=m-1\), a contradiction.
### Laplacian matrix
The Laplacian matrix of a connected graph has the property that \(L\cdot\mathds{1}=0\), hence \(\lambda_{1}=0\) and \(u_{1}=\frac{1}{\sqrt{n}}\mathds{1}\). We will work with the factorization \(\varphi=\lambda\cdot P(\lambda)\) and analyze the minors of \(U\) under conditions on the Galois group \(\operatorname{Gal}(P)\subseteq S_{n-1}\) of \(P\).
**Theorem 10**.: _If \(\operatorname{Gal}(P)\) is dependence permuting of order \(m\), then we have_
\[\det\left((u_{i}^{W})_{i\in S}\right)\neq 0\]
_for all \(W\in\binom{[n]}{m}\) and \(S\in\binom{\{2,\ldots,n\}}{m}\)._
_Proof._ If \(\det\left((u_{i}^{W})_{i\in S}\right)=0\) for some \(W,S\), we know by Lemma 8 that \(\operatorname{rank}\left((u_{i}^{W})_{i\in[n]}\right)=m-1\) and thus there are coefficients \(a_{w}\in K\) for \(w\in W\) such that
\[\sum_{w\in W}a_{w}\tilde{u}(w)=0,\]
where \(\tilde{u}(w)=(u_{i}(w))_{i=2,\ldots,n}\). For \(v\notin W\) we know that \(\langle u(v),u(w)\rangle=0\) and hence
\[0 =\langle u(v),\sum_{w\in W}a_{w}u(w)\rangle\] \[=\langle u_{1}(v),\sum_{w\in W}a_{w}u_{1}(w)\rangle+\langle\tilde {u}(v),\sum_{w\in W}a_{w}\tilde{u}(w)\rangle\] \[=u_{1}(v)\cdot\sum_{w\in W}a_{w}u_{1}(w).\]
Now since the vectors \(u(w)\) are linearly independent for \(w\in W\) we know that \(\sum_{w\in W}a_{w}u_{1}(w)\neq 0\) and together with \(u_{1}(v)=\frac{1}{\sqrt{n}}\neq 0\) this yields a contradiction. \(\square\)
Theorem 5 follows now from the fact that \(A_{n-1}\) and \(S_{n-1}\) are \(m\)-transitive and Lemma 7. Moreover, if \(P\) is irreducible, it also follows directly from Theorem 10 that all eigenvectors do not contain a zero. Furthermore, if \(\operatorname{Gal}(P)\) is dependence permuting of order \(m\) for each \(1\leq m\leq\frac{n}{2}\), then all minors of \(U\) are nonzero.
## 4 Discussion
Assuming the extended Riemann hypothesis, for random matrices the Galois group of the characteristic polynomial almost surely equals \(A_{n}\) or \(S_{n}\), so it might seem quite laborious to do all the work with dependence permuting groups. Following the introduction of Bhargava [1], we will now discuss why it still might be useful to have this vocabulary. For \(h\in\mathbb{N}\) there are \((2h+1)^{n}\) monic integer polynomials \(x^{n}+a_{1}x^{n-1}+\ldots+a_{0}\) of degree \(n\) and absolute value of the coefficients bounded by \(h\). Bhargava [1, Theorem 1] proved that there are \(\mathcal{O}(h^{n-1})\) polynomials with Galois group not \(S_{n}\). With \(a_{0}=0\) the lower bound can also be verified.
Those Galois groups of an irreducible polynomial that are not dependence permuting seem to be imprimitive. Interestingly, the number of polynomials with bounded coefficients that have an imprimitive Galois group was computed by Widmer [11] and has order \(\mathcal{O}(h^{n/2+2})\), which is a much smaller order than \(\mathcal{O}(h^{n-1})\), and it is conceivable that one can use arguments involving these terms to get rid of the assumption on the extended Riemann hypothesis or to prove results on the characteristic polynomial of the Laplacian matrix. We finally provide two examples.
**Example 11**.: The graph in figure 1 has the characteristic polynomial
\[\varphi=\lambda\cdot\left(\lambda^{8}-28\lambda^{7}+332\lambda^{6}-2170 \lambda^{5}+8516\lambda^{4}-20440\lambda^{3}+29105\lambda^{2}-22288\lambda+695 7\right).\]
The Galois group of \(P\) has order \(384\) and is given by
\[\operatorname{Gal}(P)\cong\left\{\sigma\in S_{8}\colon\sigma(i+4)=\begin{cases} \sigma(i)+4\text{, for }\sigma(1)\leq 4,\\ \sigma(i)-4\text{, for }\sigma(1)\geq 5,\end{cases}\quad\text{ for }1\leq i\leq 4 \right\}.\]
This group is imprimitive but the minors of any size are nonzero. We see that our criterion is not necessary.
**Example 12**.: The graph in figure 2 has the characteristic polynomial
\[\varphi=\lambda\cdot\left(\lambda^{6}-12\lambda^{5}+54\lambda^{4}-114\lambda^{3} +115\lambda^{2}-50\lambda+7\right).\]
The Galois group has order \(72\) and is given by
\[\operatorname{Gal}(P)\cong\left\{\sigma\in S_{6}\colon\sigma(\{1,2,3\})\in\{ \{1,2,3\},\{4,5,6\}\}\right\}.\]
This group is imprimitive and the vanishing minors of size \(3\) are supported on the columns \(\{1,2,3\}\) or \(\{4,5,6\}\).
|
2301.06806 | Convergence of First-Order Algorithms for Meta-Learning with Moreau
Envelopes | In this work, we consider the problem of minimizing the sum of Moreau
envelopes of given functions, which has previously appeared in the context of
meta-learning and personalized federated learning. In contrast to the existing
theory that requires running subsolvers until a certain precision is reached,
we only assume that a finite number of gradient steps is taken at each
iteration. As a special case, our theory allows us to show the convergence of
First-Order Model-Agnostic Meta-Learning (FO-MAML) to the vicinity of a
solution of Moreau objective. We also study a more general family of
first-order algorithms that can be viewed as a generalization of FO-MAML. Our
main theoretical achievement is a theoretical improvement upon the inexact SGD
framework. In particular, our perturbed-iterate analysis allows for tighter
guarantees that improve the dependency on the problem's conditioning. In
contrast to the related work on meta-learning, ours does not require any
assumptions on the Hessian smoothness, and can leverage smoothness and
convexity of the reformulation based on Moreau envelopes. Furthermore, to fill
the gaps in the comparison of FO-MAML to the Implicit MAML (iMAML), we show
that the objective of iMAML is neither smooth nor convex, implying that it has
no convergence guarantees based on the existing theory. | Konstantin Mishchenko, Slavomír Hanzely, Peter Richtárik | 2023-01-17T11:04:10Z | http://arxiv.org/abs/2301.06806v1 | # Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes
###### Abstract
In this work, we consider the problem of minimizing the sum of Moreau envelopes of given functions, which has previously appeared in the context of meta-learning and personalized federated learning. In contrast to the existing theory that requires running subsolvers until a certain precision is reached, we only assume that a finite number of gradient steps is taken at each iteration. As a special case, our theory allows us to show the convergence of First-Order Model-Agnostic Meta-Learning (FO-MAML) to the vicinity of a solution of Moreau objective. We also study a more general family of first-order algorithms that can be viewed as a generalization of FO-MAML. Our main theoretical achievement is a theoretical improvement upon the inexact SGD framework. In particular, our perturbed-iterate analysis allows for tighter guarantees that improve the dependency on the problem's conditioning. In contrast to the related work on meta-learning, ours does not require any assumptions on the Hessian smoothness, and can leverage smoothness and convexity of the reformulation based on Moreau envelopes. Furthermore, to fill the gaps in the comparison of FO-MAML to the Implicit MAML (iMAML), we show that the objective of iMAML is neither smooth nor convex, implying that it has no convergence guarantees based on the existing theory.
## 1 Introduction
Efficient optimization methods for empirical risk minimization have helped the breakthroughs in many areas of machine learning such as computer vision Krizhevsky et al. (2012) and speech recognition Hinton et al. (2012). More recently, elaborate training algorithms have enabled fast progress in the area of meta-learning, also known as learning to learn Schmidhuber (1987). At its core lies the idea that one can find a model capable of retraining for a new task with just a few data samples from the task. Algorithmically, this corresponds to solving a bilevel optimization problem Franceschi et al. (2018), where the inner problem corresponds to a single task, and the outer problem is that of minimizing the post-training error on a wide range of tasks.
The success of Model-Agnostic Meta-Learning (MAML) and its first-order version (FO-MAML) Finn et al. (2017) in meta-learning applications has propelled the development of new gradient-based meta-learning methods. However, most new algorithms effectively lead to new formulations of meta-learning. For instance, iMAML Rajeswaran et al. (2019) and proximal meta-learning Zhou et al. (2019) define two MAML-like objectives with implicit gradients, while Reptile Nichol et al. (2018) was proposed without defining any objective at all. These dissimilarities cause fragmentation of the field and make it particularly hard to have a clear comparison of meta-learning theory. Nonetheless, having a good theory helps to compare algorithms as well as identify and fix their limitations.
Unfortunately, for most of the existing methods, the theory is either incomplete, as is the case with iMAML, or even completely missing. In this work, we set out to at least partially mitigate this issue by proposing a new analysis for minimization of Moreau envelopes. We show that a general family of algorithms with multiple gradient steps is stable on this objective and, as a special case, we obtain results even for FO-MAML. Previously, FO-MAML was viewed as a heuristic to approximate MAML Fallah et al. (2020), but our approach reveals that FO-MAML can be regarded as an algorithm for the sum of Moreau envelopes. While both perspectives show only approximate convergence, the main justification for the sum of Moreau envelopes is that it requires unprecedentedly
mild assumptions. In addition, the Moreau formulation of meta-learning does not require Hessian information and is easily implementable by any first-order optimizer, which Zhou et al. (2019) showed to give good empirical performance.
### Related work
MAML Finn et al. (2017) has attracted a lot of attention due to its success in practice. Many improvements have been proposed for MAML, for instance, Zhou et al. (2020) suggested augmenting each group of tasks with its own global variable, and Antoniou et al. (2018) proposed MAML++ that uses intermediate task losses with weights to improve the stability of MAML. Rajeswaran et al. (2019) proposed iMAML that makes the objective optimizer-independent by relying on _implicit_ gradients. Zhou et al. (2019) used a similar implicit objective to that of iMAML with an additional regularization term that, unlike iMAML, does not require inverting matrices. Reptile Nichol et al. (2018) is an even simpler method that merely runs gradient descent on each sampled task. Based on generalization guarantees, Zhou et al. (2020) also provided a trade-off between the optimization and statistical errors for a multi-step variant of MAML, which shows that it may not improve significantly from increasing the number of gradient steps in the inner loop. We refer to Hospedales et al. (2021) for a recent survey of the literature on meta-learning with neural networks.
On the theoretical side, the work most relevant to ours is that of Zhou et al. (2019), whose main limitation is that it requires a high-precision solution of the inner problem in the Moreau envelope at each iteration. Another relevant work that studied convergence of MAML and FO-MAML on the standard MAML objective is by Fallah et al. (2020), but they do not provide any guarantees for the sum of Moreau envelopes and their assumptions are more stringent. Fallah et al. (2020) also study a Hessian-free variant of MAML, but its convergence guarantees still require posing assumptions on the Hessian Lipschitzness and variance.
Some works treat meta-learning as a special case of compositional optimization Sun et al. (2021) or bilevel programming Franceschi et al. (2018) and develop theory for the more general problem. Unfortunately, both approaches lead to worse dependence on the conditioning numbers of both inner and outer objective, and provide
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Algorithm & \(F_{i}\): meta-loss of task \(i\) & Hessian-free & Arbitrary number of steps & No matrix inversion & Preserves convexity & Preserves smoothness & Reference \\ \hline MAML & \(f_{i}(x-\alpha\nabla f_{i}(x))\) & ✗ & ✗ & ✓ & ✗ & ✗ & Finn et al. (2017) \\ Multi-step & \(f_{i}(GD(f_{i},x))^{(1)}\) & ✗ & ✓ & ✓ & ✗ & ✗ & Finn et al. (2017) \\ MAML & \(f_{i}(z_{i}(x))\), where & & & ✗ & ✗ & ✗ & Ji et al. (2020) \\ iMAML & \(z_{i}(x)=x-\alpha\nabla f_{i}(z_{i}(x))\) & ✗ & ✓ & ✗ & (Theorem 1) & (Theorem 2) & Rajeswaran et al. (2019) \\ Reptile & N/A & & ✓ & ✓ & ✓ & N/A & N/A & Nichol et al. (2018) \\ FO-MAML & \(f_{i}(x-\alpha\nabla f_{i}(x))\) & ✓ & ✗ & ✓ & ✗ & ✗ & Finn et al. (2017) \\ (original) & \(\underset{x_{i}}{\min}\{f_{i}(x_{i})+\frac{1}{2\alpha}\|z_{i}-x\|^{2}\}\) & ✓ & ✗\({}^{(6)}\) & ✓ & ✓ & ✓ & Zhou et al. (2019) \\ Meta-MinibatchProx & \(\underset{x_{i}}{\min}\{f_{i}(x_{i})+\frac{1}{2\alpha}\|z_{i}-x\|^{2}\}\) & ✓ & ✓ & ✓ & ✓ & ✓ & **This work** \\ \hline \hline \end{tabular}
* Multi-step MAML runs an inner loop with gradient descent applied to task loss \(f_{i}\), so the objective of multi-step MAML is \(F_{i}(x)=f_{i}(x_{x}(x))\), where \(x_{0}=x\) and \(x_{j+1}=x_{j}-\alpha\nabla f_{i}(x_{j})\) for \(j=0,\ldots,s-1\).
* To the best of our knowledge, iMAML is not guaranteed to work; Rajeswaran et al. (2019) studied only the approximation error for gradient computation, see the discussion in our special section on iMAML.
* Reptile was proposed as an algorithm in its own, without providing any optimization problem. This makes it hard to say how it affects smoothness and convexity. Balcan et al. (2019) and Khodak et al. (2019) studied convergence of Reptile on the average loss over the produced iterates, i.e., \(F_{i}(x)=\frac{1}{m}\sum_{j=0}^{s}f_{i}(x_{j})\), where \(x_{0}=x\) and \(x_{j+1}=x_{j}-\alpha\nabla f_{i}(x_{j})\) for \(j=0,\ldots,s-1\). Analogously to the loss of MAML, this objective seems nonconvex and nonsmooth.
* Zhou et al. (2019) assumed that the subproblems are solved to precision \(\varepsilon\), i.e., \(x_{i}\) is found such that \(\|\nabla f_{i}(x_{i})+\frac{1}{\alpha}(x_{i}-x)\|\leq\varepsilon\) with an absolute constant \(\varepsilon\).
\end{table}
Table 1: A summary of related work and conceptual differences to our approach. We mark as “N/A” unknown properties that have not been established in prior literature or our work. We say that \(F_{i}\) “Preserves convexity” if for convex \(f_{i}\), \(F_{i}\) is convex as well, which implies that \(F_{i}\) has no extra local minima or saddle points. We say that \(F_{i}\) “Preserves smoothness” if its gradients are Lipschitz whenever the gradients of \(f_{i}\) are, which corresponds to more stable gradients. We refer to Fallah et al. (2020) for the claims regarding nonconvexity and nonsmoothness of the MAML objective.
very pessimistic guarantees. Bilevel programming, even more importantly, requires computation of certain inverse matrices, which is prohibitive in large dimensions. One could also view minimization-based formulations of meta-learning as instances of empirical risk minimization, for which FO-MAML can be seen as an instance of inexact (biased) SGD. For example, Ajalloeian and Stich (2020) analyzed SGD with deterministic bias and some of our proofs are inspired by theirs, except in our problem the bias is not deterministic. We will discuss the limitations of their approach in the section on inexact SGD.
Several works have also addressed meta-learning from the statistical perspective, for instance, Yoon et al. (2018) proposed a Bayesian variant of MAML, and Finn et al. (2019) analyzed convergence of MAML in online learning. Another example is the work of Konobeev et al. (2021) who studied the setting of linear regression with task-dependent solutions that are sampled from same normal distribution. These directions are orthogonal to ours, as we want to study the optimization properties of meta-learning.
## 2 Background and mathematical formulation
Before we introduce the considered formulation of meta-learning, let us provide the problem background and define all notions. As the notation in meta-learning varies between papers, we relate our notation to that of other works in the next subsection.
### Notation
We assume that training is performed over \(n\) tasks with task losses \(f_{1},\ldots,f_{n}\) and we will introduce _implicit_ and _proximal_ meta-losses \(\{F_{i}\}\) in the next section. We denote by \(x\) the vector of parameters that we aim to train, which is often called _model_, _meta-model_ or _meta-parameters_ in the meta-learning literature, and _outer variable_ in the bilevel literature. Similarly, given task \(i\), we denote by \(z_{i}\) the _task-specific parameters_ that are also called as _ground model_, _base-model_, or _inner variable_. We will use letters \(\alpha,\beta,\gamma\) to denote scalar hyper-parameters such as stepsize or regularization coefficient.
Given a function \(\varphi(\cdot)\), we call the following function its _Moreau envelope_:
\[\Phi(x)=\min_{z\in\mathbb{R}^{d}}\left\{\varphi(z)+\frac{1}{2\alpha}\|z-x\|^{ 2}\right\},\]
where \(\alpha>0\) is some parameter. Given the Moreau envelope \(F_{i}\) of a task loss \(f_{i}\), we denote by \(z_{i}(x)\) the solution to the inner objective of \(F_{i}\), i.e., \(z_{i}(x)\stackrel{{\text{def}}}{{=}}\operatorname*{argmin}_{z\in \mathbb{R}^{d}}\left\{f_{i}(z)+\frac{1}{2\alpha}\|z-x\|^{2}\right\}\).
Finally, let us introduce some standard function properties that are commonly used in the optimization literature Nesterov (2013).
**Definition 1**.: _We say that a function \(\varphi(\cdot)\) is \(L\)-smooth if its gradient is \(L\)-Lipschitz, i.e., for any \(x,y\in\mathbb{R}^{d}\),_
\[\|\nabla\varphi(x)-\nabla\varphi(y)\|\leq L\|x-y\|.\]
**Definition 2**.: _Given a function \(\varphi(\cdot)\), we call it \(\mu\)-strongly convex if it satisfies for any \(x,y\in\mathbb{R}^{d}\),_
\[\varphi(y)\geq\varphi(x)+\langle\nabla\varphi(x),y-x\rangle+\frac{\mu}{2}\|y -x\|^{2}.\]
_If the property above holds with \(\mu=0\), we call \(\varphi\) to be convex. If the property does not hold even with \(\mu=0\), we say that \(\varphi\) is nonconvex._
### MAML objective
Assume that we are given \(n\) tasks, and that the performance on task \(i\) is evaluated according to some loss function \(f_{i}(x)\). MAML has been proposed as an algorithm for solving the following objective:
\[\min_{x\in\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{n}f_{i}(x-\alpha\nabla f_{i}(x )), \tag{1}\]
where \(\alpha>0\) is a stepsize. Ignoring for simplicity minibatching, MAML update computes the gradient of a task meta-loss \(\varphi_{i}(x)=f_{i}(x-\alpha\nabla f_{i}(x))\) through backpropagation and can be explicitly written as
\[x^{k+1}=x^{k}-\beta\left(\mathbf{I}-\alpha\nabla^{2}f_{i}(x^{k})\right)\nabla f _{i}(x^{k}-\alpha\nabla f_{i}(x^{k})),\] (MAML update)
where \(\beta>0\) is a stepsize, \(i\) is sampled uniformly from \(\{1,\ldots,n\}\) and \(\mathbf{I}\in\mathbb{R}^{d\times d}\) is the identity matrix. Sometimes, MAML update evaluates the gradient of \(\varphi_{i}\) using an additional data sample, but Bai et al. (2021) recently showed that this is often unnecessary, and we, thus, skip it.
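As a small illustration of the update above, consider quadratic task losses \(f_{i}(x)=\frac{1}{2}x^{\top}A_{i}x-b_{i}^{\top}x\), for which \(\nabla f_{i}(x)=A_{i}x-b_{i}\) and \(\nabla^{2}f_{i}(x)=A_{i}\), so the MAML step can be written out explicitly without automatic differentiation. The synthetic data and the stepsizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_tasks = 5, 8
alpha, beta = 0.05, 0.1

A, b = [], []
for _ in range(n_tasks):
    M = rng.standard_normal((d, d))
    A.append(M @ M.T / d + np.eye(d))      # strongly convex quadratic task
    b.append(rng.standard_normal(d))

def maml_step(x, i):
    grad = A[i] @ x - b[i]                 # grad f_i(x)
    x_adapted = x - alpha * grad           # inner gradient step
    grad_adapted = A[i] @ x_adapted - b[i] # grad f_i(x - alpha * grad f_i(x))
    jac = np.eye(d) - alpha * A[i]         # I - alpha * Hessian of f_i
    return x - beta * jac @ grad_adapted   # the MAML update shown above

x = np.zeros(d)
for _ in range(200):
    x = maml_step(x, rng.integers(n_tasks))
print("meta-parameters after 200 stochastic MAML steps:", np.round(x, 3))
```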
Unfortunately, objective (1) might be nonsmooth and nonconvex even if the task losses \(\{f_{i}\}\) are convex and smooth Fallah et al. (2020). Moreover, if we generalize this objective for more than one gradient step inside \(f_{i}(\cdot)\), its smoothness properties deteriorate further, which complicates the development and analysis of multistep methods.
### iMAML objective
To avoid differentiating through a graph, Rajeswaran et al. (2019) proposed an alternative objective to (1) that replaces the gradient step inside each function with an _implicit_ gradient step. In particular, if we define \(z_{i}(x)\stackrel{{\text{def}}}{{=}}\operatorname*{argmin}_{z\in \mathbb{R}^{d}}\left\{f_{i}(z)+\frac{1}{2\alpha}\|z-x\|^{2}\right\}\), then the objective of iMAML is
\[\min_{x\in\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{n}f_{i}\left(x-\alpha\nabla f _{i}(z_{i}(x))\right).\]
The idea of iMAML is to optimize this objective during training so that at inference, given a new function \(f_{n+1}\) and solution \(x_{\operatorname*{iMAML}}\) of the problem above, one can find an approximate solution to \(\min_{z\in\mathbb{R}^{d}}\left\{f_{n+1}(z)+\frac{1}{2\alpha}\|z-x_{ \operatorname*{iMAML}}\|^{2}\right\}\) and use it as a new model for task \(f_{n+1}\).
Rajeswaran et al. (2019) proved, under some mild assumptions, that one can efficiently obtain an estimate of the gradient of \(\varphi_{i}(x)\stackrel{{\text{def}}}{{=}}f_{i}\left(x-\alpha \nabla f_{i}(z_{i}(x))\right)\) with access only to gradients and Hessian-vector products of \(f_{i}\), which rely on standard backpropagation operations. In particular, Rajeswaran et al. (2019) showed that
\[\nabla\varphi_{i}(x)=\left(\mathbf{I}+\alpha\nabla^{2}f_{i}(z(x))\right)^{-1} \nabla f_{i}(z(x)),\]
where \(\mathbf{I}\) is the identity matrix, and they proposed to run the conjugate gradient method to find \(\nabla\varphi_{i}(x)\). However, it is not shown in Rajeswaran et al. (2019) if the objective of iMAML is solvable and what properties it has. Moreover, we are not aware of any result that would show when the problem is convex or smooth. Since SGD is not guaranteed to work unless the objective satisfies at least some properties Zhang et al. (2020), nothing is known about convergence of SGD when applied to the iMAML objective.
As a sign that the problem is rather ill-designed, we present the following theorem that gives a negative example on the problem's convexity.
**Theorem 1**.: _There exists a convex function \(f\) with Lipschitz gradient and Lipschitz Hessian such that the iMAML meta-objective \(\varphi(x)\stackrel{{\text{def}}}{{=}}f(z(x))\) is nonconvex, where \(z(x)=x-\alpha\nabla f(z(x))\)._
Similarly, we also show that the objective of iMAML may be harder to solve due to its worse smoothness properties as given by the next theorem.
**Theorem 2**.: _There exists a convex function \(f\) with Lipschitz gradient and Lipschitz Hessian such that the iMAML meta-objective \(\varphi(x)\stackrel{{\text{def}}}{{=}}f(z(x))\) is nonsmooth for any \(\alpha>0\), where \(z(x)=x-\alpha\nabla f(z(x))\)._
### Our main objective: Moreau envelopes
In this work we consider the following formulation of meta-learning
\[\min_{x\in\mathbb{R}^{d}}F(x)\stackrel{{\text{def}}}{{=}} \frac{1}{n}\sum_{i=1}^{n}F_{i}(x), \tag{2}\] \[\text{where}\quad F_{i}(x)\stackrel{{\text{def}}}{{=}} \min_{z\in\mathbb{R}^{d}}\left\{f_{i}(z)+\frac{1}{2\alpha}\|z-x\|^{2}\right\},\]
and \(\alpha>0\) is a parameter controlling the level of adaptation to the problem. In other words, we seek to find a parameter vector \(x\) such that somewhere close to \(x\) there exists a vector \(z_{i}\) for which \(f_{i}(z_{i})\) is sufficiently small. This formulation of meta-learning was first introduced by Zhou et al. (2019) and it has been used by Hanzely et al. (2020) and T. Dinh et al. (2020) to study personalization in federated learning.
Throughout the paper we use the following variables for minimizers of meta-problems \(F_{i}\):
\[z_{i}(x)\stackrel{{\text{def}}}{{=}}\operatorname*{argmin}_{z\in \mathbb{R}^{d}}\left\{f_{i}(z)+\frac{1}{2\alpha}\|z-x\|^{2}\right\},i=1,\dots,n. \tag{3}\]
One can notice that if \(\alpha\to 0\), then \(F_{i}(x)\approx f_{i}(x)\), and Problem (2) reduces to the well-known empirical risk minimization:
\[\min_{x\in\mathbb{R}^{d}}f(x)\stackrel{{\text{def}}}{{=}}\frac{1} {n}\sum_{i=1}^{n}f_{i}(x).\]
If, on the other hand, \(\alpha\to+\infty\), the minimization problem in (2) becomes essentially independent of \(x\) and it holds \(z_{i}(x)\approx\operatorname*{argmin}_{z\in\mathbb{R}^{d}}f_{i}(z)\). Thus, one has to treat the parameter \(\alpha\) as part of the objective that controls the similarity between the task-specific parameters.
We denote the solution to Problem (2) as
\[x^{*}\stackrel{{\text{def}}}{{=}}\arg\min_{x\in\mathbb{R}^{d}}F( x). \tag{4}\]
One can notice that \(F(x)\) and \(x^{*}\) depend on \(\alpha\). For notational simplicity, we keep \(\alpha\) constant throughout the paper and do not explicitly write the dependence of \(x^{*},F,F_{1},z_{1},\dots,F_{n},z_{n}\) on \(\alpha\).
### Formulation properties
We will also use the following quantity to express the difficulty of Problem (2):
\[\sigma_{*}^{2}\stackrel{{\text{def}}}{{=}}\frac{1}{n}\sum_{i=1} ^{n}\|\nabla F_{i}(x^{*})\|^{2}. \tag{5}\]
Because \(\nabla F(x^{*})=0\) by first-order optimality of \(x^{*}\), \(\sigma_{*}^{2}\) serves as a measure of gradient variance at the optimum. Note that \(\sigma_{*}\) is always finite because it is defined on a single point, in contrast to the _maximum_ gradient variance over all space, which might be infinite.
Now let us discuss the properties of our formulation (2). Firstly, we state a standard result from Beck (2017).
**Proposition 1** (Theorem 6.60 in Beck (2017)).: _Let \(F_{i}\) be defined as in eq. (2) and \(z_{i}(x)\) be defined as in eq. (3). If \(f_{i}\) is convex, proper and closed, then \(F_{i}\) is differentiable and \(\frac{1}{\alpha}\)-smooth:_
\[\nabla F_{i}(x)=\frac{1}{\alpha}(x-z_{i}(x))=\nabla f_{i}(z_{i}( x)), \tag{6}\] \[\|\nabla F_{i}(x)-\nabla F_{i}(y)\|\leq\frac{1}{\alpha}\|x-y\|. \tag{7}\]
The results above only hold for convex functions, while in meta-learning, the tasks are often defined by training a neural network, whose landscape is nonconvex. To address such applications, we also refine Proposition 1 in the lemma below, which also improves the smoothness constant in the convex case. This result is similar to Lemma 2.5 of Davis and Drusvyatskiy (2021), except their guarantee is a bit weaker because they consider more general assumptions.
**Lemma 1**.: _Let function \(f_{i}\) be \(L\)-smooth._
* _If_ \(f_{i}\) _is nonconvex and_ \(\alpha<\frac{1}{L}\)_, then_ \(F_{i}\) _is_ \(\frac{L}{1-\alpha L}\)_-smooth. If_ \(\alpha\leq\frac{1}{2L}\)_, then_ \(F_{i}\) _is_ \(2L\)_-smooth._
* _If_ \(f_{i}\) _is convex, then_ \(F_{i}\) _is_ \(\frac{L}{1+\alpha L}\)_-smooth. Moreover, for any_ \(\alpha\)_, it is_ \(L\)_-smooth._
* _If_ \(f_{i}\) _is_ \(\mu\)_-strongly convex, then_ \(F_{i}\) _is_ \(\frac{\mu}{1+\alpha\mu}\)_-strongly convex. If_ \(\alpha\leq\frac{1}{\mu}\)_, then_ \(F_{i}\) _is_ \(\frac{\mu}{2}\)_-strongly convex._
_Whenever \(F_{i}\) is smooth, its gradient is given as in equation (6), i.e., \(\nabla F_{i}(x)=\nabla f_{i}(z_{i}(x))\)._
The takeaway message of Lemma 1 is that the optimization properties of \(F_{i}\) are always at least as good as those of \(f_{i}\) (up to constant factors). Furthermore, the _conditioning_, i.e., the ratio of smoothness to strong convexity, of \(F_{i}\) is upper bounded, up to a constant factor, by that of \(f_{i}\). And even if \(f_{i}\) is convex but nonsmooth (\(L\to+\infty\)), \(F_{i}\) is still smooth with constant \(\frac{1}{\alpha}\).
Finally, note that computing the exact gradient of \(F_{i}\) requires solving its inner problem as per equation (6). Even if the gradient of task \(\nabla f_{i}(x)\) is easy to compute, we still cannot obtain \(\nabla F_{i}(x)\) through standard differentiation or backpropagation. However, one can approximate \(\nabla F_{i}(x)\) in various ways, as we will discuss later.
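As a concrete sanity check of equation (6), the sketch below uses a quadratic task loss, for which the proximal point \(z_{i}(x)\) has a closed form, and compares \(\frac{1}{\alpha}(x-z_{i}(x))\) both with \(\nabla f_{i}(z_{i}(x))\) and with a finite-difference derivative of \(F_{i}\). The quadratic data and the value of \(\alpha\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, alpha = 4, 0.3
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)                      # f(z) = 0.5 z^T A z - b^T z is strongly convex
b = rng.standard_normal(d)

f = lambda z: 0.5 * z @ A @ z - b @ z
grad_f = lambda z: A @ z - b

def prox(x):
    # argmin_z f(z) + ||z - x||^2 / (2 alpha) solves (A + I/alpha) z = b + x/alpha
    return np.linalg.solve(A + np.eye(d) / alpha, b + x / alpha)

def F(x):
    z = prox(x)
    return f(z) + np.linalg.norm(z - x) ** 2 / (2 * alpha)

x = rng.standard_normal(d)
z = prox(x)
g_formula = (x - z) / alpha                  # gradient according to eq. (6)

eps = 1e-6
g_fd = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps) for e in np.eye(d)])
print("max |(x - z)/alpha - grad f(z)|  :", np.max(np.abs(g_formula - grad_f(z))))  # ~ 0
print("max |(x - z)/alpha - finite diff|:", np.max(np.abs(g_formula - g_fd)))       # ~ 0
```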
## 3 Can we analyze FO-MAML as inexact SGD?
As we mentioned before, the prior literature has viewed FO-MAML as an inexact version of MAML for problem (1). If, instead, we are interested in problem (2), one could still try to take the same perspective of inexact SGD and see what convergence guarantees it gives for (2). The goal of this section, thus, is to refine the existing theory of inexact SGD to make it applicable to FO-MAML. We will see, however, that such an approach is fundamentally limited, and we will present a better alternative analysis in a later section.
### Why existing theory is not applicable
Let us start with a simple lemma for FO-MAML that shows why it approximates SGD for objective (2).
**Lemma 2**.: _Let task losses \(f_{i}\) be \(L\)-smooth and \(\alpha>0\). Given \(i\) and \(x\in\mathbb{R}^{d}\), we define recursively \(z_{i,0}\stackrel{{\text{def}}}{{=}}x\) and \(z_{i,j+1}\stackrel{{\text{def}}}{{=}}x-\alpha\nabla f_{i}(z_{i,j})\). Then, it holds for any \(s\geq 0\)_
\[\|\nabla f_{i}(z_{i,s})-\nabla F_{i}(x)\|\leq(\alpha L)^{s+1}\|\nabla F_{i}(x )\|.\]
_In particular, the iterates of FO-MAML (Algorithm 1) satisfy for any \(k\)_
\[\big{\|}\nabla f_{i}(z_{i}^{k})-\nabla F_{i}(x^{k})\big{\|}\leq(\alpha L)^{2 }\|\nabla F_{i}(x^{k})\|.\]
Lemma 2 shows that FO-MAML approximates an SGD step with error proportional to the stochastic gradient norm. Therefore, we can write
\[\nabla f_{i}(z_{i}^{k})=\nabla F(x^{k})+\underbrace{\nabla F_{i}(x^{k})- \nabla F(x^{k})}_{\stackrel{{\text{def}}}{{=}}\xi_{i}^{k}\;( \text{noise})}+\underbrace{b_{i}^{k}}_{\text{bias}},\]
where it holds \(\mathbb{E}[\xi_{i}^{k}]=0\), and \(b_{i}^{k}\) is a bias vector that also depends on \(i\) but does not have zero mean. The best known guarantees for inexact SGD are provided by Ajalloeian and Stich (2020), but they are, unfortunately, not applicable because their proofs use independence of \(\xi_{i}^{k}\) and \(b_{i}^{k}\). The analysis of Zhou et al. (2019) is not applicable either because their inexactness assumption requires the error to be smaller than a predefined constant \(\varepsilon\), while the error in Lemma 2 can be unbounded. To resolve these issues, we provide a refined analysis in the next subsection.
```
Input: \(x^{0}\), \(\beta>0\), accuracy \(\delta\geq 0\) or \(\varepsilon\geq 0\).
for \(k=0,1,\ldots\) do
  Sample a subset of tasks \(T_{k}\)
  for each sampled task \(i\) in \(T_{k}\) do
    Find \(z_{i}^{k}\) s.t. \(\left\|\frac{1}{\alpha}\left(x^{k}-z_{i}^{k}\right)-\nabla F_{i}(x^{k})\right\| \leq\delta\left\|\nabla F_{i}(x^{k})\right\|\)
  end for
  \(x^{k+1}=x^{k}-\beta\,\frac{1}{|T_{k}|}\sum_{i\in T_{k}}\nabla f_{i}(z_{i}^{k})\)
end for
```
**Algorithm 2** FO-MuML: First-Order Multistep Meta-Learning (general formulation)
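Below is a runnable sketch of FO-MAML (Algorithm 1) and of a multistep variant in the spirit of Algorithm 2, where the inner loop simply performs \(s\) gradient steps \(z\leftarrow x-\alpha\nabla f_{i}(z)\) as in Lemma 2 instead of checking the accuracy condition explicitly. The tasks are synthetic strongly convex quadratics, for which the solution \(x^{*}\) of problem (2) is available in closed form and is used only as a reference; all problem data and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_tasks = 5, 20
alpha, beta, tau = 0.05, 0.05, 4     # objective parameter, stepsize, batch size

A, b = [], []
for _ in range(n_tasks):
    M = rng.standard_normal((d, d))
    A.append(M @ M.T / d + np.eye(d))
    b.append(rng.standard_normal(d))

grad_f = lambda i, z: A[i] @ z - b[i]

# Reference solution of problem (2) for quadratics:
# grad F_i(x) = ((I - B_i/alpha) x - B_i b_i)/alpha with B_i = (A_i + I/alpha)^{-1}.
B = [np.linalg.inv(A[i] + np.eye(d) / alpha) for i in range(n_tasks)]
lhs = sum(np.eye(d) - B[i] / alpha for i in range(n_tasks))
rhs = sum(B[i] @ b[i] for i in range(n_tasks))
x_star = np.linalg.solve(lhs, rhs)

def run(inner_steps, iters=2000):
    x = np.zeros(d)
    for _ in range(iters):
        batch = rng.choice(n_tasks, size=tau, replace=False)
        direction = np.zeros(d)
        for i in batch:
            z = x
            for _ in range(inner_steps):       # z_{i,j+1} = x - alpha * grad f_i(z_{i,j})
                z = x - alpha * grad_f(i, z)
            direction += grad_f(i, z)
        x = x - beta * direction / tau
    return np.linalg.norm(x - x_star)

print("FO-MAML (1 inner step):  ||x - x*|| =", f"{run(1):.4f}")
print("FO-MuML (5 inner steps): ||x - x*|| =", f"{run(5):.4f}")
```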
### A new result for inexact SGD
For strongly convex objectives, we give the following result by modifying the analysis of Ajalloeian and Stich (2020).
**Theorem 3** (Convergence of FO-MAML, weak result).: _Let task losses \(f_{1},\ldots,f_{n}\) be \(L\)-smooth and \(\mu\)-strongly convex. If \(|T_{k}|=\tau\) for all \(k\), \(\beta\leq\frac{1}{20L}\) and \(\alpha\leq\frac{1}{4\sqrt{\kappa L}}\), where \(\kappa\stackrel{{\text{def}}}{{=}}\frac{L}{\mu}\), then for the iterates \(x^{1},x^{2}\ldots\) of Algorithm 1, it holds_
\[\mathbb{E}\left[\|x^{k}-x^{*}\|^{2}\right]\leq\left(1-\frac{\beta\mu}{4} \right)^{k}\|x^{0}-x^{*}\|^{2}+\frac{16}{\mu}\left(\frac{2\alpha^{2}L^{2}}{ \mu}+\frac{\beta}{\tau}+\beta\right)\sigma_{*}^{2}.\]
Let us try to compare this result to that of vanilla SGD as studied by Gower et al. (2019). Since the first term decreases exponentially, it requires us \(\mathcal{O}\left(\frac{1}{\beta\mu}\log\frac{1}{\varepsilon}\right)\) iterations to make it smaller than \(\varepsilon\). The second term, on the other hand, only decreases if we decrease \(\alpha\) and \(\beta\). Decreasing \(\beta\) corresponds to using decreasing stepsizes in SGD, which is fine, but \(\alpha\) is a parameter that defines the objective, so in most cases, we do not want to decrease it. Moreover, the assumptions of Theorem 3 require \(\alpha\) to be smaller than \(\frac{1}{\sqrt{\kappa L}}\), which seems quite restrictive. This is the main limitation of this result as it shows that FO-MAML as given in Algorithm 1 may not converge to the problem solution.
To fix the nonconvergence of FO-MAML, let us turn our attention to Algorithm 2, which may perform multiple first-order steps.
**Theorem 4**.: _Let task losses \(f_{1},\ldots,f_{n}\) be \(L\)-smooth and \(\mu\)-strongly convex. If \(|T_{k}|=\tau\) for all \(k\), \(\alpha\leq\frac{1}{L},\beta\leq\frac{1}{20L}\), and \(\delta\leq\frac{1}{4\sqrt{\kappa}}\), where \(\kappa\stackrel{{\text{def}}}{{=}}\frac{L}{\mu}\), then the iterates of Algorithm 2 satisfy_
\[\mathbb{E}\left[\|x^{k}-x^{*}\|^{2}\right]\leq\left(1-\frac{\beta\mu}{4} \right)^{k}\|x^{0}-x^{*}\|^{2}+\frac{16}{\mu}\left(\frac{2\delta^{2}}{\mu}+ \frac{\beta}{\tau}+\beta\delta^{2}\right)\sigma_{*}^{2}.\]
The result of Theorem 4 is better than that of Theorem 3 since it only requires the inexactness parameter \(\delta\) to go to \(0\) rather than \(\alpha\), so we can solve the meta-learning problem (2) for any \(\alpha\leq\frac{1}{L}\). The rate itself, however, is not optimal, as we show in the next section with a more elaborate approach.
## 4 Improved theory
In this section, we provide improved convergence theory of FO-MAML and FO-MuML based on a sequence of virtual iterates that appear only in the analysis. Surprisingly, even though the sequence never appears in the algorithm, it allows us to obtain tighter convergence bounds.
### Perturbed iterate is better than inexact gradient
Before we introduce the sequence, let us make some observations from prior literature on inexact and biased variants of SGD. For instance, the literature on asynchronous optimization has established that evaluating the gradient at a wrong point does not significantly worsen the rate of convergence Mania et al. (2017). A similar analysis with an additional virtual sequence was used in the so-called error feedback for compression Stich et al. (2018), where the goal of the sequence is to follow the path of _exact_ gradients even if _compressed_ gradients are used by the algorithm itself. Motivated by these observations, we set out to find a virtual sequence that could help us analyze FO-MAML.
### On what vector do we evaluate the gradients?
The main difficulty that we face is that we never get access to the gradients of \(\{F_{i}\}\) and have to use the gradients of \(\{f_{i}\}\). However, we would still like to write
\[x^{k+1}=x^{k}-\frac{\beta}{\tau}\sum_{i\in T_{k}}\nabla f_{i}(z_{i}^{k})=x^{k}- \frac{\beta}{\tau}\sum_{i\in T_{k}}\nabla F_{i}(y_{i}^{k})\]
for some point \(y_{i}^{k}\). If this is possible, using point \(y_{i}^{k}\) would allow us to avoid working with functions \(f_{i}\) in some of our recursion.
Why exactly would this sequence help? As mentioned before, FO-MAML is a biased method, so we cannot evaluate expectation of \(\mathbb{E}\left[\nabla f_{i}(z_{i}^{k})\right]\). However, if we had access to \(\nabla F_{i}(x^{k})\), its expectation would be exactly \(\nabla F(x^{k})\). This suggests that if we find \(y_{i}^{k}\) that satisfies \(\nabla F_{i}(y_{i}^{k})\approx\nabla F_{i}(x^{k})\), then
\[x^{k+1}=x^{k}-\frac{\beta}{\tau}\sum_{i\in T_{k}}\nabla F_{i}(y_{i}^{k}) \approx x^{k}-\frac{\beta}{\tau}\sum_{i\in T_{k}}\nabla F_{i}(x^{k}),\]
which would allow us to put the bias _inside_ the gradient.
Fortunately, objective (2) allows us to find such point easily. In particular, for Moreau Envelopes, the following proposition holds.
**Lemma 3**.: _For any points \(z,y\in\mathbb{R}^{d}\) it holds \(y=z+\alpha\nabla f_{i}(z)\) if and only if \(z=y-\alpha\nabla F_{i}(y)\). Therefore, given \(z\), we can define \(y=z+\alpha\nabla f_{i}(z)\) and obtain \(\nabla f_{i}(z)=\nabla F_{i}(y)\)._
Proof.: The result follows immediately from the last statement of Lemma 1.
The second part of Lemma 3 is exactly what we need. Indeed, we can choose \(y_{i}^{k}\stackrel{{\text{def}}}{{=}}z_{i}^{k}+\alpha\nabla f_{i }(z_{i}^{k})\) so that \(z_{i}^{k}=y_{i}^{k}-\alpha\nabla F_{i}(y_{i}^{k})\) and \(\nabla f_{i}(z_{i}^{k})=\nabla F_{i}(y_{i}^{k})\). As we have explained, this can help us to tackle the bias of FO-MAML.
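A quick numerical check of this choice, under the assumption of a quadratic task loss so that the proximal point is available in closed form: with \(y=z+\alpha\nabla f_{i}(z)\), the minimizer defining \(F_{i}\) at \(y\) is \(z\) itself, and hence \(\nabla F_{i}(y)=\nabla f_{i}(z)\) as claimed by Lemma 3.

```python
import numpy as np

rng = np.random.default_rng(4)
d, alpha = 4, 0.2
M = rng.standard_normal((d, d))
A = M @ M.T / d + np.eye(d)
b = rng.standard_normal(d)
grad_f = lambda z: A @ z - b          # f(z) = 0.5 z^T A z - b^T z

z = rng.standard_normal(d)
y = z + alpha * grad_f(z)             # the virtual point of Lemma 3

z_of_y = np.linalg.solve(A + np.eye(d) / alpha, b + y / alpha)   # prox of f at y
grad_F_y = (y - z_of_y) / alpha

print("||z(y) - z||              =", np.linalg.norm(z_of_y - z))             # ~ 0
print("||grad F(y) - grad f(z)|| =", np.linalg.norm(grad_F_y - grad_f(z)))   # ~ 0
```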
### Main results
We have established the existence of variables \(y_{i}^{k}\) such that \(\nabla f_{i}(z_{i}^{k})=\nabla F_{i}(y_{i}^{k})\). This allows us to write
\[\nabla f_{i}(z_{i}^{k})=\nabla F_{i}(y_{i}^{k})=\nabla F(x^{k})+\underbrace{ \nabla F_{i}(x^{k})-\nabla F(x^{k})}_{\text{noise}}+\underbrace{\nabla F_{i}( y_{i}^{k})-\nabla F_{i}(x^{k})}_{\text{reduced bias}}.\]
As the next theorem shows, we can use this to obtain convergence guarantee to a neighborhood even with a small number of steps in the inner loop.
**Theorem 5**.: _Consider the iterates of Algorithm 2 (with general \(\delta\)) or Algorithm 1 (for which \(\delta=\alpha L\)). Let task losses be \(L\)-smooth and \(\mu\)-strongly convex and let objective parameter satisfy \(\alpha\leq\frac{1}{\sqrt{6L}}\). Choose stepsize \(\beta\leq\frac{\tau}{4L}\), where \(\tau=|T_{k}|\) is the batch size. Then we have_
\[\mathbb{E}\left[\left\|x^{k}-x^{*}\right\|^{2}\right]\leq\left(1-\frac{\beta \mu}{12}\right)^{k}\left\|x^{0}-x^{*}\right\|^{2}+\frac{6\left(\frac{\beta}{ \tau}+3\delta^{2}\alpha^{2}L\right)\sigma_{*}^{2}}{\mu}.\]
Similarly to Theorem 3, the theorem above guarantees convergence to a neighborhood only. However, the radius of convergence is now \(\mathcal{O}\left(\frac{\frac{\beta}{\tau}+\alpha^{2}L}{\mu}\right)\) in contrast to \(\mathcal{O}\left(\frac{\beta+\kappa\alpha^{2}L}{\mu}\right)\). If the first term is dominating, then it implies an improvement proportional to the batch size \(\tau\). If, in contrast, the second term is larger, then the improvement is even more significant and the guarantee is \(\mathcal{O}(\kappa)\) times better, which is often a very large constant.
The proof technique for this theorem also uses recent advances on the analysis of biased SGD methods by Mishchenko et al. (2020). In particular, we show that the three-point identity (provided in the Appendix) is useful for getting a tighter recursion.
Next, we extend this result to the nonconvex convergence as given under the following assumption on bounded variance.
**Assumption 1**.: _We assume that the variance of meta-loss gradients is uniformly bounded by some \(\sigma^{2}\), i.e.,_
\[\mathbb{E}\left[\|\nabla F_{i}(x)-\nabla F(x)\|^{2}\right]\leq\sigma^{2}. \tag{8}\]
The new assumption on bounded variance is different from the one we used previously of variance being finite at the optimum, which was given in equation (5). At the same time, it is very common in literature on stochastic optimization when studying convergence on nonconvex functions.
**Theorem 6**.: _Let Assumption 1 hold, functions \(f_{1},\ldots,f_{n}\) be \(L\)-smooth and \(F\) be lower bounded by \(F^{*}>-\infty\). Assume \(\alpha\leq\frac{1}{4L},\beta\leq\frac{1}{16L}\). If we consider the iterates of Algorithm 1 (with \(\delta=\alpha L\)) or Algorithm 2 (with general \(\delta\)), then_
\[\min_{t\leq k}\mathbb{E}\left[\|\nabla F(x^{t})\|^{2}\right]\leq\frac{4}{ \beta k}\mathbb{E}\left[F(x^{0})-F^{*}\right]+4(\alpha L)^{2}\delta^{2}\sigma ^{2}+32\beta(\alpha L)^{2}\left(\frac{1}{|T_{k}|}+(\alpha L)^{2}\delta^{2} \right)\sigma^{2}.\]
Notice that this convergence is also only up to some neighborhood of first-order stationarity, since the second term does not decrease with \(k\). The size of this upper bound depends on the product \(\mathcal{O}((\alpha L)^{2}\delta^{2})\), so to obtain better convergence one can simply increase the approximation accuracy to make \(\delta\) smaller. However, the standard FO-MAML corresponds to \(\delta=\alpha L\), so its convergence guarantees directly depend on the problem parameter \(\alpha\).
For Algorithm 3, we have \(\delta=\mathcal{O}((\alpha L)^{s})\) as per Lemma 2, and we recover convergence guarantee up to a neighborhood of size \(\mathcal{O}((\alpha L)^{2}\delta^{2})=\mathcal{O}((\alpha L)^{2s+2})\). Therefore, to make this smaller than some given target accuracy \(\varepsilon>0\), we need at most \(s=\mathcal{O}(\log\frac{1}{\epsilon})\) inner-loop iterations. If we can plug-in \(s=1\), we also get that FO-MAML converges to a neighborhood of size \(\mathcal{O}((\alpha L)^{4})\).
Our Theorem 6 is very similar to the one obtained by Fallah et al. (2020), except their convergence neighborhood depends on \(\alpha\) as \(\mathcal{O}(\alpha^{2})\), whereas ours is of size \(\mathcal{O}(\alpha^{4})\), which goes to 0 much faster when \(\alpha\to 0\). Moreover, in contrast to their theory, ours does not require any assumptions on the Hessian smoothness. Note, in addition, that the main difference comes from the kind of objectives that we study, as Fallah et al. (2020) considered minimization of problems not involving Moreau envelopes.
## 5 Conclusion
In this paper, we presented a new analysis of first-order meta-learning algorithms for minimization of Moreau envelopes. Our theory covers both nonconvex and strongly convex smooth losses and guarantees convergence of the family of methods covered by Algorithm 2. As a special case, all convergence bounds apply to Algorithm 3 with an arbitrary number of inner-loop steps. Compared to other results available in the literature, ours are more general as they hold with an arbitrary number of inner steps and do not require Hessian smoothness. The main theoretical difficulty we faced was the limitation of the inexact SGD framework, which we overcame by presenting a refined
analysis using virtual iterates. As a minor contribution, we also pointed out that standard algorithms, such as SGD, are not immediately guaranteed to work on the iMAML objective, which might be nonconvex and nonsmooth even for convex and smooth losses. To show this, we presented examples of losses whose convexity and smoothness cease when the iMAML objective is constructed.
|
2305.05620 | Normalized logistic wavelets: Applications to COVID-19 data in Italy | In this paper we deal with the logistic wavelets introduced in \cite{RF}. We
modify them by multiplying by appropriate coefficients so that their norm in
the space $L^{2}(R)$ is equal to 1. We calculate the normalization coefficients
using the Grosset-Veselov formula \cite{GV}, Eulerian numbers and Bernoulli
numbers. Then we apply the logistic wavelets to model of the first wave of
Covid-19 deaths in Italy in 2020. This example shows that even asymmetric and
skewed data can be modeled, with high accuracy, by a sum of logistic functions. | Grzegorz Rządkowski | 2023-05-07T18:36:02Z | http://arxiv.org/abs/2305.05620v1 | # Normalized logistic wavelets: Applications to COVID-19 data in Italy
###### Abstract
In this paper we deal with the logistic wavelets introduced in [19]. We modify them by multiplying by appropriate coefficients so that their norm in the space \(L^{2}(\mathbb{R})\) is equal to \(1\). We calculate the normalization coefficients using the Grosset-Veselov formula [9], Eulerian numbers and Bernoulli numbers. Then we apply the logistic wavelets to model the first wave of Covid-19 deaths in Italy in 2020. This example shows that even asymmetric and skewed data can be modeled, with high accuracy, by a sum of logistic functions.
Keywords: Logistic wavelet, logistic equation, logistic function, COVID-19, Eulerian number, Bernoulli number, Riccati's differential equation.
2020 Mathematics Subject Classification: 92D30, 65T60, 11B83
## 1 Introduction
The logistic equation defining the logistic function \(x=x(t)\) has the form (cf. [19])
\[x^{\prime}(t)=\frac{s}{x_{max}}\:x(x_{max}-x),\quad x(0)=x_{0}. \tag{1}\]
where \(t\) is time, and the parameters \(s\) (steepness or slope coefficient) and \(x_{max}\) (saturation level) are constants. The integral curve \(x(t)\) of equation (1) satisfying the condition \(0<x(t)<x_{max}\) is called the logistic function. The logistic function is used to describe and model various phenomena in physics, economics, medicine, biology, engineering, sociology and many other sciences. Logistic functions now seem even more important from the point of view of their possible applications, due to the theory of the Triple Helix (TH) developed in the 1990s by Etzkowitz and Leydesdorff [6] (see also Leydesdorff [14]). This theory explains how innovations are created and introduced through the interaction of three factors, University-Industry-Government, and the relations between them. According to the TH theory, the emergence of innovations can be described by means of logistic functions. Ivanova [11], [12], [13] has shown that the KdV equation naturally appears in TH theory and has also applied it to other fields such as the COVID-19 pandemic or financial markets.
After solving the differential equation (1) we obtain the logistic function in the form
\[x(t)=\frac{x_{max}}{1+e^{-s(t-t_{0})}}, \tag{2}\]
where \(t_{0}\) is the inflection point, related to the initial condition by \(x(0)=x_{0}=\frac{x_{max}}{1+e^{st_{0}}}\), so that \(t_{0}=\frac{1}{s}\log\Big{(}\frac{x_{max}-x_{0}}{x_{0}}\Big{)}\). At the point \(t_{0}\), \(x(t_{0})=x_{max}/2\). Equation (1) is a special case of the Riccati equation with constant coefficients
\[x^{\prime}(t)=r(x-x_{1})(x-x_{2}), \tag{3}\]
where constants \(r\neq 0,\;x_{1},\;x_{2}\) can be real or more generally complex numbers.
If \(x=x(t)\) is the solution of (3) then its \(n\)th derivative \(x^{(n)}(t)\) (\(n=2,3,4,\ldots\)) is a polynomial in the function \(x(t)\) [17], [18], [7]
\[x^{(n)}(t)=r^{n}\sum_{k=0}^{n-1}\left\langle\begin{matrix}n\\ k\end{matrix}\right\rangle(x-x_{1})^{k+1}(x-x_{2})^{n-k} \tag{4}\]
for \(n=2,3,\ldots\), where \(\left\langle\begin{matrix}n\\ k\end{matrix}\right\rangle\) denotes the Eulerian number (the number of permutations of \(\{1,2,\ldots,n\}\) having exactly \(k\) ascents, \(k=0,1,2,\ldots,n-1\)); see Graham et al. [8].
Formula (4) applied to the logistic equation (1) yields:
\[x^{(n)}(t)=\left(-\frac{s}{x_{max}}\right)^{n}\;\sum_{k=0}^{n-1}\left\langle \begin{matrix}n\\ k\end{matrix}\right\rangle x^{k+1}(x-x_{max})^{n-k}. \tag{5}\]
The paper has the following structure. In Section 2 we first briefly describe the general wavelet theory and then the logistic wavelets introduced in [19]. Then we compute the normalizing coefficients for them. Section 3 is devoted to an application of logistic wavelets to model the spread of the COVID-19 pandemic in Italy in 2020. The paper is concluded in Section 4. All data used in the paper were obtained from the website Our World in Data [21].
## 2 Wavelets and logistic wavelets
### Wavelets
Let us now recall some general facts about wavelet theory (cf. [3, 15, 16]) which we will use later. A wavelet or mother wavelet (Daubechies [3], p. 24) is an integrable function \(\psi\in L^{1}(\mathbb{R})\) with the following admissibility condition:
\[C_{\psi}=2\pi\int_{-\infty}^{\infty}|\xi|^{-1}|\widehat{\psi}(\xi)|^{2}d\xi<\infty, \tag{6}\]
where \(\widehat{\psi}(\xi)\) is the Fourier transform of \(\psi\)
\[\widehat{\psi}(\xi)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\psi(x)e^{-i \xi x}dx.\]
Since \(\psi\in L^{1}(\mathbb{R})\), \(\widehat{\psi}(\xi)\) is a continuous function, and condition (6) can be satisfied only when \(\widehat{\psi}(0)=0\), or equivalently \(\int_{-\infty}^{\infty}\psi(x)dx=0\). On the other hand, Daubechies [3], p. 24 shows that the condition \(\int_{-\infty}^{\infty}\psi(x)dx=0\), together with a second condition slightly stronger than integrability, namely \(\int_{-\infty}^{\infty}|\psi(x)|(1+|x|)^{\alpha}dx<\infty\) for some \(\alpha>0\), is sufficient for (6). Usually much more is assumed about the function \(\psi\), so from a practical point of view the conditions \(\int_{-\infty}^{\infty}\psi(x)dx=0\) and (6) are equivalent. Suppose furthermore that \(\psi\) is also square integrable, \(\psi\in L^{2}(\mathbb{R})\), with the norm
\[||\psi||=\left(\int_{-\infty}^{\infty}|\psi(x)|^{2}dx\right)^{1/2}.\]
Using the mother wavelet, by dilating and translating, a double-indexed family of wavelets is obtained
\[\psi^{a,b}(x)=\frac{1}{\sqrt{|a|}}\psi\Big{(}\frac{x-b}{a}\Big{)},\]
where \(a,b\in\mathbb{R},\;a\neq 0\). The normalization has been chosen so that \(||\psi^{a,b}||=||\psi||\) for all \(a,b\). In order to be able to compare different wavelet families with each other, it is usually assumed that \(||\psi||=1\). Continuous Wavelet Transform (CWT) of a function \(f\in L^{2}(\mathbb{R})\) with respect to a given wavelet family is defined as
\[(T^{wav}f)(a,b)=\langle f,\psi^{a,b}\rangle=\int_{-\infty}^{\infty}f(x)\psi^{ a,b}(x)dx. \tag{7}\]
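In computations the transform (7) is evaluated on sampled data; the following minimal sketch (Python with NumPy, added for illustration; the signal `f`, the grid `t` and the mother wavelet `psi` are placeholders) approximates a single CWT coefficient by a rectangle rule.
```
import numpy as np

def cwt_coefficient(f, t, psi, a, b):
    # Approximate (T^wav f)(a, b) = int f(x) psi^{a,b}(x) dx on an
    # equally spaced grid t; psi is the mother wavelet given as a callable.
    dt = t[1] - t[0]
    psi_ab = psi((t - b) / a) / np.sqrt(abs(a))   # dilated and translated wavelet
    return np.sum(f * psi_ab) * dt
```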
### Logistic wavelets
Logistic mother wavelets understood as derivatives of the logistic function \(x(t)=\frac{1}{1+e^{-t}}\), which is a solution to the logistic equation
\[x^{\prime}(t)=x(1-x)=-x(x-1), \tag{8}\]
are described in [19]. We will use them here as well, but multiplied by an appropriate factor so that their norms in the space \(L^{2}(\mathbb{R})\) are equal to \(1\). We will now show how these factors can be calculated. Obviously, wavelets modified in this way also satisfy the admissibility condition (6). Formulas (4) or (5) applied to equation (8) give:
\[x^{(n)}(t)=(-1)^{n}\sum_{k=0}^{n-1}\binom{n}{k}x^{k+1}(x-1)^{n-k}=\sum_{k=0}^{ n-1}(-1)^{k}\binom{n}{k}x^{k+1}(1-x)^{n-k}, \tag{9}\]
for \(n=2,3,\ldots\). We will use the following Grosset and Veselov formula [9]
\[\int_{-\infty}^{+\infty}\left(\frac{d^{n-1}}{dt^{n-1}}\frac{1}{\cosh^{2}t} \right)^{2}dt=(-1)^{n-1}2^{2n+1}B_{2n}, \tag{10}\]
where \(n=1,2,\ldots\) and \(B_{2n}\) is the \(2n\)th Bernoulli number. Other proofs of the Grosset-Veselov formula can be found in [2], [20]. Bernoulli numbers have the following generating function (see Graham, Knuth, Patashnik [8])
\[B(\xi)=B_{0}+B_{1}\xi+B_{2}\frac{\xi^{2}}{2!}+\cdots=\frac{\xi}{e^{\xi}-1}, \qquad|\xi|<2\pi.\]
It is known that \(B_{n}\) is zero for all odd numbers \(n\geq 3\). These numbers are rational and occur in formulas such as
\[\sum_{k=1}^{\infty}\frac{1}{k^{2n}}=(-1)^{n+1}\frac{2^{2n-1}\pi^{2n}}{(2n)!}B _{2n}\quad n=1,2,\ldots\]
The first few Bernoulli numbers are as follows
\[B_{0}=1,\;B_{1}=-\frac{1}{2},\;B_{2}=\frac{1}{6},\;B_{4}=-\frac{1}{30},\;B_{6} =\frac{1}{42},\;B_{8}=-\frac{1}{30},\;B_{10}=\frac{5}{66},\;B_{12}=-\frac{691 }{2730}.\]
Returning to the problem of normalizing the derivatives \(x^{(n)}(t)\) of the function \(x(t)=\frac{1}{1+e^{-t}}\), note that the integral (10) can be written in the following form (we put \(\tau=2t\) at the end)
\[\int_{-\infty}^{+\infty} \left(\frac{d^{n-1}}{dt^{n-1}}\frac{1}{\cosh^{2}t}\right)^{2}dt= \int_{-\infty}^{+\infty}\left(\frac{d^{n-1}}{dt^{n-1}}\frac{4e^{-2t}}{(1+e^{- 2t})^{2}}\right)^{2}dt=4\int_{-\infty}^{+\infty}\left(\frac{d^{n}}{dt^{n}} \frac{1}{1+e^{-2t}}\right)^{2}dt\] \[=4(2^{n})^{2}\int_{-\infty}^{+\infty}(x^{(n)}(2t))^{2}dt=2(2^{n} )^{2}\int_{-\infty}^{+\infty}(x^{(n)}(\tau))^{2}d\tau. \tag{11}\]
Comparing (11) with (10) we get
\[\int_{-\infty}^{+\infty}(x^{(n)}(t))^{2}dt=(-1)^{n-1}B_{2n}=|B_{2n}|. \tag{12}\]
Then, based on (12), we can redefine, with respect to [19], the logistic mother wavelet \(\psi_{n}(t)\) of order \(n=2,3,\ldots\) as
\[\psi_{n}(t)=\frac{1}{\sqrt{|B_{2n}|}}x^{(n)}(t), \tag{13}\]
with the norm \(||\psi_{n}||=||\psi_{n}||_{L^{2}}=1\).
In particular, for \(n=2\) from the formula (9) or directly from (8) we have
\[x^{\prime\prime}(t)=x(1-x)(1-2x),\]
and then by (13), wavelet \(\psi_{2}(t)\) (Fig. 1) is as follows
\[\psi_{2}(t)=\frac{\sqrt{30}}{1+e^{-t}}\Big{(}1-\frac{1}{1+e^{-t}}\Big{)}\Big{(}1 -\frac{2}{1+e^{-t}}\Big{)}=\frac{\sqrt{30}(e^{-2t}-e^{-t})}{(1+e^{-t})^{3}}. \tag{14}\]
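As a quick numerical sanity check (Python with NumPy, added for illustration), one can verify that the wavelet (14) has unit \(L^{2}\) norm; since \(\psi_{2}=\sqrt{30}\,x^{\prime\prime}\) and \(|B_{4}|=1/30\), this also confirms (12) for \(n=2\).
```
import numpy as np

t = np.linspace(-40.0, 40.0, 400001)
psi2 = np.sqrt(30.0) * (np.exp(-2.0 * t) - np.exp(-t)) / (1.0 + np.exp(-t)) ** 3
norm_sq = np.sum(psi2 ** 2) * (t[1] - t[0])   # rectangle-rule approximation of ||psi_2||^2
print(norm_sq)                                # ~ 1.0, i.e. ||psi_2||_{L^2} = 1
```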
For \(n=2\), or more generally for \(n=2,3,\ldots\), we create, by dilating and translating, a doubly indexed family of wavelets (children wavelets)
\[\psi_{n}^{a,b}(t)=\frac{1}{\sqrt{|a|}}\psi_{n}\Big{(}\frac{t-b}{a}\Big{)},\]
where \(a,b\in\mathbb{R},\;a\neq 0\).
We implement the \(\psi_{2}(t)\) wavelet in Matlab (Matlab's wavelet toolbox) with the following code:
```
function [psi,t] = logist(LB,UB,N,~)
% LOGISTIC Logistic wavelet.
% [PSI,T] = LOGIST(LB,UB,N) returns values of
% the Logistic wavelet on an N point regular
% grid in the interval [LB,UB].
% Output arguments are the wavelet function PSI
% computed on the grid T.
% This wavelet has [-7 7] as effective support.
% See also WAVEINFO.
% Compute values of the Logistic wavelet.
t = linspace(LB,UB,N);  % wavelet support.
psi = sqrt(30)*(exp(-2*t)-exp(-t))./(1+exp(-t)).^3;
end
```
## 3 An application to COVID-19 data in Italy
Figure 1: Wavelet \(\psi_{2}(t)\)
Let us consider the time series of daily deaths during the COVID-19 pandemic in Italy in the period from February 28, 2020 to September 14, 2020 (200 days), Fig. 2. These data are known as the "first wave" of deaths in Italy and have already been analyzed many times by various authors (cf., e.g., Bezzini et al. [1], Dorrucci et al. [4]). It is clear that the time series in Fig. 2 is not symmetric and is skewed to the right. It could be modeled using, for example, the Gompertz function or another right-skewed distribution, which are broadly applied, e.g., in insurance [10]. We will show, however, that the time series of the total number of deaths can also be modeled with high accuracy by a sum of logistic functions.
Let \((y_{n})\) be the smoothed (using 7-day moving averages) time series of the total reported number of Covid-19 deaths in Italy up to the \(n\)th day. Then we calculate its first differences, i.e., the daily numbers of deaths
\[\Delta^{1}y_{n}=y_{n}-y_{n-1},\]
and the central second differences (changes in daily deaths)
\[\Delta^{2}y_{n}=\Delta^{1}y_{n+1}-\Delta^{1}y_{n}=y_{n+1}-2y_{n}+y_{n-1}.\]
Assuming that \((y_{n})\) follows locally a logistic function \(y_{n}\approx y(n)=y_{max}/(1+\exp(-(n-b)/a))\) and applying definition (14) we have
\[y^{\prime\prime}(t)=\frac{y_{max}}{\sqrt{30}\cdot a^{3/2}}\psi_{2}^{a,b}(t). \tag{15}\]
For the second differences \(\Delta^{2}y_{n}\), we will apply the CWT (7). Directly from the CWT scalogram we can find, for a given logistic wave, the values of the parameters \(b\) and \(a\) for which the value of the Index (16) at the point with coordinates \((b,a)\) is maximal. By (15) we have
\[\mbox{Index}=\sum_{n} \Delta^{2}y_{n}\psi_{2}^{a,b}(n)\approx\sum_{n}\Delta^{2}y(n) \psi_{2}^{a,b}(n)\approx\int_{-\infty}^{\infty}y^{\prime\prime}(t)\psi_{2}^{a,b}(t)dt=\int_{-\infty}^{\infty}\frac{y_{max}}{\sqrt{30}\cdot a^{3/2}}\psi_{2 }^{a,b}(t)\psi_{2}^{a,b}(t)dt\] \[=\frac{y_{max}}{\sqrt{30}\cdot a^{3/2}}\int_{-\infty}^{\infty}( \psi_{2}^{a,b}(t))^{2}dt=\frac{y_{max}}{\sqrt{30}\cdot a^{3/2}}. \tag{16}\]
Using (16) we can estimate the saturation level \(y_{max}\) as follows
\[y_{max}\approx\sqrt{30}\cdot a^{3/2}\sum_{n}\Delta^{2}y_{n}\psi_{2}^{a,b}(n)= \sqrt{30}\cdot a^{3/2}\mbox{Index}. \tag{17}\]
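A minimal numerical sketch of (16)-(17) (Python with NumPy, added for illustration; the smoothed cumulative series `y` is assumed to be sampled daily) is the following; scanning the Index over a grid of \((a,b)\) values and taking the maximum reproduces the scalogram-based reading of the parameters.
```
import numpy as np

def psi2(t):
    return np.sqrt(30.0) * (np.exp(-2.0 * t) - np.exp(-t)) / (1.0 + np.exp(-t)) ** 3

def index_and_saturation(y, a, b):
    # Index of eq. (16) and saturation estimate of eq. (17) for a smoothed
    # cumulative series y, where y[n] is the total count up to day n.
    d2y = y[2:] - 2.0 * y[1:-1] + y[:-2]          # central second differences
    n = np.arange(1, len(y) - 1)                  # days where the differences are defined
    psi_ab = psi2((n - b) / a) / np.sqrt(a)       # child wavelet psi_2^{a,b}(n)
    index = float(np.sum(d2y * psi_ab))
    y_max = np.sqrt(30.0) * a ** 1.5 * index
    return index, y_max
```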
Figure 2: The ‘first wave’ of deaths in Italy, 28/02/2020–14/09/2020
Let us note that the parameter \(b\) is the time (day) when a given logistic wave reaches its inflection point (the maximum of daily values). The parameter \(a\) can be interpreted in terms of the length of a given logistic wave. Namely, for a logistic wave \(x(t)\) of the form
\[x(t)=\frac{x_{max}}{1+\exp(-\frac{t-b}{a})}\]
we can define, e.g., a 95% confidence interval by cutting off 2.5% of the left and right values. Denoting by \(t_{1}\) the left end of this interval we have
\[x(t_{1})=\frac{x_{max}}{1+\exp(-\frac{t_{1}-b}{a})}=0.025x_{max},\]
from which, after an easy calculation, we get
\[b-t_{1}=3.66a.\]
From the symmetry of the logistic function, the length of the confidence interval is \(7.32a\). Thus, in practice, we can assume that the length of this logistic wave is
\[\text{wavelength}=7.32a. \tag{18}\]
Now we will model the time series (\(y_{n}\)) by a sum of logistic functions
\[f(t)=\sum_{i=1}^{k}\frac{x_{i,max}}{1+\exp(-\frac{t-b_{i}}{a_{i}})}, \tag{19}\]
\(i=1,2,\ldots,k\), where \(k\) is the number of logistic waves.
If there are several overlapping logistic waves, occurring in the same time period, then the higher-intensity waves (with larger Index) may cause the lower-intensity waves to be invisible on the CWT scalogram. Therefore, in order to find waves of lower intensity, we will remove the first wave with the highest intensity by subtracting it from the time series (\(y_{n}\)):
\[y_{n}^{(1)}=y_{n}-\frac{x_{1,max}}{1+\exp(-\frac{n-b_{1}}{a_{1}})}.\]
Then, for the time series (\(y_{n}^{(1)}\)), we calculate its first and second differences and for the latter we perform the CWT analysis again. The above process may be repeated several times if necessary.
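The whole peel-off procedure can then be sketched as follows (Python with NumPy, added for illustration; `psi2` is restated from the previous sketch so that the snippet is self-contained, and the grids of candidate \(a\) and \(b\) values are placeholders). Each pass reads off the strongest wave from the Index, estimates its saturation level via (17), and subtracts the fitted wave before the next pass.
```
import numpy as np

def psi2(t):
    return np.sqrt(30.0) * (np.exp(-2.0 * t) - np.exp(-t)) / (1.0 + np.exp(-t)) ** 3

def peel_off_waves(y, a_grid, b_grid, n_waves=3):
    y = np.asarray(y, dtype=float).copy()
    days = np.arange(len(y), dtype=float)
    waves = []
    for _ in range(n_waves):
        d2y = y[2:] - 2.0 * y[1:-1] + y[:-2]       # second differences of the residual
        n = days[1:-1]
        # pick the (a, b) maximizing the Index of eq. (16)
        best = max(((np.sum(d2y * psi2((n - b) / a)) / np.sqrt(a), a, b)
                    for a in a_grid for b in b_grid), key=lambda w: w[0])
        idx, a, b = best
        x_max = np.sqrt(30.0) * a ** 1.5 * idx     # saturation level, eq. (17)
        waves.append((x_max, a, b))
        y -= x_max / (1.0 + np.exp(-(days - b) / a))  # remove the wave just found
    return waves
```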
Fig. 3(d) shows that the logistic wave #1 with the highest intensity has parameters \(b_{1}=33\) and \(a_{1}=6.4\) and the Index value is 175. The saturation level (17) is
\[x_{1,max}=\sqrt{30}\cdot 6.4\cdot\sqrt{6.4}\cdot 175=15,519.\]
After removing wave #1, according to the procedure described above, and performing the CWT analysis, we get the scalogram shown in Fig. 4, from which we read the parameters of wave #2: \(b_{2}=54\) and \(a_{2}=6.3\), and calculate the saturation level (17) \(x_{2,max}=6782\).
In order to find logistic waves with lower intensities, we also remove wave #2. After performing the CWT analysis, we get the scalogram Fig. 5.
Taking into account the three waves shown in Fig. 5, we obtain the function \(f(t)\) of (19), approximating the time series (\(y_{n}\)), in the form of the following sum of five logistic functions (see also Fig. 6):
\[f(t)=\sum_{i=1}^{5}\frac{x_{i,max}}{1+\exp(-\frac{t-b_{i}}{a_{i} })}= \frac{15,519}{1+\exp(-\frac{t-33}{6.4})}+\frac{6,782}{1+\exp(- \frac{t-54}{6.3})}+\frac{10,692}{1+\exp(-\frac{t-71}{14.1})}\] \[+\frac{2,098}{1+\exp(-\frac{t-27}{4.4})}+\frac{269}{1+\exp(- \frac{t-172}{2.3})} \tag{20}\]
The saturation levels of waves #4 and #5 in (20) were calculated according to formula (17). However, for wave #3 there is no clear maximum of the Index. Therefore, and in order to compensate for the influence of other small waves not included in the model, we calculated its saturation level \(x_{3,max}=10,692\) so as to minimize the RMSE:
\[\text{RMSE}=\sqrt{\frac{1}{200}\sum_{i=1}^{200}(y_{n}-f(n))^{2}}.\]
Note that for model (20), \(\text{RMSE}=231.02\).
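For completeness, model (20) and the corresponding RMSE can be evaluated as in the following sketch (Python with NumPy, added for illustration; `y` stands for the smoothed cumulative series, which is not reproduced here).
```
import numpy as np

# The five fitted logistic waves of eq. (20), as (x_max, b, a).
WAVES = [(15519, 33, 6.4), (6782, 54, 6.3), (10692, 71, 14.1),
         (2098, 27, 4.4), (269, 172, 2.3)]

def f_model(t):
    t = np.asarray(t, dtype=float)
    return sum(x_max / (1.0 + np.exp(-(t - b) / a)) for x_max, b, a in WAVES)

def rmse(y):
    # Root-mean-square error of model (20) over the 200 analyzed days.
    n = np.arange(1, len(y) + 1)
    return float(np.sqrt(np.mean((np.asarray(y, dtype=float) - f_model(n)) ** 2)))
```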
## 4 Conclusions
Figure 3: ‘First wave’ of COVID-19 deaths in Italy, 28/02/2020–14/09/2020
Figure 4: Scalogram CWT after removing wave #1
In this paper we deal with logistic wavelets and their normalization in the space \(L^{2}(\mathbb{R})\). We then use them to study the spread of the first wave of COVID-19 deaths in Italy in 2020. It turned out that this wave, although asymmetric, can be described by a sum of five logistic functions (curves). By (18), wave #3 lasted \(7.32\cdot 14.1\approx 103\) days, while waves #1 and #2 were of similar length, 47 and 46 days respectively. The peaks of daily deaths for waves #1 and #2 were \(b_{2}-b_{1}=54-33=21\) days apart. Thus waves #1 and #2, each about half as long, occurred against the background of the long wave #3. Wave #4 arrived a few days earlier than wave #1, but was much less intense. Wave #5 was a single pulse with a low saturation level.
**Funding statement**
The research of the author was partially funded by the 'IDUB against COVID-19' project granted by the Warsaw University of Technology (Warsaw, Poland) under the program Excellence Initiative: Research University (IDUB), grant no 1820/54/201/2020.
**Conflict of Interests**
The author declares that there is no conflict of interest in the submitted manuscript.
Figure 5: Scalogram CWT after removing waves #1, #2
Figure 6: Approximating function \(f(t)\) for time series (\(y_{n}\)) |
2304.05509 | Control invariant set enhanced reinforcement learning for process
control: improved sampling efficiency and guaranteed stability | Reinforcement learning (RL) is an area of significant research interest, and
safe RL in particular is attracting attention due to its ability to handle
safety-driven constraints that are crucial for real-world applications of RL
algorithms. This work proposes a novel approach to RL training, called control
invariant set (CIS) enhanced RL, which leverages the benefits of CIS to improve
stability guarantees and sampling efficiency. The approach consists of two
learning stages: offline and online. In the offline stage, CIS is incorporated
into the reward design, initial state sampling, and state reset procedures. In
the online stage, RL is retrained whenever the state is outside of CIS, which
serves as a stability criterion. A backup table that utilizes the explicit form
of CIS is obtained to ensure the online stability. To evaluate the proposed
approach, we apply it to a simulated chemical reactor. The results show a
significant improvement in sampling efficiency during offline training and
closed-loop stability in the online implementation. | Song Bo, Xunyuan Yin, Jinfeng Liu | 2023-04-11T21:27:36Z | http://arxiv.org/abs/2304.05509v1 | Control invariant set enhanced reinforcement learning for process control: improved sampling efficiency and guaranteed stability
###### Abstract
Reinforcement learning (RL) is an area of significant research interest, and safe RL in particular is attracting attention due to its ability to handle safety-driven constraints that are crucial for real-world applications of RL algorithms. This work proposes a novel approach to RL training, called control invariant set (CIS) enhanced RL, which leverages the benefits of CIS to improve stability guarantees and sampling efficiency. The approach consists of two learning stages: offline and online. In the offline stage, CIS is incorporated into the reward design, initial state sampling, and state reset procedures. In the online stage, RL is retrained whenever the state is outside of CIS, which serves as a stability criterion. A backup table that utilizes the explicit form of CIS is obtained to ensure the online stability. To evaluate the proposed approach, we apply it to a simulated chemical reactor. The results show a significant improvement in sampling efficiency during offline training and closed-loop stability in the online implementation.
## I Introduction
In process control, model predictive control (MPC) is a standard approach to optimal control. It is formulated as a constrained optimization problem in which the safety constraints are taken into account explicitly [1]. However, for large-scale systems, MPC may suffer from high computational complexity. Reinforcement learning (RL), as one main component of machine learning, provides an alternative to MPC for optimal control and can shift the complex optimization calculations to offline training based on a model [2]. It has gained significant attention in different industries (for example, gaming [3], finance [4], energy [5]) for decision-making and control purposes.
RL is a class of optimal control algorithms that enables machines to learn an optimal policy (closed-loop control law), by maximizing future rewards through repetitive interactions with the environment [2]. It uses a trial-and-error approach to interact with the environment, allowing it to learn and find the optimal policy even in the absence of prior knowledge of the process. In addition, the consideration of future rewards in RL ensures that current decisions are beneficial in the long run. However, the standard RL approach does not incorporate safety constraints in its design and does not guarantee closed-loop stability, which limits its use in real-world applications [6]. To address these challenges, safe RL algorithms have been developed, which explicitly consider safety constraints during training and ensure closed-loop stability in the learned policy.
Safe RL is a class of RL algorithms that aims to achieve a stable closed-loop control system by taking safety constraints into account. Different approaches to designing safe RL have been summarized in the literature [6, 7, 8]. One existing approach is to consider the constrained Markov decision process (CMDP), in which a cost function is used to penalize undesired actions, transforming the original optimization problem into a new problem where both reward maximization and action cost minimization are required [9, 10]. This approach increases the probability of safe actions but without guarantees. Another approach is to use MPC to guide the RL algorithm by treating MPC as parameterized value or policy neural networks [11, 12]. However, these approaches still require solving the MPC optimization problem recursively, resulting in a high computational burden.
Sampling efficiency is another critical issue that has limited real-world applications of RL [13]. In the context of stability, an RL algorithm with low sampling efficiency requires a significant number of agent-environment interactions to achieve a stable and optimal control policy, leading to prohibitively high costs in its application. To address this challenge, an intuitive solution is to allow the agent to interact only with the controllable states of the environment. By doing so, the agent can find a policy that maintains the system within the controllable state space, achieving the stability guarantee while eliminating unnecessary interactions with uncontrollable states and improving sampling efficiency. Unfortunately, such environments are generally unavailable.
In the field of control theory, it is widely acknowledged that control invariant sets (CIS) play a crucial role in ensuring the stability of a control system [14]. These sets characterize the states within which a feedback control law is always available to maintain the system within the set [15]. Incorporating the concept of CIS in RL is expected to improve the stability guarantee and sampling efficiency by restricting the agent's interactions with the system to only controllable states. This way, the agent can find a policy that maintains the system within the CIS and achieves stability with fewer interactions with the environment. The concept of CIS has indeed been adopted in RL design to achieve closed-loop stability. The main idea is to filter or project risky actions to safe ones, typically by adding a stand-alone safety filter after the learned RL policy [16, 17, 18, 19]. However, because the filter only considers safety, the optimality that the controller is trying to achieve is not always preserved. To address this, [17] proposes embedding CIS in the last layer of the RL policy network to enable back-to-back training and achieve both safety and optimality. Due to the
challenge of obtaining a CIS for a general nonlinear system, researchers have shifted their focus towards implicit methods that utilize control barrier functions (CBF), Hamilton-Jacobi (HJ) reachability analysis, and safe backup controllers to define safety constraints and design filters indirectly [20].
Though the above algorithms take into account safety in the training of the RL, the sampling efficiency remains as a critical issue. Moreover, it is worth noting that these studies combining CIS and RL have been conducted mainly in robotics, and limited research has been carried out in process control. In the realm of process control, process systems are in general highly nonlinear, tightly interconnected and of large-scale. These features present challenges in applying the above mentioned CBF and HJ analysis based algorithms. Furthermore, control problems beyond set-point or reference trajectory tracking, such as zone tracking [14, 21] and economic optimization [22], are common in process control. These control objectives add additional complexities that make the above mentioned approaches difficult to use.
While the construction of a CIS is not a trivial task, various methods have been developed in the past decade. For example, algorithms for constructing or approximating the CIS for constrained linear systems [23, 24] and general nonlinear systems [25, 26] have been proposed. Graph-based approaches to find the outer and inner approximations of robust CIS of nonlinear systems has also been developed [27]. Over the past couple years, data-driven approaches have also been used to find the invariant sets of nonlinear systems, which approximate invariant sets using neural nets [28, 29]. These approaches facilitate the study of safe RL that utilizes a CIS explicitly.
The above considerations motivate us to study the explicit integration of RL and CIS for process control, where the CIS can serve as a state space for the RL agent to explore, safely. Minimal modification to the RL algorithms is required, while the reward function design can incorporate both economic or zone tracking objectives which are common in process control. Specifically, in this work, the CIS of a nonlinear process is assumed to be available. Then, a two-stage CIS enhanced RL is proposed to improve the sampling efficiency and guarantee the stability. The first stage involves offline training with a process model and the CIS. Due to the potential disastrous consequences of failed process control, the use of a model to pre-train the RL offline can provide a significant amount of data with strong temporal correlation and broad coverage of various scenarios. The introduction of CIS has the potential to narrow down the state space, reduce the training dataset size, and provide guidance on agent exploration. However, exhaustive training cannot guarantee that the RL agent has encountered every scenario, which may result in instability in online implementation. Hence, the second online implementation stage involves online learning when the safety constraint is violated. A new control implementation strategy is proposed to ensure closed-loop stability. The proposed approach is applied to a chemical reactor to demonstrate its applicability and efficiency.
## II Preliminaries
### _System description_
In this work, we consider a class of nonlinear processes that can be described by the following discrete-time state-space form:
\[x_{k+1}=f(x_{k},u_{k}) \tag{1}\]
where \(x\in X\subseteq\mathbb{R}^{n}\) and \(u\in U\subseteq\mathbb{R}^{m}\) denote the state and the input vectors of the process with \(X\) and \(U\) being the physical constraints on \(x\) and \(u\), \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is a nonlinear function mapping the present state to the state at the next time instant, \(k\) represents the time index.
### _Reinforcement learning_
Reinforcement learning broadly represents the class of data-driven learning algorithms in which an agent is trying to learn a closed-loop policy \(\pi(u|x)\), a conditional probability of prescribing \(u\) at given state \(x\), by interacting with the environment. The Markov decision process (MDP) is utilized to formulate the environment. The environment receives the action prescribed by the agent and provides the resulting reward and state of the system back to the RL agent. The state transition dynamics of the MDP is shown below:
\[P(r_{k+1},x_{k+1}|x_{k},u_{k}) \tag{2}\]
where \(x_{k}\) denotes the current state of the environment, \(u_{k}=\pi(x_{k})\) is the action prescribed by the agent based on the learned policy, \(r_{k+1}\) represents the reward used for criticizing the action, \(x_{k+1}\) represents the state sampled at the next sampling time instant, \(P\) denotes the conditional probability of the state transition.
As in [2], the RL problem can be formulated as the following:
\[\pi^{*}=\operatorname*{argmax}_{\pi}\mathbb{E}_{\pi}[G_{k}|x_{k},u_{k}] \tag{3}\]
where \(G_{k}\) denotes the return accumulating the reward \(r\) in long run. The optimal policy \(\pi^{*}\) is found when the expected return following the such policy is maximized.
In this work, the environment dynamics that describe the transition from \(x_{k}\) to \(x_{k+1}\) is represented by the nonlinear system of Eq. (1). Note that in system (1), uncertainty is not considered for brevity.
### _Control invariant sets_
A control invariant set of a system is a set of states in which the system can stay inside all the time by following a feedback control law. The definition of the control invariant set is given below:
**Definition 1** (c.f. [15]): _The set \(R\subseteq X\) is said to be a control invariant set for system (1) if for any \(x_{k}\in R\), there exists an input \(u_{k}\in U\) such that \(x_{k+1}\in R\)._
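For intuition, Definition 1 can be checked on finite samples by brute force, as in the following sketch (Python, added for illustration; `f` stands for the map in (1), `in_set` for a membership test of the candidate set, and the state and input samples are placeholders). This is only a sampled necessary check, not a construction method; dedicated algorithms for computing or approximating CISs are discussed in the Introduction.
```
def is_control_invariant(f, states, inputs, in_set):
    # Brute-force check of Definition 1 on finite samples: every sampled state
    # x in the candidate set must admit at least one sampled input u such that
    # the successor f(x, u) stays in the set.
    for x in states:
        if not in_set(x):
            continue
        if not any(in_set(f(x, u)) for u in inputs):
            return False   # found a state with no admissible input keeping it in the set
    return True
```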
In the control literature, CISs play an important role in ensuring the stability of the closed-loop systems. For example, CISs are commonly used in MPC designs as a terminal constraint for achieving guaranteed stability and feasibility [1, 30, 14].
### _Problem formulation_
Standard RL does not consider safety constraints, which obstructs its application. Also, conventional RL typically requires a significant amount of data for training. A CIS inherently provides a stable region of operation and may be integrated into RL offline and online training to ensure closed-loop stability. In addition, the introduction of the CIS can narrow down the state space, reduce the training dataset size, and provide guidance on agent exploration.
The objective of this work is to propose a CIS enhanced RL and its training method to guarantee the closed-loop stability and improve the sampling efficiency during RL training, by incorporating the CIS knowledge into RL. The RL optimization can be described as follows:
\[\begin{split}\pi^{*}=\underset{\pi}{\text{argmax}}& \mathbb{E}_{\pi}[G_{k}|x_{k},u_{k}]\\ \text{s.t.}& x_{k}\in R\\ & u_{k}\in U\end{split} \tag{4}\]
where the state constraint as well as the input constraint are considered.
## III Proposed approach
In this section, we present the proposed CIS enhanced RL, which includes both offline training and online training. The first step is to train the RL with the CIS information offline to achieve a near-optimal policy. The incorporation of the CIS in the offline training can significantly improve the sampling efficiency since the amount of data needed for training is reduced; this will be demonstrated in the simulations. While the CIS is used in offline training, the offline trained policy does not guarantee the closed-loop stability. In order to ensure the closed-loop stability, the RL is further trained during its online implementation, and a new control implementation strategy is proposed to ensure that the applied control actions maintain the closed-loop stability. Figure 1 illustrates the proposed approach; the differences between the proposed approach and fundamental RL are highlighted in blue. The details of the two steps are explained below.
### _Offline training_
It is assumed that a CIS of system (1), \(R\), is available before the training of the RL. It is preferred that the CIS is the maximum CIS of (1) within the physical constraint \(X\), which ensures that the RL can explore within the biggest feasible and stable operating region. Meanwhile, we note that the maximum CIS is not a requirement in the proposed approach, any CIS can be used in the proposed approach.
In the proposed RL offline training, the system model (1) is used in the training of the RL. In this work, we focus on demonstrating the concept of the proposed approach and assume that there is no model uncertainty. The CIS information is used in two ways. First, the CIS is used to penalize the RL agent when it drives the system state outside of the CIS. This can be achieved by designing the reward function appropriately. A discrete reward function shown in Eq. (5) may be used.
\[r(x_{k},u_{k})=\begin{cases}r_{1},&\text{if }x_{k+1}\in R\\ r_{2}&\text{otherwise}\end{cases} \tag{5}\]
where \(R\subset X\) denotes the CIS of system (1) used in the RL training, \(r_{1}\in\mathbb{R}\) and \(r_{2}\in\mathbb{R}\) denote the reward values associated with the prescribed action \(u_{k}\) based on the current state \(x_{k}\). Note that \(r_{1}\) should be greater than \(r_{2}\) (\(r_{1}>r_{2}\)) to guide the RL to prescribe control actions that can maintain the system state within the CIS. Specifically, at the time instant \(k\), the system is at state \(x_{k}\). When the RL prescribes the control action \(u_{k}\), it is sent to the model and the model will propagate into the next time instant and obtain \(x_{k+1}\). If \(x_{k+1}\) is within the CIS (\(x_{k+1}\in R\)), the RL will receive a higher reward \(r_{1}\); if \(x_{k+1}\) is outside of the CIS (\(x_{k+1}\notin R\)), indicating the prescribed action resulting an unstable operation, the RL receives a relatively lower reward (or penalty) \(r_{2}\).
Fig. 1: Flow diagram of the proposed RL training approach.
Moreover, the CIS is used for initial state sampling in RL offline training. In RL offline training, the RL needs to sample the initial state of the system randomly many times. Typically, the RL is restricted to sample the initial condition within the physical constraint set \(X\). In the proposed approach, instead of using \(X\), we propose to sample the initial state \(x_{0}\) of the system within the CIS \(R\). If the CIS is the maximum CIS, and if the system starts from an initial state outside of the CIS, the RL is not able to stabilize the system and drive it back into the CIS. Such a case does not provide much information for learning a policy that ensures stability. If the system starts within the CIS, then the RL is able to find a control action to stabilize the system and learn the optimal and non-optimal actions based on the reward function (5). Therefore, the sampling efficiency can be improved by restricting the RL to sample initial states within the CIS. This will be demonstrated in the simulations in the next section.
Another technique we propose in the offline training is to reset the state to its previous value when the state is outside of the CIS. Assume that at time \(k\), \(x_{k}\in R\). The RL agent prescribes a control action \(u_{k}\) and drives the system state outside of the CIS; that is, \(x_{k+1}\notin R\). In such a case, the RL will get a lower reward \(r_{2}\) according to (5). Once the system state is outside of the CIS, there is no control action that can drive the system back to the CIS (if the CIS is the maximum one within \(X\)) and the system becomes unstable; hence, further interaction between the RL and the system will not bring much useful information towards learning the optimal control law. Therefore, we propose to reset the state to its previous value; that is, set \(x_{k+1}=x_{k}\) and then continue the training process. By implementing this state resetting technique, the RL learns from this failure experience and gets further chances to learn at the same state \(x_{k}\) towards a stable and optimal policy.
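The offline ingredients described above (CIS-restricted initial sampling, the reward (5), and the state reset) can be summarized in the following episode sketch (Python, added for illustration; the model `f`, the CIS membership test `in_cis`, the initial-state sampler and the policy are placeholders, and the reward values \(r_{1}\), \(r_{2}\) are passed as arguments).
```
def offline_episode(f, policy, sample_x0_in_cis, in_cis, n_steps=200,
                    r1=1.0, r2=-1.0):
    # One offline training episode with the CIS ingredients described above:
    # initial state sampled inside the CIS, reward (5), and reset of the state
    # to its previous value whenever the prescribed action leaves the CIS.
    x = sample_x0_in_cis()
    transitions = []                       # (x, u, r, x_next) tuples for the RL update
    for _ in range(n_steps):
        u = policy(x)
        x_next = f(x, u)                   # one step of model (1)
        r = r1 if in_cis(x_next) else r2   # reward function (5)
        transitions.append((x, u, r, x_next))
        x = x_next if in_cis(x_next) else x   # state reset technique
    return transitions
```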
### _Online training and stability guaranteed implementation strategy_
After offline training, the RL learns a policy and the RL policy is saved as a pre-calculated controller for online implementation. However, due to the sampling nature of the RL training, it is impossible for the offline trained RL agent to encounter all situations. Therefore, the offline learned policy may not guarantee the closed-loop stability. To address this issue, we propose to implement the RL policy using a stability guaranteed strategy and to further train the RL policy online when a new situation is encountered. As shown in Figure 1, a safety supervisor is placed in between the RL agent and the environment. The detailed description of the safety supervisor is shown below.
Let us consider the current sampling time \(k\). With the state feedback \(x_{k}\), the RL prescribes the control action \(u_{k}\) according to the learned policy. Based on \(u_{k}\) and the system model (1), the state \(x_{k+1}\) at the next sampling time is predicted. If the predicted state is within the CIS (\(x_{k+1}\in R\)), then the control action \(u_{k}\) is actually applied to the system; if the predicted state is outside of the CIS (\(x_{k+1}\notin R\)), then the RL is switched back to the training mode and the policy is updated with the new interaction experience \((x_{k},u_{k},r_{k+1},x_{k+1})\). The updated policy then prescribes an updated action based on the state \(x_{k}\). Unless the new policy guarantees that the predicted state \(x_{k+1}\) is within the CIS, the agent keeps learning at the current state \(x_{k}\) until a pre-determined maximum number of iterations (\(maxltr\)) is reached. This online training/updating approach can significantly enhance the safety of the RL. However, the closed-loop stability is still not guaranteed. It is possible that within the maximum number of iterations \(maxltr\), the online updating of the RL does not converge to a stable action (i.e., the RL cannot find a stable action for the particular state \(x_{k}\) within \(maxltr\) online updates). It should be pointed out that when \(maxltr\) is very large, the online training is expected to find a safe action for every state it encounters since the CIS guarantees the existence of a safe action for all the states within it.
To address the above issue and to guarantee the closed-loop stability, we propose the use of a backup table that saves safe actions for the states for which the above online training fails to find a safe action within \(maxltr\) iterations. The safe actions can be found using another stabilizing but not necessarily optimal control law, by sampling the control action space randomly, or by leveraging the information contained in the explicit form of the CIS. Note that in some CIS construction algorithms [31], the corresponding safe action space for each state is also found, which can be taken advantage of in creating the backup table. This approach provides a safety guarantee for the RL.
The proposed stability guaranteed implementation strategy and online training is summarized in the following algorithm:
```
Input: \(x_{k}\), \(k\), \(maxltr\)
Output: Safe \(u_{k}\)
\(notSafe\gets True\), \(update=1\);
while \(notSafe\) do
    Calculate \(u_{k}\) at \(x_{k}\) based on the learned RL policy;
    Based on the model and \(u_{k}\), predict \(x_{k+1}\);
    if \(x_{k+1}\in R\) then
        \(notSafe\gets False\);
    else if \(update\leq maxltr\) then
        Update RL policy with \((x_{k},u_{k},r_{k+1},x_{k+1})\);
        \(update\gets update+1\);
    else
        Get safe action \(u_{k}\) from the backup table;
        \(notSafe\gets False\);
    end if
end while
Apply \(u_{k}\) to system (1) and obtain \(x_{k+1}\);
Reinitialize the algorithm with \(k\gets k+1\)
```
**Algorithm 1** Safety supervisor in online implementation for stability guarantee
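A compact rendering of Algorithm 1 is sketched below (Python, added for illustration; the agent interface `act`/`update`, the reward helper and the backup-table lookup `backup_table(x)` are placeholders for the actual implementations).
```
def safe_action(x, agent, f, in_cis, backup_table, max_itr):
    # Safety supervisor of Algorithm 1: apply the RL action only if the one-step
    # model prediction stays in the CIS; otherwise retrain online on the unsafe
    # transition, and after max_itr failed updates fall back to the backup table.
    u = agent.act(x)
    x_pred = f(x, u)                       # predict x_{k+1} with model (1)
    updates = 0
    while not in_cis(x_pred):
        if updates >= max_itr:
            return backup_table(x)         # guaranteed-safe fallback action
        r = agent.reward(x, u, x_pred)
        agent.update((x, u, r, x_pred))    # online retraining on the unsafe step
        updates += 1
        u = agent.act(x)                   # re-prescribe with the updated policy
        x_pred = f(x, u)
    return u
```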
In the algorithm, the parameter \(maxltr\) can be defined by the user to balance between the computational complexity and optimality of the RL agent. A larger value will allow the agent to be trained on a state for a longer time, which can potentially result in a better safety performance. However, this comes at the cost of increased online computational complexity. On the other hand, a small value, or even zero, will ensure stability and online implementation feasibility given that the backup table is designed well. However, it may not achieve optimal performance when relying on the backup plan for safe actions, because the selection among safe actions at the current state does not consider the optimality.
One more factor to consider in picking \(maxltr\) is the sampling time of the system (the time interval between two discrete time instants). It should be ensured that a stabilizing control action can be prescribed within one sampling time, which in turn limits the maximum value of \(maxltr\).
Note that in the above discussion, the primary focus was on maintaining the system state within the CIS (\(x_{k}\in R\)), which is also the concept of stability considered in this work. Since \(R\subset X\), it also ensures that the state constraint (\(x_{k}\in X\)) is satisfied. A rigorous stability proof of the proposed design is omitted for brevity. One interesting feature of the proposed approach is that maintaining \(x_{k}\in R\) is handled through the incorporation of the CIS in the offline training and the online implementation. This provides flexibility in the proposed approach to incorporate other control objectives such as set-point tracking, zone tracking or economic optimization in the design of the reward function \(r\).
Note also that in this work, model uncertainty is not considered. When model uncertainty presents, the proposed design can be adapted to account for uncertainty, for example, by considering a robust CIS.
## IV Simulation results and discussion
### _Process description_
In order to study the sampling efficiency and the closed-loop stability guarantee of the proposed RL training and implementation approach, its application to the control of a continuously stirred tank reactor (CSTR) is considered in this section. The reaction taking place inside the reactor is irreversible and exothermic, with a first-order reaction rate. The CSTR is fitted with a cooling jacket for maintaining the temperature of the reaction mixture. The mathematical model contains two nonlinear ordinary differential equations with the following representation [14]:
\[\frac{dc_{A}}{dt}=\frac{q}{V}(c_{Af}-c_{A})-k_{0}exp(-\frac{E}{RT})c_{A}\]
\[\frac{dT}{dt}=\frac{q}{V}(T_{f}-T)+\frac{-\Delta H}{\rho c_{p}}k_{0}exp(- \frac{E}{RT})c_{A}+\frac{UA}{V\rho c_{p}}(T_{c}-T)\]
where \(c_{A}\ (mol/L)\) and \(T\ (K)\) denote the concentration of the reactant and the temperature of the reaction mixture, respectively. \(c_{Af}\ (mol/L)\) and \(T_{f}\ (K)\) represent the concentration of the reactant and the temperature of the inlet stream. \(T_{c}\ (K)\) is the temperature of the coolant stream used for cooling the reactor. The remaining parameters are summarized in Table I. The parameter \(q\) is the inlet and outlet flow rate of the reactor, \(V\) is the volume of the reaction mixture, \(k_{0}\) is the pre-exponential factor of the Arrhenius rate constant, \(E\) represents the activation energy required by the reaction, \(R\) denotes the universal gas constant, \(\Delta H\) is the change of the enthalpy used for approximating the change of internal energy of the reaction, \(\rho\) is the density of the reaction mixture, \(c_{p}\) denotes the specific heat capacity of the reaction mixture, and \(UA\) is the heat transfer coefficient between the reactor and the cooling jacket.
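For reference, the right-hand side of the two ODEs above can be coded directly; the sketch below (Python with NumPy, added for illustration) takes the Table I parameter values, which are not reproduced here, through a dictionary `p`. In the RL environment, this continuous-time model is integrated over one sampling interval to obtain the discrete-time map (1).
```
import numpy as np

def cstr_rhs(x, Tc, p):
    # x = (cA, T); Tc is the coolant temperature (the manipulated input);
    # p holds the Table I parameters: q, V, k0, E, R, dH, rho, cp, UA, cAf, Tf.
    cA, T = x
    rate = p["k0"] * np.exp(-p["E"] / (p["R"] * T)) * cA
    dcA = p["q"] / p["V"] * (p["cAf"] - cA) - rate
    dT = (p["q"] / p["V"] * (p["Tf"] - T)
          + (-p["dH"]) / (p["rho"] * p["cp"]) * rate
          + p["UA"] / (p["V"] * p["rho"] * p["cp"]) * (Tc - T))
    return np.array([dcA, dT])
```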
In the following closed-loop control problem study, \(c_{A}\) and \(T\) are the two states of the system; \(T_{c}\) is the manipulated variable. They are subject to the following physical constraints:
\[0.0\leq c_{A}\leq 1.0 \tag{6}\] \[345.0\leq T\leq 355.0\] (7) \[285.0\leq T_{c}\leq 315.0 \tag{8}\]
The control objective is to train an RL policy to maintain a stable operation of the CSTR such that the two states are maintained within the physical constraints shown in (6) and (7) all the time.
The maximum CIS of the CSTR is calculated using the graph-based algorithm developed in [27]. The physical constraints and the calculated maximum CIS over the state space are shown in Figure 2. According to Figure 2, the CIS spans over the entire temperature space and shrinks over the concentration space. Hence, if the concentration of the reactant is lower than the minimum value of \(c_{A}\) in CIS (the top left point of CIS), no matter how \(T_{c}\) is manipulated by the controller, the system becomes unstable. The same observation applies to when the system has a concentration that is higher than the maximum value of \(c_{A}\) in CIS (bottom right point of CIS). Since the calculated CIS is the maximum one, when the system is outside of the CIS, there is no feedback control law that is able to bring the system back into CIS again and the system state will eventually diverge. This implies that the physical constraints will be violated.
Fig. 2: The physical constraints and the maximum CIS of the CSTR.
### _RL training setup and results_
In the training of the proposed RL, the maximum CIS is used. Proximal policy optimization (PPO) is used as the optimization algorithm in RL training. Even though the CSTR is a continuous process, the RL agent is trained in an episodic setting. The reason is that when the system reaches its steady state, the collected data (steady-state transitions) will not provide any new information to the agent. Although the exploration-exploitation mechanism embedded in the PPO algorithm may prescribe actions that enhance exploration, meaning that at a given steady state PPO will not necessarily prescribe the corresponding steady-state input, the probability of such exploration becomes extremely low. Hence, in order to favor exploration, once the agent has interacted with the environment for a user-defined number of time steps, the episode is terminated and a new initial state is sampled. During the experiment, 10,000 episodes and 200 steps per episode were used to train the RL agent. The batch size was defined as 10 episodes, meaning that the RL agent would be updated only when all 10 episodes were finished and the RL would learn from the data of the 10 episodes at once. The learning rate was defined as \(10^{-4}\) and the discount factor was 0.99. It was noticed that the offline training took 2,000,000 steps overall, which might be computationally expensive. However, the trained policy can be implemented online as a pre-calculated function, which requires fewer online computational resources.
The consideration of the CIS in the proposed RL training setup is reflected in two steps: the sampling of the initial states and the design of the reward function. According to Section III-A, since the largest CIS is known, the initial state has to be inside the CIS to ensure that the subsequent states have a chance to remain inside the CIS. Hence, all 10,000 initial states for the 10,000 episodes were sampled within the CIS. In addition, because it is undesirable for the system to enter the space outside of the CIS, the following discrete reward function was proposed:
\[r(x_{k},u_{k})=\begin{cases}10,000,&\text{if }x_{k+1}\in\textit{CIS}\\ -1,000&\text{otherwise}\end{cases} \tag{9}\]
Based on the aforementioned RL training setup, 20 offline training runs were executed in parallel and the learning performance was calculated as the average over the 20 runs. The average training reward plot, representing the learning performance, is shown in Figure 3. The orange horizontal line represents the maximum score each episode can achieve if the RL agent maintains the system within the CIS for all 200 steps; the maximum score for each episode is \(200\times 10,000=2\times 10^{6}\). The mean curve was calculated based on all 20 runs and the blue shaded area shows one standard deviation. Note that, in order to smooth out the fluctuations of the scores among episodes, a running average, which recursively calculates the average of the scores of the past 100 episodes, was used to plot the figure. As the RL agent interacted with the environment for more episodes, the episode score increased, meaning the RL agent was able to learn from the training. From the beginning to around the 2,000th episode, the RL agents learned faster, with a larger variance among the 20 runs. After that, the learning slowed down and gradually reached a plateau with a decreased variance.
### _RL testing setup and results_
Since, in the RL training approach proposed in Section III, an RL policy is trained offline before interacting with the environment online, we can test the performance of the offline RL policy. The policy was tested with the model for 10,000 episodes and each episode lasted 200 steps. During the tests, the reward value was collected for evaluation purposes and not for learning. Note that in all these tests, the control actions prescribed by the RL agents were applied directly to the CSTR and the proposed stability guaranteed online implementation strategy was not used.
Table II summarizes the failure rates of the 20 trained RL policies. A failure is defined as an episode in which the policy cannot maintain the process within the CIS. According to the table, Run 18 was able to achieve a 0.02% failure rate. In other words, out of 10,000 episodes, there were only 2 episodes in which the agent failed to keep the system inside the CIS. Run 12 had the highest failure rate of 13.90%. Overall, the proposed CIS enhanced offline RL training was able to achieve an average failure rate of 8.42%.
Fig. 3: Average training score of proposed RL design
### _Study of stability guarantee_
After the offline RL training, one RL agent was saved and treated as the pre-calculated feedback controller. Then, following the algorithm proposed in Section III-B, the agent was implemented online. The agent interacted with the environment for 10,000 episodes and each episode lasted 200 steps. In order to examine the benefits of the proposed online implementation, RL Run 1 obtained from Section IV-B was picked, because it showed near-average test performance.
First of all, with the proposed online implementation strategy, the agent was able to maintain the system within the CIS for all 10,000 episodes. This is expected since the proposed online implementation is stability guaranteed.
Second, since the agent was retrained during its implementation, it was expected that the retrained RL agent would achieve a better performance in terms of stability. Therefore, two agents, one from offline training and one from online implementation, were compared and tested on one set of 10,000 initial states. Since the same set of initial states was used, the RL agent obtained after offline training achieved a 7.91% failure rate, which is the same as the value shown in Table II. The RL agent obtained after online training reached a 0.02% failure rate. Comparing these values shows that the proposed online implementation not only ensures stability but also yields a better RL agent.
### _Study of sampling efficiency_
In order to study and quantify the benefits brought by utilizing the CIS in RL offline training, the sampling efficiency was studied. The study was conducted by comparing the results shown in Section IV-B with an RL training setup that did not utilize the CIS information. Hence, in this RL-without-CIS training, the 10,000 initial states were sampled within the physical constraints and the reward condition in Eq. (9) was extended from the CIS to the whole physical constraint set. In addition, the state was reset when the system state was outside of the physical constraints. Other parameters remained the same.
Since the two RL training setups had different reward functions, it was impossible to compare their training plots directly. Hence, they were tested by comparing the failure rates introduced in Section IV-C on the same 10,000 initial states. Table III shows that the RL with CIS was able to achieve an 8.42% failure rate over 10,000 episodes while the RL without CIS could only achieve 34.30%. Therefore, using the same amount of training data, utilizing the knowledge of the CIS facilitates the learning process. Both RL agents were then trained with 20,000, 30,000, 40,000 and 50,000 episodes and tested with the same 10,000 initial states as before. The failure rate of the RL with CIS had a minor improvement to 4.84% and that of the RL without CIS had a relatively bigger improvement to 14.58%. However, the failure rate of the RL without CIS trained with 50,000 episodes (14.58%) was still higher than that of the RL with CIS trained with only 10,000 episodes (8.42%). Hence, the utilization of the CIS improves the sampling efficiency of the RL training process.
## V Concluding remarks
A CIS enhanced RL training and online implementation approach was proposed to obtain a stability guaranteed RL implementation. The offline training stage incorporated the CIS in the reward function design, the initial state sampling and the state reset technique. A stability guaranteed online implementation strategy was proposed for the implementation of the offline trained RL, and the RL was also retrained when a new situation was encountered. The approach was applied to a two-dimensional nonlinear CSTR system. The results showed that the offline training stage was able to provide an agent with a lower failure rate compared to RL without the CIS. Also, the sampling efficiency was significantly improved as the CIS was utilized in offline training. The online implementation stage ensured stability and resulted in a better RL agent in terms of maintaining the system inside the CIS.
|
2308.05804 | Search for Electroweakinos in R-Parity Violating SUSY with Long-Lived
Particles at HL-LHC | We investigate the R-parity violating (RPV) supersymmetric (SUSY) model at
the High-Luminosity Large Hadron Collider (HL-LHC) in the context of compact
muon solenoid (CMS) experiment assuming a total integrated luminosity of
$\mathcal{L}=3000~\text{fb}^{-1}$ at $\sqrt{s}=$ 14 TeV. We focus on the pair
production of electroweakinos, specifically, $\chi_2^0$ and $\chi_1^{\pm}$ in
wino and higgsino states in a particular scenario where $\chi_2^0$ and
$\chi_1^{\pm}$ decay into a Higgs boson and W boson, respectively, along the
long-lived lightest supersymmetric particle (LSP), $\chi_1^0$, which decays to
three quarks via $\lambda^{''}$ RPV couplings leading to the prompt as well as
displaced signatures in the final state. To select events at the level-1 (L1)
trigger system, we employ dedicated and standard triggers followed by an
offline analysis integrating information from the tracker, electromagnetic
calorimeter (ECAL) and minimum ionising particle (MIP) timing detector (MTD).
We observe that wino-like $\chi_2^0/\chi_1^{\pm}$ with a mass of 1900 GeV and
$\chi_1^0$ with a mass greater than 800 GeV can be probed across a decay length
ranging from 1 cm to 200 cm. In the case of higgsino-like pair production of
$\chi_2^0/\chi_1^{\pm}$, we can probe $\chi_2^0/\chi_1^{\pm}$ with a mass of
1600 GeV, and $\chi_1^0$ with a mass greater than 700 GeV, across a decay
length range of 1 cm to 200 cm. | Biplob Bhattacherjee, Prabhat Solanki | 2023-08-10T18:01:04Z | http://arxiv.org/abs/2308.05804v1 | # Search for Electroweakinos in R-Parity Violating SUSY with Long-Lived Particles at HL-LHC
###### Abstract
We investigate the R-parity violating (RPV) supersymmetric (SUSY) model at the High-Luminosity Large Hadron Collider (HL-LHC) in the context of compact muon solenoid (CMS) experiment assuming a total integrated luminosity of \({\cal L}=3000\) fb\({}^{-1}\) at \(\sqrt{s}=14\) TeV. We focus on the pair production of electroweakinos, specifically, \(\chi^{0}_{2}\) and \(\chi^{\pm}_{1}\) in wino and higgsino states in a particular scenario where \(\chi^{0}_{2}\) and \(\chi^{\pm}_{1}\) decay into a Higgs boson and W boson, respectively, along the long-lived lightest supersymmetric particle (LSP), \(\chi^{0}_{1}\), which decays to three quarks via \(\lambda^{{}^{\prime\prime}}\) RPV couplings leading to the prompt as well as displaced signatures in the final state. To select events at the level-1 (L1) trigger system, we employ dedicated and standard triggers followed by an offline analysis integrating information from the tracker, electromagnetic calorimeter (ECAL) and minimum ionising particle (MIP) timing detector (MTD). We observe that wino-like \(\chi^{0}_{2}/\chi^{\pm}_{1}\) with a mass of 1900 GeV and \(\chi^{0}_{1}\) with a mass greater than 800 GeV can be probed across a decay length ranging from 1 cm to 200 cm. In the case of higgsino-like pair production of \(\chi^{0}_{2}/\chi^{\pm}_{1}\), we can probe \(\chi^{0}_{2}/\chi^{\pm}_{1}\) with a mass of 1600 GeV, and \(\chi^{0}_{1}\) with a mass greater than 700 GeV, across a decay length range of 1 cm to 200 cm.
## 1 Introduction
With a growing and urgent need to search for physics beyond the standard model (BSM), there is an ongoing effort to look for signatures of new physics in the long-lived sector on both the phenomenological and experimental sides. Numerous phenomenological studies focusing on a wide range of BSM models and signatures have been performed to search for long-lived particles (LLPs); references to some of these studies can be found here [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. On the experimental side, LHC's two general-purpose detectors, ATLAS and CMS, have been actively searching for displaced signatures at colliders. Studies done at ATLAS and CMS look for a wide range of experimental signatures using vertex and non-vertex-based methods. For vertex-based searches, signatures include displaced jets, vertices, and leptons. On the other hand, non-vertex-based searches feature signatures such as emerging jets, trackless jets, disappearing tracks, non-pointing photons, and jets with low electromagnetic energy fraction. CMS [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36] and ATLAS [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56] have extensively documented these studies. Along with CMS and ATLAS, LHCb has also carried out numerous LLP searches involving displaced jets, dark photons and displaced leptons [57; 58; 59; 60; 61; 62]. Significant efforts are also being made in hardware development to improve the detection of LLPs at the large lifetime frontier. This includes the development of new detectors like FASER [63], MATHUSLA [64; 65] and CODEX-b [66; 67], and hardware specifically designed for the search for displaced physics at the LHC's general-purpose detectors, along with the application of innovative analysis techniques that utilise a variety of information from the different sub-detectors at the HL-LHC. There are several proposals for dedicated detectors for
LLP searches at future colliders like FCC-ee [68; 69]. For FCC-hh, a transverse detector, DELIGHT [19], and a forward detector, FOREHUNT [70] have been proposed.
In the context of LLPs, Supersymmetry (SUSY) [71; 72; 73] has been one of the most studied BSM theories. All standard model particles are assigned \(R_{p}=1\) while their super-partners are assigned \(R_{p}=-1\). In the MSSM, conservation of R-parity [74] ensures that SUSY particles are produced in pairs, that each sparticle decays to an odd number of sparticles, and that the lightest SUSY particle (LSP) is stable. Numerous studies have investigated the phenomenological implications of R-parity conserving (RPC) scenarios [75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91]. Although R-parity conservation avoids unwanted B- and L-violating effects [92; 93] such as proton decay, it is not strictly required, since the presence of other symmetries can allow R-parity violation (RPV) while still forbidding proton decay [94; 95; 96; 97].
A viable MSSM superpotential comprising gauge-invariant and R-parity-violating terms [98; 99] can be constructed as follows-
\[W=\mu_{i}H_{u}L_{i}+\frac{1}{2}\lambda_{ijk}L_{i}L_{j}E_{k}^{c}+\frac{1}{2} \lambda^{{}^{\prime}}_{ijk}L_{i}Q_{j}D_{k}^{c}+\frac{1}{2}\lambda^{{}^{\prime \prime}}_{ijk}U_{i}^{c}D_{j}^{c}D_{k}^{c} \tag{1}\]
where \(\lambda_{ijk}\), \(\lambda^{{}^{\prime}}_{ijk}\) and \(\lambda^{{}^{\prime\prime}}_{ijk}\) are the RPV Yukawa couplings, with i, j, k being generation indices. E, U and D represent the superfields for the right-handed lepton, up-type quark and down-type quark, respectively, L and Q correspond to the left-handed lepton and quark superfields, and \(H_{u}\) represents the superfield for the up-type Higgs. In the current study, we focus only on the \(\lambda^{{}^{\prime\prime}}_{ijk}\) Yukawa coupling, through which a sparticle decays to quarks; if the coupling is very small, the sparticle becomes long-lived. Examples of such LLPs in the context of the MSSM are the electroweakinos (\(\chi^{\pm}_{1}\), \(\chi^{0}_{2}\) and \(\chi^{0}_{1}\)) and gluinos. Various experiments have set very small upper limits on the RPV couplings, which lead to SUSY particles with long lifetimes. In [100], where bounds on \(\lambda^{{}^{\prime\prime}}_{ijk}\) are calculated from double nucleon decay into two kaons, \(\lambda^{{}^{\prime\prime}}_{112}\) greater than \(10^{-15}R^{-5/2}\) is excluded, where R represents the ratio between the hadronic and supersymmetric scales and can vary from \(10^{-3}\) to \(10^{-6}\). Accordingly, the upper bound on \(\lambda^{{}^{\prime\prime}}_{112}\) can vary from a value as low as \(10^{-7}\) up to 1. An indirect bound on \(\lambda^{{}^{\prime\prime}}_{113}\) comes from neutron oscillations, where \(\lambda^{{}^{\prime\prime}}_{113}\) greater than \(10^{-4}\) is excluded for \(m_{\tilde{q}}=\) 100 GeV [101].
Several displaced-jet searches have been performed at CMS and ATLAS specifically to set exclusion limits on the mass, lifetime and production cross-section of LLPs decaying to jets, assuming different SUSY models. CMS has conducted studies on RPC SUSY scenarios involving LLPs, setting constraints on their production. Detailed results and models are elaborated in reference [33].
For the RPV SUSY model where gluinos are pair-produced, with each gluino decaying to a top, bottom and strange quark through the \(\lambda^{{}^{\prime\prime}}_{323}\) type UDD coupling, CMS rules out gluino pair-production cross-sections exceeding 0.1 fb when \(c\tau\) ranges between 3 and 1490 mm and \(m_{\tilde{g}}\) is 2400 GeV. For \(c\tau\) between 3 mm and 1000 mm, gluino masses up to 2500 GeV are excluded [33]. CMS also studies two other RPV models where top squarks are pair-produced and each squark subsequently decays to a lepton and a bottom or a down-type quark via the \(\lambda^{{}^{\prime}}_{x33}\) or \(\lambda^{{}^{\prime}}_{x13}\) LQD type RPV coupling. For the RPV model with the \(\lambda^{{}^{\prime}}_{x13}\) LQD type coupling, stop production cross-sections above 0.1 fb are excluded for \(c\tau\) between 8 mm and 160 mm for \(m_{\tilde{t}}=\) 1600 GeV. For \(c\tau\) between 5 mm and 240 mm, top squark masses up to 1600 GeV are excluded [33].
For the RPV model with the \(\lambda^{{}^{\prime}}_{\chi 33}\) LQD type coupling, stop production cross-sections exceeding 0.1 fb are excluded for \(c\tau\) between 7 mm and 220 mm for \(m_{\tilde{t}}=1600\) GeV. Top squark masses up to 1600 GeV are excluded for \(c\tau\) between 3 mm and 360 mm [33]. In another CMS study of a nonholomorphic RPV coupling, where top squarks undergo pair production and each then decays to two down-type anti-quarks, production cross-sections exceeding 0.1 fb are ruled out for \(c\tau\) ranging between 3 mm and 820 mm for a top squark mass \(m_{\tilde{t}}=1600\) GeV. Additionally, for \(c\tau\) values between 2 mm and 1320 mm, top squark masses up to 1600 GeV are excluded [33].
A recent study performed at the ATLAS experiment has set very stringent exclusion limits on the masses of displaced electroweakinos in two benchmark scenarios of LLPs decaying to jets via a UDD-type RPV coupling [56]. In the first scenario, electroweakinos are pair-produced in a pure higgsino state, which includes four possible combinations of electroweakinos: \(\chi^{\pm}_{1}\chi^{0}_{2}\), \(\chi^{0}_{2}\chi^{0}_{1}\), \(\chi^{+}_{1}\chi^{-}_{1}\) and \(\chi^{\pm}_{1}\chi^{0}_{1}\), while the other scenario involves pair production of gluinos (\(\tilde{g}\)), where each gluino decays promptly to a long-lived neutralino and a quark and anti-quark pair with 100% branching ratio. In each scenario, the electroweakinos decay to light-flavour quarks via the \(\lambda^{{}^{\prime\prime}}\) coupling with 100% branching ratio. Electroweakinos with masses less than 1500 GeV are excluded for mean proper lifetimes between 0.03 ns (\(c\tau=0.9\) cm) and 1 ns (\(c\tau=30\) cm) for pair-produced electroweakinos, while electroweakinos with masses less than 1500 GeV are excluded for mean proper lifetimes between 0.02 ns (\(c\tau=0.6\) cm) and 4 ns (\(c\tau=120\) cm) for electroweakinos produced through the decay of gluinos with a mass of 2.4 TeV. In the context of the present analysis concerning pair-produced electroweakinos, the limits weaken as the decay length of the LLPs increases above 30 cm.
In conclusion, based on the displaced searches performed at both CMS and ATLAS, we observe that the exclusion limits set on the masses of displaced gluinos are already quite high, with gluino masses up to 2.5 TeV excluded at CMS [33]. However, the limits imposed on the masses of displaced electroweakinos are moderate, and these states can still be probed at future colliders like the HL-LHC [102]. It is also important to highlight that while the limits are stronger for LLPs with smaller lifetimes, the limits placed on the electroweakinos are considerably more lenient for highly displaced ones. For example, in the scenario described in [56], where electroweakinos are pair-produced with a decay length of about 500 cm, the excluded electroweakino mass reduces from 1500 GeV to roughly 1050 GeV.
In this paper, we exclusively focus on the CMS detector at the HL-LHC, one of the general-purpose detectors that will undergo several major hardware and software upgrades. At the HL-LHC, the peak instantaneous luminosity is set to rise to \(5\times 10^{34}\) (\(7.5\times 10^{34}\)) cm\({}^{-2}\)s\({}^{-1}\), with each \(pp\) collision witnessing 140 (200) pile-up (PU) interactions. The HL-LHC is projected to record data corresponding to an integrated luminosity of 3000 (4000) fb\({}^{-1}\) during its lifetime. In order to deal with the increased PU and maintain the optimal physics performance of the detectors, several hardware upgrades will take place, starting with the upgrade of the trigger and data acquisition (DAQ) systems. With the upgrade of both the inner
and outer tracker and the implementation of Field Programmable Gate Arrays (FPGA), there will be a significant overhaul in the data acquisition process at level-1 (L1) of the trigger system [103]. This upgrade enables the availability of tracking information at L1. Additionally, calorimeter information from ECAL and HCAL will also be made available at L1 [103]. The improved data acquisition and processing architecture at L1 will make it possible to implement particle flow and machine learning techniques, along with higher-level object reconstruction, to be used in the trigger system. This will be immensely helpful in recording rare BSM events, such as events containing displaced objects, that would have otherwise gone unrecorded. The implementation of extended tracking at L1 will enable the reconstruction of displaced tracks up to a certain transverse impact parameter which will again be very helpful in selecting events with displaced signatures at L1. Displaced particle searches will also benefit from the availability of timing information from the upgraded ECAL at L1 [103, 104] and the inclusion of an all-new MIP timing detector (MTD) between the tracker and calorimeter system [105]. Additionally, a new high granularity calorimeter (HGCAL) will replace the existing endcap calorimeter [106], enhancing the physics performance in the forward region under the harsher conditions at the HL-LHC.
The upgrades planned for the HL-LHC will substantially boost physics sensitivity and increase the probing potential of LLPs at HL-LHC. However, there are not many comprehensive and realistic phenomenological studies explicitly designed for HL-LHC that fully consider the effect of increased PU and make the most of the impending hardware upgrades at HL-LHC. This motivates us to investigate the lifetime frontier of BSM physics in the context of RPV SUSY, considering increased PU conditions at HL-LHC.
The rest of the paper is structured as follows: In Section 2, we outline the signal model, background sources, and the simulation setup for both signal and background events. Section 3 explains the triggering strategy implemented at L1, where we select events by utilizing the information available from the upgraded detector systems. In Section 4, we perform a detailed analysis of the events selected at L1. This analysis involves studying various physics variables constructed using offline information from different sub-detectors of CMS. The analysis is divided into three parts: a cut-based analysis and two independent multi-variate analyses. Section 5 presents the signal significance for various LLP benchmark points, providing quantitative results for our analysis. Finally, in Section 6, we summarize and draw conclusions based on our analysis.
## 2 Signal Model, Backgrounds, and Simulation Setup
In this paper, we study the R-parity violating Yukawa coupling of type \(\lambda^{{}^{\prime\prime}}\) within the framework of the MSSM. Our focus is on the associated production of electroweakinos, specifically the \(\chi^{0}_{2}\) and \(\chi^{\pm}_{1}\), where the \(\chi^{0}_{2}\) decays to the lightest supersymmetric particle (LSP), \(\chi^{0}_{1}\), and the 125 GeV Higgs boson, while the \(\chi^{\pm}_{1}\) decays to a W boson and \(\chi^{0}_{1}\). Due to a very small \(\lambda^{{}^{\prime\prime}}\) coupling, the \(\chi^{0}_{1}\) exhibits a long lifetime. We consider the decay of the \(\chi^{0}_{1}\) to light-flavor quarks (u, d, and s) with 100% branching ratio. We assume a 100% branching fraction for the decays \(\chi^{0}_{2}\to\chi^{0}_{1}\,h\) and \(\chi^{\pm}_{1}\to\chi^{0}_{1}\,W^{\pm}\). The inclusive decays of the Higgs boson and W boson are considered, with their respective branching ratios taken from [107]. Quarks resulting from the decay of the \(\chi^{0}_{1}\)
undergo showering and hadronization leading to the production of multiple displaced jets in the final state. Feynman diagram illustrating the cascaded decay process assuming only one decay mode for both Higgs boson and W boson is shown in Figure 1. In this diagram, the Higgs boson decays exclusively into two b-jets, while the W boson decays into leptons.
The pair production cross-section of the neutralino-chargino pair at \(\sqrt{s}=14\) TeV is calculated at next-to-leading order (NLO) with the incorporation of next-to-leading-log (NLL) effects, using the RESUMMINO code [108]. For the current analysis, we solely focus on the pair production of electroweakinos with degenerate masses (\(m_{\chi^{0}_{2}}=m_{\chi^{\pm}_{1}}\)). The SUSY cross-sections for electroweakino pair production provided by the LHC collaboration [109] match those we obtain from RESUMMINO1.
Footnote 1: RESUMMINO gives cross-sections of 0.124 fb and 0.051 fb for the pair production of 1500 GeV wino-like and 1400 GeV higgsino-like electroweakinos, respectively.
We study LLPs, \(\chi^{0}_{1}\), in a mass range varying from 500 GeV to 1 TeV, with mean proper decay length varying from 1 cm to 500 cm. For signal generation, as well as for showering and hadronization, we utilize pythia8[110]. The signal samples are generated using the CTEQ6L1 PDF (Parton Distribution Function) [111] with the CUETP8S1-CTEQ6L1 CMS tune [112]. During the sample generation, we adjust the decay width of the LLPs in the input SLHA (Supersymmetry Les Houches Accord) file to modify the decay length of the LLPs.
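Since the decay length is steered through the width entered in the SLHA file, the relation \(\Gamma=\hbar c/(c\tau)\) fixes the value to be written there. Below is a minimal sketch of this conversion in Python; the loop over benchmark decay lengths is illustrative and the actual SLHA editing is not shown:

```python
# Convert a target mean proper decay length (c*tau) of the LLP into the total
# decay width entered in the SLHA DECAY block, via Gamma = hbar*c / (c*tau).
HBARC_GEV_M = 1.97327e-16  # hbar*c in GeV*m

def width_from_ctau(ctau_m: float) -> float:
    """Total width (GeV) corresponding to a mean proper decay length in metres."""
    return HBARC_GEV_M / ctau_m

for ctau_cm in (1.0, 10.0, 100.0, 500.0):
    gamma = width_from_ctau(ctau_cm * 1e-2)
    # e.g. c*tau = 10 cm corresponds to Gamma ~ 2.0e-15 GeV
    print(f"c*tau = {ctau_cm:6.1f} cm  ->  Gamma = {gamma:.3e} GeV")
```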
Since our signal signature includes multiple jets in the final state, the main sources of background are instrumental effects and QCD multijet processes. Additionally, due to the presence of leptons in the final state, a sub-dominant contribution to the background comes from \(t\bar{t}\) events, in which the top quark can decay leptonically or hadronically. We also anticipate a background contribution from W+jets events, where the W boson decays inclusively. We note that simulating and characterising the instrumental effects is beyond the scope of the current study. Instead, we focus on the mitigation strategy for the instrumental background outlined in the subsequent sections.
QCD events are generated in bins of parton-level \(H_{T}\) (\(H_{T}^{gen}\)). Here, \(H_{T}^{gen}\) is calculated by summing the transverse momenta of all partons involved in the event. The \(H_{T}^{gen}\) bins used in this study cover the following ranges: 500 - 600 GeV, 600 - 700 GeV, 700 - 800 GeV, 800 - 1000 GeV and \(>\)1000 GeV. The \(H_{T}^{gen}\) bins are selected based on the analysis strategy, where events in the offline stage of the study, after being triggered at level-1 as elaborated in Section 3, are required to have a high event \(H_{T}\). This is because signal events can easily surpass the \(H_{T}>500\) GeV threshold due to significant hadronic activity in the final state. Therefore, QCD multijet events are generated in \(H_{T}^{gen}\) bins starting with \(H_{T}^{gen}>500\) GeV in order to ensure sufficient background statistics. Generation of background events is done in madgraph[113, 114] while showering is done using pythia8.

Figure 1: Feynman diagram of cascade decay of electroweakinos (\(\chi^{0}_{2}/\chi^{\pm}_{1}\)) where \(\chi^{0}_{2}\) decays to \(\chi^{0}_{1}\) and a 125 GeV Higgs boson while \(\chi^{\pm}_{1}\) decays to a W boson and \(\chi^{0}_{1}\).
We use Delphes-3.5.0[115] for a fast, simplified detector simulation. To accurately replicate the conditions at the HL-LHC, which are characterized by a high-PU environment, our analysis takes into account the effects of PU. PU originates from the multiple soft proton-proton interactions that occur within a single bunch crossing, along with the hard collision. We use PYTHIA8 to generate 1 million soft QCD events, which are utilized as PU events. The PileUpMerger module in Delphes subsequently merges these PU events with the hard process. An average of 140 PU interactions is overlaid on both signal and background events.
We use the default CMS HL-LHC card provided with Delphes for the detector simulation. However, we make specific modifications to certain Delphes modules, as elaborated in one of our previous studies [10]. To form jets using energy deposits from the calorimeters, ECAL and HCAL, we use the anti-\(k_{T}\) jet clustering algorithm [116] with a cone size of R = 0.3. Using a narrower jet cone size instead of the standard R = 0.4 was motivated by the need to mitigate contamination from PU interactions, which can significantly affect the measurement of physics variables. The amount of PU contamination within a jet depends on the jet area, as PU is distributed throughout the detector, and a reduction in the jet area leads to a smaller PU contribution. By shrinking the jet cone size, the effects of PU can be effectively reduced, assuming that the jets from the signal process remain unaffected and that the majority of the hadronic activity from the signal is captured within the reduced cone radius. This approach aligns with our analysis, as prior studies [3, 5, 9, 10] have shown that displaced jets resulting from LLP decays typically concentrate energy within a more confined region of the \(\eta-\phi\) plane. Consequently, opting for a narrow cone size for jets can aid in minimizing the impact of PU on LLP jets.
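As an illustration of this choice, the sketch below clusters calorimeter-tower four-vectors with the anti-\(k_{T}\) algorithm and R = 0.3 using the FastJet Python bindings; the package interface, the massless-tower approximation and the toy inputs are assumptions of this sketch rather than part of the analysis code used here:

```python
# Anti-kT clustering of calorimeter towers with a narrow cone size R = 0.3.
import math
import fastjet as fj  # assumed: scikit-hep "fastjet" bindings (classic interface)

def towers_to_pseudojets(towers):
    """towers: iterable of (ET, eta, phi); towers are treated as massless."""
    pjs = []
    for et, eta, phi in towers:
        px, py = et * math.cos(phi), et * math.sin(phi)
        pz, e = et * math.sinh(eta), et * math.cosh(eta)
        pjs.append(fj.PseudoJet(px, py, pz, e))
    return pjs

towers = [(55.0, 0.10, 0.20), (40.0, 0.15, 0.25), (3.0, -1.2, 2.9)]  # toy input
jet_def = fj.JetDefinition(fj.antikt_algorithm, 0.3)
cs = fj.ClusterSequence(towers_to_pseudojets(towers), jet_def)
jets = [j for j in cs.inclusive_jets() if j.pt() > 40.0]  # offline jet selection
```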
Before we proceed further, let's define two signal benchmark points (BP) for our analysis:
* **BP-1**: \(M_{\chi^{0}_{2}}/M_{\chi^{\pm}_{1}}\) = 1600 GeV, \(M_{\chi^{0}_{1}}\) = 800 GeV, and \(c\tau\) = 10 cm.
* **BP-2**: \(M_{\chi^{0}_{2}}/M_{\chi^{\pm}_{1}}\) = 1600 GeV, \(M_{\chi^{0}_{1}}\) = 800 GeV, and \(c\tau\) = 100 cm.
Here, \(c\tau\) represents the mean proper decay length of the LLP. We have selected benchmark points (BP-1 and BP-2), keeping in mind the stringent limit on the masses of electroweakinos. Both BP-1 and BP-2 feature moderately heavy LLPs resulting from the decay of significantly heavy electroweakinos, \(M_{\chi^{0}_{2}}/M_{\chi^{\pm}_{1}}\) = 1600 GeV. These electroweakinos have an extremely small pair-production cross-section. We are examining two decay length scenarios: one involves a shorter decay length of 10 cm, and the other features a considerably longer decay length of 100 cm, for which the limits are still lenient. Throughout the rest of
the paper, we will use the aforementioned shorthand notation to refer to the signal benchmark points. QCD events with \(H_{T}^{gen}\in\{500,\,600\}\) GeV will be represented as "QCD," and top quark pair events will be denoted as "\(t\bar{t}\)". We generate 5 million \(t\bar{t}\) events, 3 million QCD dijet events spread across the mentioned \(H_{T}^{gen}\) bins, and 0.6 million W+jets events. For each signal benchmark point, 0.5 million events are generated. The generation and analysis of large background datasets involving 140 PU interactions present a significant challenge. The size of the simulated events using Delphes, including only tracks, towers, and jet branches, can reach up to 15 GB for 5000 events. This makes it impractical to produce extensive background datasets that surpass what we have already generated due to our computational limitations.
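When the simulated samples are confronted with the HL-LHC data-taking conditions, each sample is scaled to the target luminosity with a per-event weight \(w=\sigma\mathcal{L}/N_{gen}\). A minimal sketch is shown below, pairing the RESUMMINO cross-section quoted in footnote 1 with the generated event counts listed above (this pairing is illustrative only):

```python
# Per-event weights for scaling simulated samples to L = 3000 fb^-1.
LUMI_FB = 3000.0

def event_weight(xsec_fb: float, n_gen: int) -> float:
    """Expected number of events per simulated event at the target luminosity."""
    return xsec_fb * LUMI_FB / n_gen

# Wino-like pair production of 1500 GeV electroweakinos (0.124 fb, footnote 1)
# with 0.5 million generated events gives ~7.4e-4 expected events per event.
w_signal = event_weight(0.124, 500_000)
```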
## 3 Triggering LLP events at L1
The CMS experiment employs a two-level trigger system, consisting of the Level-1 (L1) and the High-Level Trigger (HLT), to identify and select interesting events for offline analysis. The HLT is a software-based trigger, while the L1 trigger is a hardware-based system with an extremely short latency period that determines the time window within which the decision to record an event is made. Because of this low latency period, performing complex physics calculations and constructing high-level physics objects using information from multiple sub-detectors can be challenging and inefficient. However, with the proposed upgrades to the data acquisition system, it becomes possible to reconstruct certain high-level physics objects and apply machine-learning (ML) techniques at the L1 trigger stage in the context of the HL-LHC. (For more comprehensive information about the implementation of ML algorithms on FPGAs, please refer to [117; 118] and references therein.) These upgrades will involve increasing the latency period and enhancing the data bandwidth, measured in terms of event rate. These improvements will enable the design of triggers aimed explicitly at searching for LLPs. Therefore, it is crucial to utilise the available resources efficiently when selecting events at L1, so that exotic LLP events, which typically have a very small cross-section, are not overlooked. The final state signature for our study consists of displaced jets and prompt leptons. Our primary focus will be on triggering events using dedicated triggers to detect events containing these specific physics objects.
At the HL-LHC, CMS has proposed two dedicated triggers explicitly designed to select events with a displaced jets signature [103]. In addition to these dedicated LLP triggers, single-lepton triggers can further maximize the trigger efficiency [11]. We will explain the triggers used in our analysis in detail below-
* **Track-\(\mathbf{H_{T}}\)**: At the HL-LHC, CMS plans to upgrade the inner tracker by replacing both the pixel and strip tracking detectors with smaller pixel sensors. The outer tracker will also be improved by incorporating strip and macro pixel sensors with stacked strip modules. The main requirements for the upgraded tracker system at HL-LHC include high radiation tolerance, increased granularity, improved track separation, availability of tracking information at L1, and extended tracking acceptance. The upgraded outer tracker will facilitate the reconstruction of track candidates at
L1, operating at a rate of 40 MHz, for \(|\eta|<\)2.4. This will be achieved through an increased latency period and the implementation of FPGAs, enabling the construction of track-based triggers at L1. The availability of tracking information at L1 will enable the identification of the primary vertex and will be immensely useful in mitigating charged PU. In addition to the advantages mentioned above of including tracking information at L1, one particular advantage relevant to this analysis is the extension of the L1 tracking algorithm to reconstruct tracks displaced within the detector. Our analysis considers a track displaced from the beamline if it has transverse impact parameter (\(|d_{0}|\)) greater than 1.5 mm. These tracks may originate from a secondary vertex following the decay of an LLP. The efficiency of track reconstruction for displaced tracks at L1 will depend on the \(|d_{0}|\) of the tracks, with efficiency decreasing as \(|d_{0}|\) increases. Tracking at L1 will be available for particles with transverse momentum (\(p_{T}\)) greater than 2 GeV within a pseudorapidity range of \(|\eta|<2\). It will follow a track reconstruction efficiency curve as shown in reference [103]. To highlight the importance of displaced tracking at L1, Figure 2 illustrates the L1 displaced track multiplicity within a \(\Delta R<0.3\) cone around the jet axis for two LLP benchmark scenarios: BP-1 (decay length of 10 cm) and BP-2 (decay length of 100 cm), with \(M_{\chi^{0}_{1}}=800\) GeV and \(M_{\chi^{0}_{2}}=1600\) GeV for jets with \(p_{T}>40\) GeV and \(|\eta|<2.5\). The figure also shows the displaced track multiplicity for two primary background sources: \(t\bar{t}\) and QCD dijet events.
The figure underlines the importance of displaced tracks within jets for distinguishing between long-lived signal and background events, as we observe that the displaced track multiplicity is significantly lower for the backgrounds compared to the signal benchmark points. Moreover, the LLP benchmark (BP-1) with a shorter mean proper decay length of 10 cm exhibits a higher number of reconstructed displaced tracks, which is evident from the longer tail observed in the multiplicity distribution compared to the benchmark with a longer decay length (BP-2). This observation aligns with our expectations, as LLPs with longer decay lengths will have larger values of \(|d_{0}|\) and, therefore, fewer displaced tracks will be reconstructed.

Figure 2: Comparison of displaced track multiplicity within \(\Delta R<\)0.3 of jet for BP-1 (10 cm) and BP-2 (100 cm) LLP benchmarks, with \(M_{\chi^{0}_{1}}=800\) GeV and \(M_{\chi^{0}_{2}}=1600\) GeV, along with \(t\bar{t}\) and QCD background for jets with \(p_{T}>\) 40 GeV and \(|\eta|<2.5\)
CMS has proposed a dedicated trigger for LLPs called "Track-\(H_{T}\)" to identify events with displaced jets originating from LLPs, using the upgraded tracker's improved capability of making trigger decisions with tracking information at L1. This trigger is specifically designed to get a handle on events with LLPs exhibiting shorter decay lengths. The current analysis uses a track-based trigger influenced by the CMS Track-\(H_{T}\) trigger [119]; it selects events in which at least one displaced jet is present and works by calculating \(H_{T}\) from track-based jets. To form track-based jets, we begin by grouping tracks with a \(p_{T}\) greater than 2 GeV within \(|\eta|<2\). These tracks are then binned based on their closest approach to the beam line in the z-direction, \(z_{0}\), with a bin size of 6 cm. The \(z_{0}\) bin with the highest scalar sum of track \(p_{T}\) is chosen. Subsequently, the tracks in the chosen \(z_{0}\) bin are clustered into jets using the anti-\(k_{t}\) algorithm with a cone radius of R = 0.3 for each event. Jets with \(p_{T}>5\) GeV are considered for further analysis. Jets with at least two displaced tracks (\(|d_{0}|>1.5\) mm) as constituents are classified as displaced jets. For events that contain at least one displaced jet in the collection, \(H_{T}\) is calculated by summing the \(p_{T}\) of all the jets, including those classified as displaced. In our study, an event must have a track-based \(H_{T}\) greater than 160 GeV to trigger, as inferred from [119]. A schematic emulation of this trigger decision is sketched after this list.
* **Displaced Calo-Jet**: The upgraded ECAL at HL-LHC will provide precise timing information for ECAL energy deposits, with a timing resolution of approximately 30 ps for a 20 GeV energy deposit during the initial runs of HL-LHC [104]. However, it is important to note that timing resolution may degrade over time as more data is collected. To utilize this timing information at the L1 trigger level and trigger events with displaced jets, the CMS experiment has proposed an L1 trigger incorporating ECAL timing information. For the current analysis, we utilise the L1 trigger developed in [10] that uses ECAL timing information for identifying displaced jets. For the trigger, energy deposits from ECAL and HCAL are clustered to form jets within the \(|\eta|<1.5\) region, utilizing the anti-\(k_{T}\) algorithm with a cone size of R = 0.3. Each ECAL tower is required to have an energy deposit of at least 0.5 GeV, while each HCAL tower needs an energy deposit of at least 1 GeV [103]. The clustering of jets is done using inputs from both ECAL and HCAL, but only the ECAL inputs are used to determine the timing of the jet. A jet is selected if at least one of the ECAL towers in its constituents has an energy deposit greater than 1 GeV [103]. Each ECAL tower's timing is calibrated relative to the origin. The jet's timing is determined using the energy-weighted average of the timings from the ECAL towers
inside that jet. Figure 3 shows the energy-weighted mean timing of jets with \(p_{T}>40\) GeV and \(|\eta|<1.5\) for two LLP benchmark scenarios, BP-1 and BP-2, along with the two main background sources.
Figure 3 shows that LLPs in the benchmark scenario BP-2, characterized by longer decay lengths, exhibit higher timing values than BP-1 with shorter decay length. Furthermore, LLPs in BP-2 demonstrate significantly higher timing values than the background. For our current study, we select an event at L1 if it contains at least one jet with a timing value (\(\Delta T_{\rm mean}^{\rm Ewt}\)) greater than 1.2 ns, a jet transverse momentum (\(p_{T}^{\rm jet}\)) greater than 35 GeV, and at least 4 ECAL towers in the jet. The threshold values used in our study are determined through explicit rate calculations as described in [10]. These calculations consider the background rate constraint for the specific scenario of 200 PU with the timing resolution at the integrated luminosity of 1000 fb\({}^{-1}\).
* **Single TkIsoElectron-** Requires at least one prompt, isolated electron from the primary vertex (PV) with \(p_{T}\) greater than 28 GeV, within \(|\eta|<2.4\). The isolation of each electron is computed by adding the \(p_{T}\) of all tracks within a cone of size \(\Delta R<0.3\), not including the \(p_{T}\) of the electron, divided by the sum of the \(p_{T}\) of all tracks within the same \(\Delta R\) cone. Here, \(\Delta R\) is computed as \(\sqrt{\Delta\eta^{2}+\Delta\phi^{2}}\), where \(\Delta\phi\) and \(\Delta\eta\) are the differences in azimuthal angle and pseudorapidity, respectively, between the electron and the tracks. For the current study, a fairly isolated electron is required, with an isolation factor (sum of \(p_{T}\) of tracks divided by sum of \(p_{T}\) of all tracks) less than 0.1. The trigger thresholds for our study are adopted from the L1 trigger menu designed for the HL-LHC, as outlined in reference [103].
* **Single TkIsoMuon**- Requires at least one prompt, isolated muon with \(p_{T}>22\) GeV from PV, within \(|\eta|<2.4\). The isolation of each muon is calculated in the same way as explained above for the electron trigger, i.e., by summing the \(p_{T}\) of all tracks within a \(\Delta R<0.3\) cone around the muon, excluding the muon's own \(p_{T}\), divided by the sum of the \(p_{T}\) of all tracks within the same \(\Delta R\) cone. Similarly, for this trigger, the muon isolation factor is required to be less than 0.1. The trigger thresholds used in our study are obtained from the L1 trigger menu for the HL-LHC as provided in reference [103].
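The Track-\(H_{T}\) decision described above can be emulated schematically as follows; the track tuple format and the `cluster_track_jets` helper (assumed to wrap the anti-\(k_{T}\), R = 0.3 clustering and to return each jet's \(p_{T}\) together with its constituent tracks) are assumptions of this sketch:

```python
# Schematic emulation of the Track-HT L1 trigger decision.
# A track is a tuple (pT, eta, phi, z0_cm, d0_mm).
import collections

def track_ht_trigger(tracks, cluster_track_jets,
                     z0_bin_cm=6.0, ht_cut=160.0, jet_pt_min=5.0):
    # Keep only tracks reconstructable at L1: pT > 2 GeV, |eta| < 2.
    tracks = [t for t in tracks if t[0] > 2.0 and abs(t[1]) < 2.0]
    if not tracks:
        return False
    # Bin tracks in z0 and select the bin with the largest scalar sum of pT.
    sum_pt = collections.defaultdict(float)
    for t in tracks:
        sum_pt[int(t[3] // z0_bin_cm)] += t[0]
    best_bin = max(sum_pt, key=sum_pt.get)
    selected = [t for t in tracks if int(t[3] // z0_bin_cm) == best_bin]
    # Cluster the selected tracks into jets (anti-kT, R = 0.3) above 5 GeV.
    jets = [(pt, const) for pt, const in cluster_track_jets(selected)
            if pt > jet_pt_min]
    # A displaced jet has at least two constituents with |d0| > 1.5 mm.
    has_displaced_jet = any(
        sum(abs(trk[4]) > 1.5 for trk in const) >= 2 for _, const in jets)
    # HT is the scalar sum of the pT of all selected track jets.
    ht = sum(pt for pt, _ in jets)
    return has_displaced_jet and ht > ht_cut
```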
Various thresholds for object \(p_{T}\), isolation, \(H_{T}\) and jet timing for the above-mentioned L1 triggers are summarised in Table 1.

\begin{table}
\begin{tabular}{|c|c||} \hline L1 Trigger & Online thresholds \\ \hline \hline IsoElectron & \(p_{T}>\) 28 GeV, \(Iso<0.1\) \\ \hline IsoMuon & \(p_{T}>\) 22 GeV, \(Iso<0.1\) \\ \hline Track \(H_{T}\) & \(H_{T}>\) 160 GeV, \(p_{T}^{jet}>\) 5 GeV \\ \hline Calo-jet & \(\Delta T>\) 1.2 ns, \(p_{T}^{jet}>\) 35 GeV, \(N_{tow}\geq\) 4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Selection cuts for the L1 triggers

Figure 4 displays the variation of trigger efficiency with mean proper decay length for the triggers mentioned above, as well as the combined trigger efficiency, for four different LLP scenarios. Although we only consider LLPs with masses above 500 GeV in the current analysis, the trigger efficiency for LLPs with masses ranging from light (\(M_{\chi^{0}_{1}}=50\) GeV) to very heavy (\(M_{\chi^{0}_{1}}=1400\) GeV) is shown to depict the variation of trigger efficiency with the LLP mass. We also show the variation of trigger efficiency for one of the benchmark points, with \(M_{\chi^{0}_{2}}/M_{\chi^{\pm}_{1}}=1600\) GeV and \(M_{\chi^{0}_{1}}=800\) GeV. LLPs with decay lengths ranging from 1 cm to 500 cm and originating from the decay of \(\chi^{0}_{2}/\chi^{\pm}_{1}\) with masses varying from 250 GeV to 1600 GeV are considered. The observations obtained from Figure 4 are summarized as follows:

* The displaced Calo-Jet trigger is particularly effective for LLPs with longer decay lengths, especially for heavier LLPs. This is because the trigger utilizes timing information from the ECAL deposits, and LLPs that decay later in the detector will exhibit a more significant time delay. The time delay also increases with the LLP mass, as heavier LLPs travel more slowly.

* Furthermore, the track \(H_{T}\) trigger performs best for LLPs with smaller decay lengths, as the efficiency of extended track reconstruction degrades with increasing c\(\tau\). The trigger efficiency is also reduced for LLP benchmarks with smaller mass differences between \(\chi^{0}_{2}/\chi^{\pm}\) and their decay products, due to less hadronic activity in the calorimeter.
* The lepton trigger efficiency remains unaffected by the \(c\tau\) parameter, but it decreases as the mass degeneracy between the LLP and \(\chi_{2}^{0}/\chi^{\pm}\) increases, due to kinematic suppression. This implies that the efficiency of lepton triggers will be lower for benchmark points with smaller mass differences between the LLP and \(\chi_{2}^{0}/\chi^{\pm}\).
* The combined trigger efficiency decreases with increasing \(c\tau\) for every benchmark point, which can be explained by looking at the individual trigger efficiencies. The decrease in efficiency as the decay length increases is likely because LLP events with longer decay lengths have a higher chance of escaping the detection region before being triggered.
* Single TkIsoLepton triggers, along with the Track-\(H_{T}\) trigger and displaced Calo-Jet trigger, complement each other in selecting LLP events in both the lower and higher ends of the decay length spectrum for both lighter and heavier LLPs. This implies that combining these triggers can effectively select LLP events across a wide range of decay lengths and masses.
It is important to highlight the significance of displaced jet triggers in detecting LLP events, especially since lepton triggers are limited to selecting events with prompt leptons. In the current study, prompt leptons mainly come from the inclusive decay of W and Higgs bosons, which have relatively low branching fractions. For instance, in a scenario where the LLP originates from the decay of a 1000 GeV particle with a mass of 500 GeV and a decay length of 10 cm, the efficiency of the lepton trigger is approximately 30%. However, the overall efficiency increases significantly when displaced jet triggers are included. With a decay length of 10 cm, the efficiency rises to around 91%, and for a longer decay length of 100 cm, the efficiency remains high at 89%. This demonstrates that incorporating displaced jet triggers significantly enhances the efficiency of detecting LLPs with shorter as well as longer decay lengths. In LLP scenarios with shorter decay lengths, the track \(H_{T}\) trigger is more effective, while in contrast, the Calo-Jet trigger is more effective for events with longer decay lengths. In conclusion, the most effective approach to efficiently select LLP events with varying decay lengths, from very small to very large, is to use a combination of different L1 triggers.

Figure 4: Variation of trigger efficiency for displaced Calo-Jet, Track-\(H_{T}\), and single TkIsoLepton triggers with decay length for four LLP scenarios with one benchmark scenario (BP). The combined trigger efficiency is also shown.
## 4 Offline analysis
After triggering the events at L1, the next step is to analyze the selected events offline to remove the background events, which have huge cross-sections. We begin by reconstructing the secondary displaced vertex, a key characteristic of the decay of LLPs, for the selected events using the set of displaced tracks. In our analysis, we reconstruct tracks taking into account the track reconstruction efficiency achieved by CMS in Phase-I of the LHC [120], which varies with the transverse displacement of the tracks from the beam-line, since no specific information about the track reconstruction efficiency is available for Phase-II. We therefore assume that offline track reconstruction as a function of the transverse displacement from the beam-line will remain the same in Phase-II as in Phase-I; updated information will be needed to confirm this assumption. We form displaced vertices by clustering displaced tracks with transverse impact parameter \(|d_{0}|>1.5\) mm based on their spatial position. We identify vertices with at least two displaced tracks associated with the vertex. Each vertex is assigned a unique ID and stored for further analysis. Next, we compute two physics variables related to each selected displaced vertex-
* The number of displaced tracks associated with the secondary vertex.
* The invariant mass of the displaced secondary vertex, which is calculated from the displaced tracks associated with it; a minimal sketch of this computation is given after this list.
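Below is a minimal sketch of the two displaced-vertex observables; assigning the charged-pion mass to every track is an assumption of this sketch, as the text does not specify the per-track mass hypothesis:

```python
# N_trk^disp and M_DV for one reconstructed displaced vertex.
import math

PION_MASS = 0.13957  # GeV, assumed mass hypothesis for each displaced track

def vertex_observables(tracks):
    """tracks: list of (pT, eta, phi) for displaced tracks (|d0| > 1.5 mm)
    associated with the vertex.  Returns (M_DV in GeV, N_trk^disp)."""
    e_sum = px = py = pz = 0.0
    for pt, eta, phi in tracks:
        px += pt * math.cos(phi)
        py += pt * math.sin(phi)
        pz += pt * math.sinh(eta)
        p = pt * math.cosh(eta)
        e_sum += math.sqrt(p * p + PION_MASS ** 2)
    m2 = e_sum ** 2 - (px ** 2 + py ** 2 + pz ** 2)
    return math.sqrt(max(m2, 0.0)), len(tracks)

# Example of a selection in the spirit of the requirements quoted below
# (M_DV > 10 GeV and at least 5 displaced tracks):
m_dv, n_trk = vertex_observables([(35.0, 0.40, 1.2), (20.0, 0.50, 1.0),
                                  (12.0, 0.30, 1.4), (8.0, 0.60, 0.9),
                                  (5.0, 0.45, 1.1)])
passes = (m_dv > 10.0) and (n_trk >= 5)
```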
In Figure 5, we show the two-dimensional distribution of displaced track multiplicity (\(N_{trk}^{disp}\)) and invariant mass of the displaced vertex (\(M_{DV}\)). The distributions are shown for two LLP benchmark points, BP-1 and BP-2, as well as for the QCD and \(t\bar{t}\) background. To ensure proper normalization of the data, each bin in the distribution is re-weighted such that the sum of the fraction of entries falling in every bin equals unity.
As shown in Figure 5, the LLP benchmarks exhibit a significantly higher number of displaced tracks associated with the displaced vertex, together with a larger invariant mass of the displaced vertex, compared to the backgrounds. This indicates that applying a suitable 2-dimensional cut on the displaced track multiplicity and the invariant mass of the displaced vertex can effectively reduce the contribution from the background events. In addition to mitigating the background events from QCD and \(t\bar{t}\), implementing a higher threshold cut on both \(N_{trk}^{disp}\) and \(M_{DV}\) can effectively remove the displaced vertices originating from the instrumental background, as shown in [29; 33; 56]. This is because the displaced vertices from the instrumental background are typically collimated and have lower multiplicity and smaller invariant mass than the signal. As shown in [56], the instrumental background can be effectively mitigated by requirements on \(M_{DV}\) and \(N_{trk}^{disp}\), where they implement a
threshold of 10 GeV on \(M_{DV}\) and a threshold of 5 for \(N_{trk}^{disp}\) in the signal region. For the signal, the invariant mass of the displaced vertex is expected to peak around the mass of the LLP, which in this case is 800 GeV. However, it is essential to note that the number of reconstructed displaced tracks may be reduced for very short (\(\approx 1\,\mathrm{cm}\)) or very long decay lengths (\(\approx 500\,\mathrm{cm}\)), which can impact signal efficiency.
Now, we turn our attention to the utilisation of MTD timing information in the current analysis. At the HL-LHC, MTD will be positioned between the tracker and the electromagnetic calorimeter of the CMS detector, providing precise timing information for the charged particles originating within the tracker. Currently, precision timing information from the MTD is proposed to be included for the offline analysis in the CMS detector instead of the online trigger system. Including the partial readout of the MTD for a region of interest at L1 is a possibility in the future upgrades of the HL-LHC [105]. However, in this work, we have mainly focused on including output from MTD in the offline analysis, where we can construct complex physics variables out of the output from various sub-detectors, including MTD. At HL-LHC, the primary objective of the MTD will be to help mitigate the effect of the huge amount of PU on physics analysis and restore the physics performance at par with Phase-I of LHC. However, the role of MTD will be pivotal in studying exotic particles such as LLPs, where the decay of the particles is delayed, and timing information from the MTD can be efficiently used to search for such particles.
Figure 5: Two-dimensional distribution showing the relationship between the number of displaced tracks (\(N_{trk}^{disp}\)) and the displaced vertex invariant mass (\(M_{DV}\)) for the two LLP benchmarks, BP-1 and BP-2, along with the \(t\bar{t}\) and QCD background.
Timing information can be extracted from the MTD with the timing resolution of 30 ps for MTD hits from the charged particles with \(p_{T}>0.7\) GeV in the barrel region (\(|\eta|<1.5\)) and \(p>0.7\) GeV in the endcap region (\(1.5<|\eta|<3.0\)). Excellent coverage and timing resolution of the MTD can be leveraged to construct the timing variables for the jets originating due to the decay of LLPs which will be delayed in time. MTD layer is proposed to be placed at the radius of 1.16 m between the tracker and barrel ECAL, which is placed at the radius of 1.29 m.
In order to construct timing variables for jets using information from the MTD, we require MTD hits directly below the clustered jets, within a specific cone along the jet axis. In addition to the MTD hits coming from tracks, whether displaced or prompt, we have two additional lists of MTD hits: one where the MTD hits originate from reconstructed displaced tracks with \(|d_{0}|>1.5\) mm, and a second one containing MTD hits with no reconstructed tracks associated with them. We construct physics variables using these three collections of MTD hits. We consider MTD hits only within a narrow cone radius directly below the jets to reduce the PU contamination. For MTD hits associated with tracks, we only consider tracks with \(p_{T}>2\) GeV that can be reconstructed according to the track reconstruction efficiency explained before. We construct the following timing variables using the three MTD hit collections mentioned above, taking hits directly below a clustered jet in a cone with \(\Delta R<0.3\) whose axis matches the jet axis (a sketch of the calibrated-time computation is given after this list)-
* \(\mathbf{N_{MTD}}\): The number of MTD hits with associated reconstructed tracks within R = 0.3 of the jet axis. Hard signal jets will contain comparatively higher number of MTD hits when compared to the background. LLPs decaying after the MTD and before ECAL and HCAL boundary will have energy deposition in the calorimeters but with no associated MTD hits. So, MTD hit multiplicity will decrease with the increase in the decay length; however, distribution will mainly be dominated by the charged PU hits.
* \(\mathbf{N_{MTD}^{Disp}}\): The number of MTD hits with associated reconstructed tracks within R = 0.3 of the jet axis with \(|d_{0}|>1.5\) mm. Displaced jets will contain comparatively higher number of MTD hits coming from displaced tracks when compared to the background. MTD hit multiplicity will decrease with the increase in the decay length.
* \(\mathbf{N_{MTD}^{NT}}\): The number of MTD hits within R = 0.3 of the jet axis with no associated tracks. Track reconstruction efficiency follows an efficiency curve where the efficiency of reconstructing a track will degrade with the transverse distance (\(D_{xy}\)) from the beam line. As a result, we will have a higher number of MTD hits with no associated tracks for displaced LLPs. However, this number will decrease with the decreasing decay length as we will have more and more tracks with smaller \(D_{xy}\) being reconstructed. In contrast, for prompt processes, most of the MTD hits will have associated tracks; hence \(N_{MTD}^{NT}\) will be less than displaced LLPs.
* \(\mathbf{T_{Raw}}\): The mean of the timing of MTD hits constituting a jet within cone radius of \(R=0.3\). To compute \(T_{raw}\), no timing calibration corresponding to the position of
MTD hits has been applied. For highly displaced LLPs, \(T_{raw}\) will have higher values compared to prompt processes, but since the majority of MTD hits inside a jet will be coming from PU interactions, \(T_{raw}\) measurement will mainly be dominated by the timing of PU hits. Also, the timing of the jet will depend on the position and \(p_{T}\) of the jets. Jets with low \(p_{T}\) depositing energy at higher \(\eta\) values away from the central part of the barrel will have higher timing which is valid for both LLPs and the prompt background processes.
* \(\mathbf{T_{Raw}^{Disp}}\): The mean of the timing of MTD hits associated with displaced tracks constituting a jet within a cone radius of \(R=0.3\). For highly displaced LLPs, \(T_{raw}^{Disp}\) will have higher values compared to prompt processes, where the displaced track multiplicity is very low.
* \(\mathbf{T_{Raw}^{NT}}\): The mean of the timing of MTD hits with no associated tracks within cone radius of \(R=0.3\) of the jet axis. As we discussed earlier, most tracks can be successfully reconstructed for prompt processes and LLPs with very short decay lengths; therefore, jets from such processes are less likely to leave MTD hits with no reconstructed tracks. In such cases, \(T_{raw}^{NT}\) will be zero when no MTD hit is found with no reconstructed tracks. However, with the increase in decay length, we will have more and more number of MTD hits with no reconstructed tracks. Furthermore, for prompt processes, contribution to the tail of the timing distribution of the jet will be coming from the hits with very low \(p_{T}\) tracks, which did not get reconstructed.
* \(\mathbf{T_{Calib}}\): The mean of the timing of the MTD hits within R = 0.3 of the jet axis calibrated with respect to origin (0,0,0). Calibration of the temporal position of each hit is done to mitigate the effect of the position of the MTD hit in the \(\eta-\phi\) plane on the timing of the MTD hit. The timing of each MTD hit is corrected such that if the particle travels with the speed of light from the origin to the position of the MTD hit, it should take zero seconds to reach there. Hence, the timing of the delayed particles will be given as the difference between the raw timing of the hit, as discussed before, and the time taken by a massless particle travelling with the speed of light originating from the origin to reach the position of the MTD hit. \(T_{calib}\) will have higher values and longer tail in the timing distribution for displaced jets than those coming from prompt processes.
* \(\mathbf{T_{Calib}^{Disp}}\): The mean of the timing of the MTD hits associated with displaced tracks, as explained above, within R = 0.3 of the jet axis, calibrated with respect to the origin (0,0,0). \(T_{calib}^{Disp}\) will have higher values and a longer tail in the timing distribution for displaced jets compared to jets coming from prompt processes, where the displaced track multiplicity is very low.
* \(\mathbf{T_{Calib}^{NT}}\): The mean of the calibrated timing of the MTD hits with no associated tracks within R = 0.3 of the jet axis. As explained earlier, prompt processes and displaced particles with very small decay lengths will have MTD hits which can be easily associated with the reconstructed tracks. More and more MTD hits will
be available for highly displaced particles with no reconstructed tracks to be fed into the calculation of \(T_{calib}\). As a result, the timing distribution of the displaced jets will have slightly higher values of \(T_{calib}\) when compared to prompt processes. However, \(T_{calib}\) will have smaller values as we consider LLPs with shorter and shorter decay lengths. For prompt processes, contribution to the tail of the timing distribution of the jet will be coming from the hits with very low \(p_{T}\) tracks which did not get reconstructed.
* \(\mathbf{p_{T}^{Ratio}}\): The ratio of the sum of \(p_{T}\) of reconstructed tracks (prompt as well as displaced) associated with MTD hits within \(\mathrm{R}=0.3\) of the jet and the corresponding jet \(p_{T}\). For LLPs with large decay lengths, fewer and fewer prompt tracks will be reconstructed, and hence the number of MTD hits with associated tracks will be smaller. As a result, there will be a more significant mismatch between the actual jet \(p_{T}\) and the \(p_{T}\) calculated using tracks with associated hits. This effect will be minimal for prompt processes, where most MTD hits will have associated tracks.
* \(\mathbf{D_{T}^{Med}}\): The median of the transverse distance calculated using the reconstructed tracks that have hits in the MTD and are associated with the jets within \(\Delta R<0.3\).
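The calibrated MTD time used in several of the variables above can be sketched as follows; the hit tuple format is an assumption of this sketch:

```python
# Origin-calibrated MTD hit time and the per-jet mean within dR < 0.3.
import math

C_CM_PER_NS = 29.9792458  # speed of light in cm/ns

def calibrated_time(hit):
    """hit: (t_raw_ns, x_cm, y_cm, z_cm); subtract the light-travel time
    from the origin (0,0,0) to the hit position."""
    t_raw, x, y, z = hit
    return t_raw - math.sqrt(x * x + y * y + z * z) / C_CM_PER_NS

def jet_t_calib(hits_in_cone):
    """Mean calibrated time of the MTD hits within dR < 0.3 of the jet axis;
    zero is returned when the collection is empty."""
    if not hits_in_cone:
        return 0.0
    return sum(calibrated_time(h) for h in hits_in_cone) / len(hits_in_cone)

# A prompt relativistic particle reaching the barrel MTD (radius 116 cm) at
# t_raw = 116 / C_CM_PER_NS gives t_calib ~ 0, while a delayed LLP daughter
# arrives later and gives t_calib > 0.
```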
Figure 6 shows the multiplicity of MTD hits for each jet, as measured using three lists of MTD hits for LLP benchmarks BP-1 and BP-2 and the QCD background under the conditions of HL-LHC. Similarly, Figure 7 and Figure 8 depict the \(T_{Calib}\) and \(T_{Raw}\), respectively, calculated using the three MTD hits collections.
From Figures 6, 7 and 8 we observe the following-
* The LLP with the smaller decay length has a larger number of MTD hits compared to the LLP with the higher decay length. LLPs in general have a larger number of MTD hits compared to the background sources.
Figure 6: Distribution of the Multiplicity of MTD (\(N_{MTD}\)) hits for three MTD hits collections, for the QCD background, and the two LLP benchmark points BP-1 and BP-2, at the HL-LHC.
* The number of MTD hits with associated displaced tracks decreases with increasing LLP decay length, while for the background the number is significantly lower due to the absence of displaced tracks inside the jet.
* The LLP with the higher decay length has a larger number of hits with no associated tracks compared to the LLP with the lower decay length, while the backgrounds have very few MTD hits with no associated tracks, since most of the hits in the MTD come from promptly produced particles.
* The tail of the timing distributions (\(T_{Calib}\) and \(T_{Raw}\)) calculated using MTD hits with associated tracks, and using MTD hits associated with displaced tracks only, grows with decay length for LLPs, and background and signal can be easily distinguished.
* Timing calculated using MTD hits with no associated tracks has a longer tail for LLPs with higher decay lengths. High timing values in the distribution calculated using MTD hits with no associated tracks in the background are associated with low \(p_{T}\) PU tracks, which move very slowly and contaminate the jet timing.

* A tail at the lower end of the \(p_{T}^{ratio}\) distribution is observed for LLPs because of the mismatch between the jet \(p_{T}\) and the \(p_{T}\) calculated using tracks associated with MTD hits. The effect is more pronounced for higher decay lengths because of the higher probability of not finding hits with associated tracks.

Figure 7: Calibrated time (\(T_{Calib}\)) calculated using three MTD hits collections, for the QCD background, and the two LLP benchmark points BP-1 and BP-2, at the HL-LHC.

Figure 8: Raw time (\(T_{Raw}\)) calculated using three MTD hits collections, for the QCD background, and the two LLP benchmark points BP-1 and BP-2, at the HL-LHC.
Now, we will shift our attention to the timing of ECAL tower constituents within jets to construct various timing variables for jets. For jet formation, we require ECAL and HCAL towers with energy deposits \(E_{em}>0.5\) GeV and \(E_{had}>1\) GeV, respectively. Timing is calculated only for those jets with at least one ECAL tower exceeding an energy deposit of 1 GeV. For timing calculation, we only take into account ECAL towers by requiring \(E_{had}<0.0001\) GeV and \(E_{em}>0.5\) GeV. In Section 3, we have already utilized one of the timing variables, namely the energy-weighted mean timing of the jet (\(\Delta T_{mean}^{Ewt}\)), which is used in the design of the L1 trigger based on ECAL timing. Additionally, we have computed several other measures for the jets using the ECAL timing. These measures are listed as follows:
* \(\mathbf{\Delta T_{mean}}\): The average timing of all ECAL crystals associated with the jet, as shown in Equation (2). Here, \(i\) runs over all ECAL crystals within the jet, and \(N\) is the total number of crystals associated with the jet. \[\Delta T_{mean}=\frac{\sum\Delta T_{i}}{N},\] (2)

* \(\mathbf{\Delta T_{median}}\): The median timing of all ECAL crystals associated with the jet.

* \(\mathbf{\Delta T_{RMS}}\): The root mean square value of the timing of all ECAL crystals within the jet, as computed in Equation (3). \[\Delta T_{RMS}=\sqrt{\frac{\sum\Delta T_{i}^{2}}{N}},\] (3)

* \(\sum\mathbf{\Delta T}\): The sum of the timing of all ECAL crystals in the jet.

* \(\mathbf{\Delta T_{mean}^{Ewt}}\): The energy-weighted mean timing of all ECAL crystals in the jet. This is computed as the sum of the product of each crystal's timing and energy divided by the total energy of all crystals within the jet, as in Equation (4). \[\Delta T_{mean}^{Ewt}=\frac{\sum\Delta T_{i}\times E_{i}}{\sum E_{i}}\] (4)

* \(\mathbf{\Delta T_{mean}^{ETwt}}\): The transverse energy-weighted mean timing of all ECAL crystals in the jet, as shown in Equation (5). \[\Delta T_{mean}^{ETwt}=\frac{\sum\Delta T_{i}\times E_{T,i}}{\sum E_{T,i}}\] (5)

Figure 9: Ratio of sum of \(p_{T}\) of tracks associated with MTD hits within \(\Delta R<0.3\) of the calorimeter jet and jet \(p_{T}\), as calculated using calorimeter inputs using the anti-\(k_{T}\) jet algorithm with R=0.3, for the QCD background, and the two LLP benchmark points BP-1 and BP-2, at the HL-LHC.
Before weighting with energy or transverse energy in the aforementioned timing variables, we adjust the timing of each ECAL crystal relative to the origin, as explained in Section 3.
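A compact sketch of the crystal-level timing observables defined in Equations (2)-(5), given the origin-calibrated time, energy and transverse energy of each ECAL crystal in the jet (the input format is an assumption of this sketch):

```python
# Per-jet ECAL timing observables from the crystals inside the jet.
import math
import statistics

def ecal_timing_variables(crystals):
    """crystals: list of (dt_ns, e_gev, et_gev), one entry per ECAL crystal."""
    n = len(crystals)
    dts = [dt for dt, _, _ in crystals]
    return {
        "dT_mean":   sum(dts) / n,                                   # Eq. (2)
        "dT_median": statistics.median(dts),
        "dT_RMS":    math.sqrt(sum(dt * dt for dt in dts) / n),      # Eq. (3)
        "sum_dT":    sum(dts),
        "dT_Ewt":    sum(dt * e for dt, e, _ in crystals)
                     / sum(e for _, e, _ in crystals),               # Eq. (4)
        "dT_ETwt":   sum(dt * et for dt, _, et in crystals)
                     / sum(et for _, _, et in crystals),             # Eq. (5)
    }

# Example: three crystals with (time delay [ns], E [GeV], ET [GeV]).
vals = ecal_timing_variables([(0.2, 12.0, 10.5), (1.4, 3.0, 2.6), (0.1, 20.0, 18.0)])
```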
We have also implemented two more different calibration techniques for the above-mentioned timing variables where we calibrate the timing of each crystal in the jet with respect to the primary vertex (PV) and the jet vertex (JV). The PV is determined by using prompt track collection. The vertex with the highest \(\sum p_{T}^{2}\) is selected as the PV. Similarly, the JV is determined by considering all prompt tracks associated with the jet, located within a distance of \(\Delta R<0.3\) from the jet axis at the ECAL. The vertex with the maximum \(\sum p_{T}^{2}\) is chosen as the JV.
Additionally, the mean timing of a jet is computed using only five or ten crystals, with the maximum time delay determined by multiplying the maximum value of time delay with the energy of the crystal, denoted as \((\Delta T\times E)_{mean}^{Max5},(\Delta T\times E)_{mean}^{Max10}\). The mean timing of the jet calculated using 5 and 10 most energetic crystals is denoted as \(\Delta T_{mean}^{Max5}\) and \(\Delta T_{mean}^{Max10}\), respectively. We additionally compute two quantities, namely \((\Delta T\times E)_{mean}^{TMax5}\) and \((\Delta T\times E)_{mean}^{EMax5}\) where we calculate mean timing of a jet using only five ECAL towers, using the maximum value of time delay multiplied by the energy of the crystals, and this product is divided by the timing and energy of the five ECAL towers possessing highest energy and timing values respectively.
For the quantities calculated above, if the jet contains fewer than five or ten towers, the values of \(\Delta T_{mean}^{Max5}\) and \(\Delta T_{mean}^{Max10}\) are assigned the same values as \(\Delta T_{mean}\), while the values of \((\Delta T\times E)_{mean}^{Max5}\) and \((\Delta T\times E)_{mean}^{Max10}\), as well as \((\Delta T\times E)_{mean}^{TMax5}\) and \((\Delta T\times E)_{mean}^{EMax5}\), are assigned the same values as \(\Delta T_{mean}^{Ewt}\). Introducing such variables in the analysis is crucial as they are more resistant to PU contamination. Additionally, using crystals with the highest \(\Delta T\times E\) values ensures that PU hits with low energy and high ECAL timing do not significantly affect these variables.
We also compute several other quantities using information about the tracks and calorimeter towers associated with the jet; a short sketch of a few of them follows the list:
* \(\mathbf{p_{T}^{Ratio}}\)- Sum of the \(p_{T}\) of all tracks associated with the jet within a distance of \(\Delta R<0.3\) from the jet axis, divided by the jet \(p_{T}\) as determined through calorimeter inputs using anti-\(k_{T}\) jet algorithm with R=0.3.
* Differences in the \((\eta,\phi)\) position of a jet as calculated using tracks and ECAL towers within the jet. The \((\eta,\phi)\) of the jet using tracks is calculated as the \(p_{T}\)-weighted mean of the \((\eta,\phi)\) of tracks contained within \(\Delta R<0.3\) of the jet. Similarly, the \((\eta,\phi)\) of the jet is calculated using the positions of ECAL crystals contained within the jet, re-weighting them with the crystal \(E_{T}\). For displaced LLPs, the position of jets constructed using available tracks will differ from the position of the jets constructed using ECAL crystals, since fewer and fewer displaced tracks are reconstructed as the decay length of the LLPs increases.
* Sum of the energy of ECAL towers within \(\Delta R=\) 0.3 of the jet axis.
* Fraction of energy deposited in hadron calorimeter (HCAL) compared to total jet energy.
* Number of prompt tracks associated with jet located within a \(\Delta R\) of less than 0.3 from the jet axis.
* Number of displaced tracks with \(|d_{0}|>\)1.5 mm associated with jet within \(\Delta R<\) 0.3 of jet axis.
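As an illustration of how a few of the quantities listed above could be evaluated per jet, here is a short sketch. The input names and units (track \(p_{T}\) in GeV, \(|d_{0}|\) in mm) are assumptions made for the example, not the actual analysis code.

```python
import numpy as np

def track_calo_quantities(trk_pt, trk_d0, trk_dr, jet_pt, ecal_e, hcal_e,
                          dr_max=0.3, d0_cut=1.5):
    """Illustrative per-jet quantities: p_T^Ratio, prompt/displaced track counts
    within dR < 0.3 of the jet axis, and the HCAL energy fraction."""
    trk_pt, trk_d0, trk_dr = map(np.asarray, (trk_pt, trk_d0, trk_dr))
    in_cone = trk_dr < dr_max
    return {
        "pt_ratio": trk_pt[in_cone].sum() / jet_pt,
        "n_prompt_trk": int(np.sum(in_cone & (np.abs(trk_d0) <= d0_cut))),
        "n_disp_trk": int(np.sum(in_cone & (np.abs(trk_d0) > d0_cut))),
        "hcal_energy_fraction": hcal_e / (ecal_e + hcal_e),
    }

print(track_calo_quantities(trk_pt=[30.0, 12.0, 5.0], trk_d0=[0.1, 2.5, 0.05],
                            trk_dr=[0.05, 0.12, 0.45], jet_pt=95.0,
                            ecal_e=60.0, hcal_e=40.0))
```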
In Figure 10, we show the distributions of three important timing variables constructed using the information from ECAL, namely \((\Delta T\times E)_{mean}^{Max5}\), \(\sum\Delta T\), and \(\Delta T_{mean}^{ETwt}\), for the QCD background and the two LLP benchmark points, BP-1 and BP-2.
As we can see from Figure 10, the LLP benchmarks exhibit a longer tail in the timing distribution when compared to the QCD background, as expected. Compared to BP-1, where the LLP has a decay length of 10 cm, the discrimination is more pronounced for BP-2, where the LLP has a decay length of 100 cm. The plot for \(\sum\Delta T\), which shows the sum of the time delay for all the hits in the ECAL, also demonstrates a significant difference between the LLP benchmarks and the background, with a more prominent difference for BP-2. The timing variables \((\Delta T\times E)_{mean}^{Max5}\) and \(\sum\Delta T\) show comparatively better discrimination between the jet timing for QCD and LLP when LLPs with lower decay lengths are considered.
Now, we study the correlation between different timing variables constructed using ECAL timing. Our aim is to identify variables that exhibit high correlation factors for both signal and background, while contributing little to distinguishing between them. Such redundant variables can be omitted from the analysis, thus improving the efficiency and interpretability of the analysis. Figures 15 and 16 in Appendix A illustrate the correlation matrices for the LLP benchmark BP-2 and the QCD background with \(H_{T}^{gen}=\{500-600\}\) GeV, respectively. As we can see from the figures, several variables show strong correlations with each other for both signal and background. Such variables can be termed redundant and thus excluded from the final analysis. On the other hand, some variables show a strong correlation for signal while exhibiting a weak correlation for background; such variables can be helpful in distinguishing the signal from the background.
Now, with the definitions of various physics variables as described above, we divide the offline analysis of the events selected through the triggers defined in Section 3 into three separate and independent parts, as described below:
* **Cut-based analysis (CBA)**: In order to get a handle on LLP scenarios with shorter decay lengths, we adopt a cut-based approach to efficiently select signal events while significantly rejecting background contribution. We apply an appropriate two-dimensional cut on two variables: \(N_{trk}^{disp}\), which represents the number of displaced tracks associated with the secondary vertex, and \(M_{DV}\), which represents the invariant mass of the displaced vertex as defined earlier.
* **Multi-variate analysis-1 (MVA-1)**: To get a handle on the events with significant lifetime, a machine learning-based multi-variate analysis denoted as MVA-1 is performed independently on the jets from the events selected at L1 using variables constructed using the information from the MTD and associated information from the tracker as calculated previously. The variables used in this analysis are tabulated in Table 2.
* **Multi-variate analysis-2 (MVA-2)**: Similar to MVA-1, we conduct a separate MVA analysis referred to as MVA-2 on the jets from the events selected at L1 aimed
Figure 10: Energy-weighted mean timing of a jet, calculated exclusively from the 5 crystals having the maximum time delay, \((\Delta T\times E)_{mean}^{Max5}\) (left), sum of the timing of all ECAL crystals associated with a jet, \(\sum\Delta T\) (middle), and the transverse energy-weighted mean timing of the jet, \(\Delta T_{mean}^{ETwt}\) (right), for the QCD background and the two LLP benchmark points BP-1 and BP-2, at the HL-LHC.
at LLP scenarios with large lifetime where we utilize variables that are constructed using information from the ECAL as well as associated data from the tracker, as calculated previously. The variables used for this analysis are listed in Table 2.
Dividing the analysis into three independent parts with CBA focused on LLPs with smaller decay lengths and MVA-1 and MVA-2 focused on LLPs with larger decay lengths helps us address the challenge associated with background suppression and signal extraction across a wide range of decay lengths, from very small to very large. Utilizing these three approaches ensures the analysis remains sensitive to various LLP scenarios with a spectrum of decay lengths.
The final signal significance for each LLP benchmark point is calculated by combining results from the abovementioned approaches after removing the duplicate events. In the following sections, we will provide a detailed explanation of the analysis approaches mentioned above.
### Cut-based analysis (CBA)
Owing to the inclusion of events from the displaced Calo-Jet and track-\(H_{T}\) triggers at L1, the dominant contribution to the background will originate from jets coming from instrumental effects, QCD processes, and \(t\bar{t}\) events. The contribution from these background sources can be significantly reduced by requiring events with high \(H_{T}\), where \(H_{T}\) is calculated by summing over the \(p_{T}\) of all the jets in each event. Furthermore, as previously stated, an appropriate two-dimensional threshold cut on \(M_{DV}\) and \(N_{trk}^{disp}\) will also lead to a significant reduction in the background events. For the cut-based analysis, we apply the following selection cuts; a minimal sketch of this selection follows the list:
* **Event H\({}_{\bf T}\)**: We require events selected at L1 to possess event \(H_{T}\) greater than 500 GeV where \(H_{T}\) is calculated using jets with jet \(p_{T}>\) 40 GeV.
* **N\({}_{\bf trk}^{\bf Disp}\)**: We require at least one reconstructed secondary vertex with at least six associated displaced tracks, each with a transverse impact parameter (\(|d_{0}|\)) greater than 1.5 mm.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Analysis** & **CBA** & **MVA-1** & **MVA-2** \\ \hline \hline \multirow{8}{*}{**Variables**} & & & \(\Delta T_{mean}\), \(\Delta T_{mean}^{PV}\), \(\Delta T_{mean}^{PVJ}\), \(\sum\Delta T\), \\ & & & \(\sum\Delta T^{PV}\), \(\sum\Delta T^{PVJ}\), \(\Delta T_{mean}^{PVJ}\), \(\Delta T_{mean}^{PVJ}\), \\ & & \(N_{MTD}\), \(N_{MTD}^{Disp}\), \(N_{MTD}^{NT}\), & \(\Delta T_{mean}^{PET,PVJ}\), \(\Delta T_{mean}^{cut}\), \(\Delta T_{mean}^{PET,PVJ}\), \\ & & \(T_{raw}\), \(T_{min}^{Disp}\), \(T_{raw}^{NN}\), & \(\Delta T_{median}^{PET}\), \(\Delta T_{median}^{PET,PVJ}\), \\ & & \(T_{calib}\), \(T_{calib}^{Disp}\), \(T_{calib}^{NT}\), & \(\Delta T_{RMS}^{PVJ}\), \(\Delta T_{RMS}^{PET}\), \(\Delta T_{mean}^{MAT}\), (\(\Delta T\times E\))\({}_{mean}^{Max5}\), \\ & & \(p_{T}^{Ratio}\), \(p_{T}^{jet}\), \(\eta^{jet}\) & (\(\Delta T\times E\))\({}_{mean}^{Max5}\), \\ & & & \(\Delta T_{mean}^{Max10}\), (\(\Delta T\times E\))\({}_{mean}^{Max10}\), \\ & & & \(p_{T}^{Ratio}\), \(\frac{E_{max}}{E_{total}}\), \(\sum E_{tow}\), \(N_{trk,prompt}^{jet}\), \\ & & & \(N_{trk,disp}^{jet}\), \(p_{T}^{jet}\), \(\eta^{jet}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Different physics variables to be used in cut-based analysis (CBA) and two independent multi-variate analyses (MVA-1 and MVA-2)
* \(\mathbf{M_{DV}}\): We require the invariant mass of the reconstructed secondary vertex to be greater than 20 GeV.
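The three requirements above can be expressed as a small event filter. The sketch below is only schematic, and the event-record layout (keys `jet_pt` and `secondary_vertices`) is a hypothetical structure introduced for illustration.

```python
def passes_cba(event, ht_min=500.0, jet_pt_min=40.0,
               min_disp_tracks=6, d0_min=1.5, mdv_min=20.0):
    """Schematic cut-based selection: event H_T > 500 GeV, and at least one
    secondary vertex with >= 6 displaced tracks (|d0| > 1.5 mm) and M_DV > 20 GeV."""
    # Event H_T from jets with pT > 40 GeV.
    ht = sum(pt for pt in event["jet_pt"] if pt > jet_pt_min)
    if ht <= ht_min:
        return False
    # Require a displaced vertex with enough displaced tracks and large mass.
    for sv in event["secondary_vertices"]:
        n_disp = sum(1 for d0 in sv["track_d0"] if abs(d0) > d0_min)
        if n_disp >= min_disp_tracks and sv["mass"] > mdv_min:
            return True
    return False

# Toy example event passing all three cuts.
toy_event = {"jet_pt": [320.0, 210.0, 55.0],
             "secondary_vertices": [{"track_d0": [2.1, 3.4, 1.8, 2.9, 5.0, 2.2],
                                     "mass": 42.0}]}
print(passes_cba(toy_event))  # True
```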
Events selected after imposing the abovementioned cuts are sorted and stored for further analysis, where we combine results from MVA-1 and MVA-2 with CBA. Now, let us discuss the second and third approaches, MVA-1 and MVA-2.
### Multivariate analysis -1 (MVA-1)
In MVA-1, we utilise an XGBoost (Extreme Gradient Boosting) [121] model trained on physics variables constructed using MTD information to specifically target LLPs with longer lifetimes. XGBoost works by iteratively building a series of decision trees, where each tree corrects the errors made by the previous trees. To minimize the loss function, which measures the difference between the predicted and actual values, XGBoost uses a gradient descent optimization algorithm. In our analysis, we use the following set of XGBoost parameters to train our model for multi-class classification (a minimal training sketch follows the list):
* **objective**: The objective of our model was to perform multi-class classification using the 'multi:softprob' approach, which computes the predicted probabilities for each class.
* **num_class**: This parameter was set to 8, indicating the total number of classes in our multi-class classification problem: class 0 represents the signal, classes 1 to 5 represent the QCD background in different \(H_{T}^{gen}\) bins, and classes 6 and 7 represent the \(t\bar{t}\) and W+Jets backgrounds, respectively.
* **eval_metric**: We used 'mlogloss' as our evaluation metric, which calculates the multi-class logarithmic loss during the training process. It provides a measure of the model's performance.
* **learning_rate**: We utilized a learning rate of 0.1, which determines the step size at each boosting iteration.
* **early_stopping_rounds**: We implemented early stopping with a value of 5 for this parameter. This means that if the loss does not decrease further after 5 consecutive iterations, the training process is halted. The purpose of early stopping is to prevent over-training and improve the generalization ability of the model.
* **colsample_bytree**: This parameter was set to 0.3, indicating the fraction of columns to be randomly sampled for each tree during training.
* **max_depth**: We set the maximum depth of each tree in our model to 6. This restricts the depth of the individual trees, preventing overfitting and improving generalization.
* **alpha**: The alpha parameter was assigned a value of 4, which controls the L1 regularization term on the weights. It helps in reducing the complexity of the model and preventing overfitting.
* **tree_method:** The 'tree_method' parameter was set to 'gpu_hist', indicating the use of GPU acceleration for training the model.
* **num_boost_rounds**: To ensure convergence of the training and achieve the minimum loss, we set the number of boosting rounds to 1000 epochs. This value determined the maximum number of iterations performed during the training process.
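The parameter list above maps onto a training call roughly as follows. This is a minimal sketch with randomly generated stand-in data; in practice the feature matrix, labels, and per-jet weights would be the Table 2 variables, the eight class labels, and the cross-section weights described below.

```python
import numpy as np
import xgboost as xgb

# Stand-in data: 20 jet-level features, labels 0 (signal) to 7 (backgrounds).
rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(2000, 20)), rng.integers(0, 8, 2000), np.ones(2000)

dtrain = xgb.DMatrix(X[:1000], label=y[:1000], weight=w[:1000])
dvalid = xgb.DMatrix(X[1000:], label=y[1000:], weight=w[1000:])

params = {
    "objective": "multi:softprob",   # per-class probabilities
    "num_class": 8,
    "eval_metric": "mlogloss",
    "learning_rate": 0.1,
    "colsample_bytree": 0.3,
    "max_depth": 6,
    "alpha": 4,                      # L1 regularisation
    "tree_method": "gpu_hist",       # use "hist" if no GPU is available
}

model = xgb.train(params, dtrain, num_boost_round=1000,
                  evals=[(dvalid, "validation")], early_stopping_rounds=5)

# Column 0 of the prediction is the signal probability used for jet selection.
signal_prob = model.predict(dvalid)[:, 0]
```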
The XGBoost model is trained using a set of variables described in the third column of Table 2. We focus on LLP benchmark points with \(M_{\chi^{0}_{2}}/M_{\chi_{1}\pm}\)=1600 GeV, and vary the mass of the LLP, \(M_{\chi^{0}_{1}}\), from 500 GeV to 1000 GeV.
Events selected at L1 utilising the displaced and lepton triggers are further required to have offline event \(H_{T}\) greater than 500 GeV, where the event \(H_{T}\) is calculated in a similar manner as for CBA, as explained in Section 4.1. Only the 6 leading jets per event with transverse momentum greater than 100 GeV are considered in the training process to exclude most pileup jets and only keep the jets coming from the hard interaction. The decay length of the LLPs in our benchmark scenarios ranges from 1 cm to 500 cm. To account for different decay lengths, we train three separate XGBoost models, each targeting a specific range of decay lengths.
Three models are trained using LLP benchmark scenarios exhibiting decay lengths of 1 cm, 50 cm, and 200 cm to target LLPs with decay lengths in the ranges of 1 cm to 5 cm, 10 cm to 50 cm, and 100 cm to 500 cm, respectively. We choose the mass of the LLP, \(M_{\chi^{0}_{1}}=800\) GeV, for training the XGBoost models for each decay length, as it falls within the moderate range of LLP masses considered in our analysis, with \(M_{\chi^{0}_{2}}/M_{\chi_{1}\pm}\) fixed at 1600 GeV. Each jet in the training sample for the background is assigned a weight according to the process cross-section and the number of generated events for that particular sample, such that the sum of weights is unity. Signal jets are assigned unit weights. The trained XGBoost models are then utilized to classify LLPs in the respective decay length ranges in the subsequent analysis steps.
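A brief sketch of the per-jet weighting just described; the sample bookkeeping (cross-sections, numbers of generated events, jets per sample) is purely illustrative.

```python
import numpy as np

def background_jet_weights(xsec, n_generated, n_jets):
    """Per-jet weights for background samples: each sample gets a weight
    proportional to (cross-section / number of generated events), repeated for
    every jet in that sample, then normalised so all weights sum to unity."""
    per_sample = np.asarray(xsec, float) / np.asarray(n_generated, float)
    w = np.repeat(per_sample, n_jets)
    return w / w.sum()

# Two hypothetical background samples contributing 4 and 3 jets.
w = background_jet_weights(xsec=[1.2e4, 8.3e2], n_generated=[3.0e6, 5.0e6],
                           n_jets=[4, 3])
print(w, w.sum())  # per-jet weights, summing to 1
```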
We divide the jets selected at L1 and after the pre-selection cuts, as defined above, into training and testing datasets of equal size. We have approximately 3600k jets from \(t\bar{t}\) events and 2200k jets from QCD dijet events, the two dominant background sources. For signal benchmark points with decay lengths of 1 cm, 50 cm, and 200 cm, we have approximately 5800k, 5400k, and 4600k jets respectively.
To highlight the significance of timing information in selecting LLP events with large lifetime, we show the feature importance of three crucial variables for the MTD in three different LLP scenarios, with decay lengths of 1 cm, 50 cm, and 200 cm, in Figure 11 (_left_). Feature importance is evaluated using the gain metric, which quantifies the improvement in accuracy achieved by a feature in the decision tree branches. In the case of MTD, timing information is derived from tracks (displaced or prompt) that leave hits in the MTD or from hits with no associated tracks. As shown in Figure 11 (_left_), jet timing calculated using MTD hits with associated tracks performs well for LLP scenarios where the decay length allows for the reconstruction of a larger number of tracks, including displaced ones. However, for LLPs with very long decay lengths, the timing of jets calculated using MTD
hits with no associated tracks gains more significance due to the abundance of MTD hits without associated tracks.
In Figure 11 (_right_), we present the signal efficiency versus background rejection in terms of Receiver Operating Characteristic (ROC) curves for three different decay lengths, namely 1 cm, 50 cm, and 200 cm, for LLPs with \(M_{\chi^{0}_{2}}=1600\) GeV and \(M_{\chi^{0}_{1}}=800\) GeV. The plots demonstrate that the MVA-1 approach, which incorporates timing information from the MTD, exhibits significantly improved performance for LLP scenarios with longer decay lengths compared to those with shorter decay lengths while maintaining good performance for LLP scenarios with shorter decay lengths. This improvement can be attributed to the inclusion of timing information from MTD, which aids in better discriminating between signal and background events, particularly for LLPs with longer decay lengths.
To finalize the event selection, we impose the prerequisite of at least one jet in every event that exhibits a very high signal probability. This signal probability is determined based on the amount of background rejection required, which will depend on the decay length of the LLP on which the model was trained.
### Multivariate analysis -2 (MVA-2)
For MVA-2, we follow the same training strategy as outlined in the previous section for MVA-1. However, we utilize a different set of variables to train the XGBoost models, listed in the fourth column of Table 2. In this case, physics variables are constructed using timing information from the ECAL instead of MTD, as was done for MVA-1. Similar to MVA-1, the main objective of MVA-2 is to identify LLPs with longer lifetimes effectively.
For MVA-2, we have approximately 2200k jets from \(t\bar{t}\) events and 1400k jets from QCD dijet events, which are the two dominant background sources. As for the signal benchmark points with decay lengths of 1 cm, 50 cm, and 200 cm, we have approximately 5600k,
Figure 11: Relative feature importance of three important variables of MVA-1 for three LLP scenarios where decay length is 1 cm, 50 cm and 200 cm (_left_) and classification in terms ROC for two dominant background (QCD and \(t\bar{t}\)) for LLP with decay length 1 cm, 50 cm and 200 cm (_right_).
5200k, and 4400k jets respectively.
We now study the performance of three crucial physics variables included in MVA-2 regarding their relative importance in classifying jets in signal and background for three LLP scenarios with decay lengths of 1 cm, 50 cm and 200 cm. Similar to MVA-1, we utilize the gain metric to quantify the importance. The relative feature importance of these variables is shown in Figure 12 (left).
As we can see from Figure 12 (_left_), a higher relative importance is assigned to the timing variables, \(\Delta T_{mean}^{ETwt}\) and \((\Delta T\times E)_{mean}^{Max5}\), for LLPs with larger decay lengths compared to LLPs with smaller decay lengths, as expected. Similarly, \(p_{T}^{Ratio}\) holds more significance for LLPs with larger decay lengths than for LLPs with smaller decay lengths. This behaviour can be understood from the fact that a more significant mismatch arises between the jet \(p_{T}\) calculated using tracks within the jet and the calorimeter jet \(p_{T}\) as the LLP decay length increases, resulting from fewer displaced tracks being reconstructed.
Here, we would also like to highlight the importance of incorporating energy or transverse energy re-weighting when calculating the timing of the jet. Energy-weighted timing variables exhibit higher significance in classification than timing variables without energy re-weighting. This difference arises from the fact that considering energy-weighted quantities helps mitigate the PU contamination in the jet timing. Since PU energy deposits are soft, their effect on the timing of the jet is reduced after taking their energy into account to construct the jet timing.
In Figure 12 (_right_), we show the ROC curves for three different decay lengths of the LLP, considering the QCD and \(t\overline{t}\) backgrounds separately, for the same LLP benchmark scenario described in the previous section. We can observe that MVA-2 performs better for LLPs with decay lengths of c\(\tau\) = 50 cm and 200 cm than for c\(\tau\) = 1 cm, emphasizing the vital role of ECAL timing in distinguishing highly displaced LLPs from the background.
Figure 12: Relative feature importance of three important variables of MVA-2 for three LLP scenarios where decay length is 1 cm, 50 cm and 200 cm (_left_) and classification in terms ROC for two dominant background (QCD and \(t\overline{t}\)) for LLP with decay length 1 cm, 50 cm and 200 cm (_right_).
In order to make a final selection of events, we impose a criterion that requires at least one jet in each event to have a very high signal probability. This effectively eliminates the majority of the jets originating from background sources.
Next, we will quantify the results obtained from CBA, MVA-1 and MVA-2 in terms of signal significance.
## 5 Results
The final signal significance is determined by combining the outcomes of MVA-1, MVA-2, and CBA, while ensuring that duplicate events are excluded from the final event selection. The signal (S) or background (B) yield is calculated using the following equation:
\[S\text{ or }B=\sigma_{\text{process}}\times\epsilon\times\mathcal{L} \tag{10}\]
where \(\sigma_{\text{process}}\) represents the production cross-section of the process, \(\epsilon\) represents the selection efficiency, and \(\mathcal{L}\) represents the integrated luminosity. The selection efficiency is determined by dividing the number of finally selected events, obtained after combining the results from MVA-1, MVA-2, and CBA, by the total number of events. In this analysis, integrated luminosity of 3000 fb\({}^{-1}\) for HL-LHC is considered.
Finally, for each signal benchmark point, we calculate signal significance using the following formula:
\[S_{\text{sig}}=\frac{S}{\sqrt{B}} \tag{11}\]
where \(S_{\text{sig}}\) represents the signal significance, and \(S\) and \(B\) represent the signal and background yields, respectively.
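Equations 10 and 11 amount to the following small helper. The cross-section units are an assumption for the example (fb, so that with the luminosity in fb\({}^{-1}\) the yield is a raw event count), and the closing check only approximately reproduces the Table 3 numbers for the 1 cm benchmark.

```python
import math

def expected_yield(xsec_fb, efficiency, lumi_fb=3000.0):
    """S or B = sigma_process x selection efficiency x integrated luminosity."""
    return xsec_fb * efficiency * lumi_fb

def signal_significance(s, b):
    """S_sig = S / sqrt(B)."""
    return s / math.sqrt(b)

# Roughly reproduces the Table 3 significance for the 1 cm benchmark,
# using the quoted signal and background yields S ~ 215 and B ~ 534.
print(round(signal_significance(215.0, 534.0), 2))  # ~9.3
```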
In Table 3, we present the number of events, yield, and signal significance obtained from CBA, MVA-1 and MVA-2 for three LLP benchmark points with decay lengths of 1 cm, 50 cm, and 200 cm, and \(M_{\chi^{0}_{2}}=1600\) GeV and \(M_{\chi^{0}_{1}}=800\) GeV, along with three background sources.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Events} & \multicolumn{1}{c|}{Total (mil)} & \multicolumn{1}{c|}{CBA} & \multicolumn{1}{c|}{MVA-1} & \multicolumn{1}{c|}{MVA-2} & \multicolumn{1}{c|}{Combined} & Yield & \multicolumn{1}{c|}{\(S_{sig}\)} \\ \hline \multirow{2}{*}{\(M_{\chi^{0}_{1}}=800\) GeV} & 1cm & 0.5 & 444681 & 29154 & 49406 & 447699 & 215 & 9.28 \\ \cline{2-9} & 50 cm & 0.5 & 404420 & 330181 & 365242 & 455612 & 219 & 9.46 \\ \cline{2-9} \(M_{\chi^{0}_{2}}=1600\) GeV & 200 cm & 0.5 & 219169 & 274411 & 333449 & 415166 & 200 & 8.59 \\ \hline \multicolumn{2}{|c|}{QCD} & 3 & 0 & 0 & 0 & 0 & 0 & 0 & - \\ \hline \(t\bar{t}\) & 5 & 0 & 0 & 1 & 1 & 534.0 & - \\ \hline W+Jets & 1 & 0 & 0 & 0 & 0 & 0 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: Total number of events for signal and background obtained individually from the CBA, MVA-1, and MVA-2 analyses, as well as the combined number of events and yield for both signal and background. \(S_{\text{sig}}\) represents the signal significance for the three chosen benchmark points with decay lengths of 1 cm, 50 cm, and 200 cm, and \(M_{\chi^{0}_{2}}=1600\) GeV and \(M_{\chi^{0}_{1}}=800\) GeV.
We generate 0.5 million events for each LLP benchmark. At L1, where we select events with the lepton triggers, the displaced calo-jet trigger, and the track-\(H_{T}\) trigger, LLP events are selected with more than 90% efficiency, with the efficiency decreasing as the decay length increases. Further, we select events with \(H_{T}>\) 500 GeV. Since QCD events are generated with \(H_{T}^{Gen}>\) 500 GeV, most QCD events pass this cut. For CBA, events with at least one secondary vertex with at least six displaced tracks and \(M_{DV}>\) 20 GeV are selected. Events with LLPs having smaller decay lengths are mostly selected, while the signal efficiency decreases with increasing decay length, dropping to less than 50% for decay lengths above 200 cm. We find no background events passing the criteria mentioned above. The instrumental background is also handled since we require \(M_{DV}>\) 20 GeV. Next, out of the jets selected at L1, we select jets by applying a suitable cut on the signal probability obtained from the trained XGBoost models of MVA-1 and MVA-2 separately. Those events are selected in which we find at least one jet passing the selection criterion on the signal probability.
MVA-1 and MVA-2 surpass CBA in identifying LLPs with a 200 cm decay length, with MVA-2 outperforming CBA most notably for \(c\tau=\) 200 cm. The combined usage of MVA-1 and MVA-2 demonstrates significantly better signal efficiency than CBA alone, emphasizing the importance of timing information when searching for LLPs with long lifetimes. However, as anticipated, MVA-1 and MVA-2 exhibit poor performance for the LLP benchmark with \(c\tau=\) 1 cm compared to CBA. In Table 3, we also show the yield and signal significance for the three benchmark points for wino-like chargino-neutralino pair production at the HL-LHC, considering an integrated luminosity of \(\mathcal{L}=\) 3000 fb\({}^{-1}\). We obtain a signal significance of around 9\(\sigma\) for all three decay lengths. Remarkably, the signal significance does not degrade with decay length, which is attributed to the increased sensitivity of the analysis to LLPs with higher decay lengths, thanks to the inclusion of the timing information in the analysis.
We extend the analysis by calculating the signal significance for a set of LLP benchmark points following a similar procedure as described above and in Sections 4.1, 4.2, and 4.3. In Figure 13, we present the signal significance for numerous LLP benchmark points for the wino- and higgsino-like chargino-neutralino pair production scenarios, where \(M_{\chi^{0}_{2}}=\) 1600 GeV and \(M_{\chi^{0}_{1}}\) varies from 500 GeV to 1000 GeV, with the decay length varying from 1 cm to 500 cm in the form of a grid. As mentioned earlier, we train three different XGBoost models for three different decay lengths, namely 1 cm, 50 cm, and 200 cm, with \(M_{\chi^{0}_{2}}=\) 1600 GeV and \(M_{\chi^{0}_{1}}=\) 800 GeV. The model trained with the LLP benchmark with a decay length of 1 cm is applied to LLPs with decay lengths varying between 1 cm and 5 cm. The LLP model trained with a decay length of 50 cm is applied to LLPs with decay lengths between 10 cm and 100 cm, while the LLP model trained with a decay length of 200 cm is reserved for LLPs with very high decay lengths greater than 200 cm.
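The assignment of trained models to decay-length ranges just described can be summarised as a small lookup; the model dictionary and file names are hypothetical placeholders.

```python
def model_for_decay_length(ctau_cm, models):
    """Choose which trained XGBoost model to apply, following the ranges in the
    text: the 1 cm model for 1-5 cm, the 50 cm model for 10-100 cm, and the
    200 cm model for larger decay lengths."""
    if ctau_cm <= 5.0:
        return models["ctau_1cm"]
    if ctau_cm <= 100.0:
        return models["ctau_50cm"]
    return models["ctau_200cm"]

models = {"ctau_1cm": "model_1cm.json", "ctau_50cm": "model_50cm.json",
          "ctau_200cm": "model_200cm.json"}
print(model_for_decay_length(200.0, models))  # model_200cm.json
```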
From Figure 13, we observe a general trend: the signal significance tends to decrease as the decay length of the LLP increases. This results from fewer LLPs decaying within the tracker and calorimeter volumes as the decay length of the LLP increases. Moreover, the signal significance decreases with a decrease in the LLP mass.
For wino-like chargino-neutralino pair production, a maximum signal significance of
approximately 10 is observed for LLPs with a decay length of 5 cm across all mass points. At the smallest decay length of 1 cm, the signal significance ranges from 8.82 (for \(M_{\chi_{1}^{0}}=1000\) GeV) to 9.60 (for \(M_{\chi_{1}^{0}}=500\) GeV). As the decay length increases to the maximum of 500 cm, the signal significance decreases to a range of 7.04 (for \(M_{\chi_{1}^{0}}=1000\) GeV) and 4.26 (for \(M_{\chi_{1}^{0}}=500\) GeV). For \(M_{\chi_{1}^{0}}=800\) GeV, the signal significance starts at 9.29 for LLP with a 1 cm decay length and drops to 6.29 at a 500 cm decay length. This decrease in signal significance with increasing decay length is consistent across all mass points. Regarding probing and discovery potential at HL-LHC, all mass points exhibit a signal significance greater than two across all decay lengths, implying the potential for probing at the HL-LHC. For discovery potential at the HL-LHC, defined as a signal significance greater than 5, our analysis suggests that all mass points maintain discovery potential up to a decay length of 500 cm except for \(M_{\chi_{1}^{0}}=500\) GeV at 500 cm decay length.
For higgsino-like chargino-neutralino pair production with a relatively smaller cross-section than a wino-like signature, a maximum signal significance of approximately 2.6 is observed for LLP benchmark points with decay lengths greater than 5 cm and 10 cm across all LLP mass points. At a decay length of 1 cm, the signal significance varies from 2.52 (at \(M_{\chi_{1}^{0}}=500\) GeV) to 2.32 (at \(M_{\chi_{1}^{0}}=1000\) GeV). As the decay length extends to 500 cm, the signal significance decreases, with values ranging from 1.85 (at \(M_{\chi_{1}^{0}}=1000\) GeV) to 1.12 (at \(M_{\chi_{1}^{0}}=500\) GeV). When considering the probing potential at the HL-LHC, it is important to note that all mass points maintain a signal significance greater than 2 for decay lengths up to 200 cm, except for the LLPs with 500 and 600 GeV mass at a decay length of 200 cm. It is worth mentioning that signal significance increases with the mass of the LLPs. Thus, LLPs with a mass greater than 1000 GeV and a decay length of 500
Figure 13: Signal significance for LLP benchmark points with \(M_{\chi_{2}^{0}}=1600\) GeV and \(M_{\chi_{1}^{0}}\) varying from 500 GeV to 1000 GeV while decay length varies from 1 cm to 500 cm for wino and higgsino-like chargino-neutralino pair production.
cm have the potential to be probed at HL-LHC.
We also present signal significance for wino-like chargino-neutralino pair production for \(M_{\chi_{2}^{0}}=1800\) GeV and 1900 GeV as shown in Figure 14.
For \(M_{\chi_{2}^{0}}=1800\) GeV, we observe a maximum signal significance of \(\approx\)3.8 at \(M_{\chi_{1}^{0}}=600\) GeV with a 5 cm decay length. LLPs with mass \(M_{\chi_{1}^{0}}>600\) GeV maintain a signal significance over two at a decay length of 500 cm, indicating the potential for probing at the HL-LHC. For \(M_{\chi_{2}^{0}}=1900\) GeV, signal significance generally decreases with increased decay lengths, but higher mass points retain better values, suggesting a stronger HL-LHC probing potential. Particularly at the decay length of 200 cm, all mass points from \(M_{\chi_{1}^{0}}=800\) GeV and above maintain signal significance above the threshold of 2. For LLP with \(M_{\chi_{1}^{0}}=1000\) GeV and 500 cm decay length, a signal significance of 1.72 is observed, which is close to the probing threshold. Furthermore, as the mass of the LLPs at the decay length of 500 cm increases, the signal significance is expected to rise further. Therefore, LLPs with masses greater than 1000 GeV could be probed at HL-LHC.
## 6 Summary and conclusion
The exploration of supersymmetry (SUSY) continues to be crucial in investigating physics beyond the Standard Model, driven by strong theoretical and phenomenological motivations. Although both R-parity conserving and violating scenarios of SUSY have been extensively studied using prompt physics signatures, there is a scarcity of realistic phenomenological studies targeting the search for SUSY via exotic displaced signatures in R-parity violating (RPV) SUSY, especially in the context of HL-LHC. In this work, we
Figure 14: Signal significance for LLP benchmark points with \(M_{\chi_{2}^{0}}=1800\) GeV and 1900 GeV and \(M_{\chi_{1}^{0}}\) varying from 500 GeV to 1000 GeV while decay length varies from 1 cm to 500 cm for wino-like chargino-neutralino pair production.
particularly analyze the pair production of electroweakinos, \(\chi^{0}_{2}\) and \(\chi^{\pm}_{1}\), and their decay into Higgs boson and W boson, respectively, along with \(\chi^{0}_{1}\). The \(\chi^{0}_{1}\) then undergo further decay to light quarks, facilitated by small values of the RPV couplings \(\lambda^{{}^{\prime\prime}}\), resulting in \(\chi^{0}_{1}\) with longer lifetimes.
In order to efficiently select events at the Level-1 trigger level, we have used three triggers: Track-\(H_{T}\), Displaced Calo-Jet, and Single TkIsoLepton. The first two triggers are specifically designed for displaced searches. Our analysis shows that the Displaced Calo-Jet trigger is highly effective in selecting long-lived particle (LLP) events in which the LLP has a longer lifetime, while the Track-\(H_{T}\) trigger is primarily efficient in selecting LLP events with smaller decay lengths. By combining these three triggers, we demonstrate the ability to effectively select LLP events across a wide range of decay lengths, ranging from very small to very high, with a high level of efficiency. This highlights the complementary nature of these triggers in capturing LLP signatures with varying decay lengths, and underscores their effectiveness in our study. In the following step, we construct several physics variables by utilizing information from the tracker, MTD, and calorimeters. The analysis is subdivided into three parts, namely cut-based analysis (CBA), multivariate analysis-1 (MVA-1), and multivariate analysis-2 (MVA-2). The cut-based analysis incorporates displaced vertex information, while MVA-1 and MVA-2 employ timing information from the MTD and ECAL, respectively. Our findings indicate that LLPs with shorter decay lengths can be effectively searched for using the cut-based analysis. However, for LLPs with longer decay lengths, where displaced vertex information alone may not be sufficient, timing-based analyses such as MVA-1 and MVA-2 provide effective selection methods. These results contribute to the understanding of the best approaches for identifying LLPs in different decay length scenarios, considering the limitations of displaced vertex information and the potential of timing-based analyses in the context of this study.
Finally, we calculate the signal significance for LLPs in different benchmark scenarios. We vary the mass of LLPs from 500 GeV to 1000 GeV and the decay length from 1 cm to 500 cm for both wino-like and higgsino-like electroweakino pair production scenarios, with a degenerate chargino/neutralino mass, \(M_{\chi^{0}_{2}}/M_{\chi^{\pm}_{1}}=1600\) GeV. Our results show that LLPs in the wino-like chargino/neutralino pair production scenario, for all benchmark points discussed, have the potential to be probed at the HL-LHC with signal significance greater than or equal to \(5\sigma\) for all LLP masses except for LLP with mass 500 GeV at 500 cm decay length where signal significance is less than 5 but greater than 2. However, the significance decreases for the higgsino-like scenario. Nonetheless, the majority of the benchmark points exhibit signal significance greater than \(2\sigma\) except for LLPs at 500 cm decay length and LLPs with mass \(\leq\)600 GeV at 200 cm decay length, suggesting that they can be probed at the HL-LHC. In comparison, the ATLAS study [56] which examines the pair production of electroweakinos in four channels in a pure higgsino state, using the processes \(pp\to\chi^{\pm}_{1}\chi^{0}_{2}\), \(\chi^{0}_{2}\chi^{0}_{1}\), \(\chi^{+}_{1}\chi^{-}_{1}\), and \(\chi^{\pm}_{1}\chi^{0}_{1}\) at 13 TeV, rules out electroweakinos with masses below roughly 1250 GeV for a decay length of 200 cm. Our analysis, focusing only on the \(\chi^{\pm}_{1}\chi^{0}_{2}\) production channel, projects the exclusion mass limit for electroweakinos to 1600 GeV at the same decay length.
We also calculate the signal significance for the heavier electroweakinos with \(M_{\chi^{0}_{2}}/M_{\chi^{\pm}_{1}}\) of 1800 and 1900 GeV for wino-like LLP signatures. For \(M_{\chi^{0}_{2}}/M_{\chi^{\pm}_{1}}=\) 1800 GeV, all mass points, except for \(M_{\chi^{0}_{1}}\leq 600\) GeV, retain a signal significance above 2 across all decay lengths. For \(M_{\chi^{0}_{2}}=1900\) GeV, despite the general decrease in signal significance with increased decay lengths, higher mass points sustain stronger values, thereby indicating their probing potential at the HL-LHC. In particular, all mass points from \(M_{\chi^{0}_{1}}=800\) GeV and higher maintain a signal significance above 2 at a decay length of 200 cm.
## Acknowledgement
BB acknowledges the support provided by the MATRICS Grant(MTR/2022/000264) of the Science and Engineering Research Board (SERB), Government of India.
## Appendix A Correlation matrix for ECAL timing variables
Figures 15 and 16: Correlation matrices of the timing variables for the LLP benchmark BP-2 and the QCD background with \(H_{T}^{gen}=\{500-600\}\) GeV, respectively. |
2307.12525 | Critical Prandtl number for Heat Transfer Enhancement in Rotating
Convection | Rotation, which stabilizes flow, can enhance the heat transfer in
Rayleigh-B\'enard convection (RBC) through Ekman pumping. In this Letter, we
present the results of our direct numerical simulations of rotating RBC,
providing a comprehensive analysis of this heat transfer enhancement relative
to non-rotating RBC in the parameter space of Rayleigh number ($Ra$), Prandtl
number ($Pr$), and Taylor number ($Ta$). We show that for a given $Ra$, there
exists a critical Prandtl number ($Pr_{cr}$) below which no significant heat
transfer enhancement occurs at any rotation rate, and an optimal Prandtl number
($Pr_{opt}$) at which maximum heat transfer enhancement occurs at an optimal
rotation rate ($Ta_{opt}$). Notably, $Pr_{cr}$, $Pr_{opt}$, $Ta_{opt}$, and the
maximum heat transfer enhancement all increase with increasing $Ra$. We also
demonstrate a significant heat transfer enhancement up to $Ra=2\times 10^{10}$
and predict that the enhancement would become even more pronounced at higher
$Ra$, provided $Pr$ is also increased commensurately. | Mohammad Anas, Pranav Joshi | 2023-07-24T04:46:51Z | http://arxiv.org/abs/2307.12525v3 | # Critical Prandtl number for Heat Transfer Enhancement in Rotating Convection
###### Abstract
Rotation can enhance the heat transfer in thermal convection at low and moderate Rayleigh number (\(Ra\)). However, there has been no evidence of such enhancement at high Rayleigh number (\(Ra\gtrsim 10^{10}\)), which is relevant for most large-scale natural phenomena. In this Letter, we show that rotation can enhance the heat transfer significantly even for high Rayleigh numbers (\(\gtrsim 10^{10}\)), provided the Prandtl number is greater than a critical value, \(Pr_{cr}\), that increases with \(Ra\). We also predict that heat transfer enhancement due to rotation not only would occur at \(Ra>10^{10}\) but would also become more pronounced.
Thermal convection under the influence of background rotation manifests in various geophysical and astrophysical flows, such as flows occurring within the Earth's atmosphere, oceans, and outer core [1; 2; 3], gaseous planets like Jupiter [4; 5], and solar interiors [6]. Rotation, which introduces the Coriolis force into the system, significantly affects the characteristics of these flows, including heat and momentum transfer [7; 8]. The canonical model to study the behavior of such systems is rotating Rayleigh-Benard convection (RBC), in which fluid motion occurs between a hot plate (at the bottom) and a cold plate (at the top) as a consequence of the thermal buoyancy while the system rotates along an axis parallel to the gravity [7].
Rotating RBC is primarily governed by three dimensionless parameters: the Rayleigh number (\(Ra\)), which represents the strength of the buoyancy force over the dissipative forces, the Prandtl number (\(Pr\)), which represents the ratio of the momentum diffusivity to thermal diffusivity, and the Taylor number (\(Ta\)), which represents the strength of the Coriolis force relative to the viscous force. To characterize the relative strength of convection over rotation, convective Rossby number (\(Ro=\sqrt{Ra/TaPr}\)) is commonly used. When \(Ro\gg 1\), the buoyancy force dominates over the Coriolis force, and the heat transfer characteristics of rotating RBC systems are similar to those of corresponding non-rotating RBC [9; 10; 11]. On the other hand, when \(Ro\ll 1\), rotation becomes dominant and the heat transfer in rotating RBC, as compared to that of non-rotating case, is severely suppressed. Such rotating RBC system exhibits similarities to geostrophic flow, which is characterized by a force balance between pressure gradient and the Coriolis force [9; 12; 13].
Rotation, which suppresses the intensity of flow, enhances the heat transfer in rotating RBC for a certain range of \(Ra\), \(Pr\), and \(Ta\)[14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. This heat transfer enhancement in rotating RBC as compared to the non-rotating case is ascribed to Ekman pumping. Rotation generates columnar vortices aligned with the rotation axis in the flow, which in turn induce a secondary motion (parallel to the rotation) within the viscous boundary layer [30]. This secondary motion facilitates the transport of hot fluid (at the bottom plate) and cold fluid (at the top plate) from the thermal boundary layers, leading to this enhancement in the heat transfer [13; 19; 31].
Although ample evidence for this heat transfer enhancement in rotating RBC is found in earlier studies for moderate \(Ra\)[14; 15; 16; 17; 18; 11; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29], evidence for enhancement at high \(Ra\) (\(Ra\gtrsim 10^{10}\)) does not exist. Thus, it is commonly expected that there is no (or insignificant) heat transfer enhancement at \(Ra\gtrsim 10^{10}\) in rotating RBC [12; 13]. In this Letter, however, we demonstrate a clear evidence of significant heat transfer enhancement in rotating RBC even at \(Ra\geq 10^{10}\). We explore a very wide range of Prandtl numbers, including very high \(Pr\) (\(\sim\mathcal{O}(1000)\)) that have not been studied earlier for rotating convection, to uncover the existence of a 'critical' Prandtl number, \(Pr_{cr}\). We show that for each \(Ra\) (at least within the range of \(Ra=2\times 10^{4}-2\times 10^{10}\) explored in the present work), heat transfer enhancement will occur only if the Prandtl number is greater than \(Pr_{cr}\) that increases with increasing \(Ra\). In this work, we also provide a precise definition of the optimal Prandtl number \(Pr_{opt}\) for obtaining the maximum heat transfer enhancement at a given \(Ra\) and show that \(Pr_{opt}\) also increases with \(Ra\). Importantly, the present findings predict that heat transfer enhancement due to rotation not only would occur but also would become more pronounced at \(Ra>10^{10}\).
For this study, we perform direct numerical simulations (DNS) of rotating RBC for a wide range of parameters: \(Ra=g\beta\Delta H^{3}/(\nu\kappa)=2\times 10^{4}-2\times 10^{10}\), \(Pr=\nu/\kappa=1-1000\), and \(Ta=4\Omega^{2}H^{4}/\nu^{2}=0-2\times 10^{12}\), and measure the heat transfer in terms of the Nusselt number \(Nu=qH/(\lambda\Delta)\). Here, \(g\) is the acceleration due to gravity, \(\beta\) is the thermal expansion coefficient, \(\Delta\) is the temperature difference between the hot and cold plates, \(H\) is the separation between the plates, \(\nu\) is the kinematic viscosity, \(\kappa\) is the thermal diffusivity, \(\Omega\) is the system's rotation rate, \(\lambda\) is the thermal conductivity of the fluid, and \(q\) is the heat flux from the hot to cold plates. We perform simulations in a horizontally periodic rectangular domain of size \(L\times L\times H\) (\(L\times L\) in the horizontal directions) employing isothermal and no-slip (and impenetrable) boundary conditions at the top
(cold) and bottom (hot) plates. For the simulations of non-rotating RBC (\(Ta=0\)) at moderate \(Ra\), we use large aspect ratio (\(\Gamma=L/H\)) to avoid the effect of confinement on the Nusselt number [32]: \(\Gamma=8\) for \(Ra=2\times 10^{4}-10^{6}\) and \(\Gamma=4\) for \(Ra=10^{7}-10^{8}\). Considering the high computational cost at large \(\Gamma\) for high \(Ra\), we use \(\Gamma=1\) for \(Ra=5\times 10^{8}-2.3\times 10^{9}\) and \(\Gamma=0.5\) for \(Ra=10^{10}\). We use \(Nu\approx 0.12Ra^{0.30}\) (which we obtain by fitting the \(Nu\) data for \(Ra=10^{6}-10^{10}\) at \(Pr=100\)) to estimate \(Nu\approx 148\) for \(Ra=2\times 10^{10}\) and \(Pr=100\).
Since the horizontal length scale of the flow in rotating convection, \(\ell_{c}\), decreases with \(Ta\) as \(\ell_{c}=2.4Ta^{-1/6}H\)[7], we use relatively lower aspect ratios (half or one-fourth of those for the corresponding non-rotating RBC cases) for some simulations of rotating RBC. In all simulations of rotating RBC, we ensure \(\ell_{c}/L\lesssim 1/8\) to mitigate the effect of confinement on \(Nu\)[31; 33]. For more details about the simulations and the solver used in this study, please refer to the Supplementary Material [34].
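As a worked illustration of the definitions above, the control parameters and the rotational length-scale constraint can be evaluated as follows. The numerical inputs are arbitrary and only meant to show the relations, not to reproduce any simulation of this work.

```python
import math

def control_parameters(g, beta, delta, H, nu, kappa, omega):
    """Ra, Pr, Ta and the convective Rossby number Ro = sqrt(Ra / (Ta * Pr))."""
    Ra = g * beta * delta * H ** 3 / (nu * kappa)
    Pr = nu / kappa
    Ta = 4.0 * omega ** 2 * H ** 4 / nu ** 2
    Ro = math.sqrt(Ra / (Ta * Pr)) if Ta > 0 else float("inf")
    return Ra, Pr, Ta, Ro

def min_domain_width(Ta, H, cells=8):
    """Horizontal domain size L needed to keep ell_c / L <= 1/8,
    with ell_c = 2.4 * Ta**(-1/6) * H."""
    ell_c = 2.4 * Ta ** (-1.0 / 6.0) * H
    return cells * ell_c

# Arbitrary illustrative inputs in SI units.
Ra, Pr, Ta, Ro = control_parameters(g=9.81, beta=2.1e-4, delta=1.0, H=0.2,
                                    nu=1.0e-6, kappa=1.4e-7, omega=1.0)
print(Ra, Pr, Ta, Ro, min_domain_width(Ta, H=0.2))
```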
In Fig. 1, we show the variation of the normalized Nusselt number \(Nu/Nu_{0}\) (\(Nu_{0}\) is the Nusselt number for the non-rotating case) with the Taylor number, \(Ta\), and the inverse of the Rossby number, \(1/Ro\), for \(Pr=1-1000\) at \(Ra=[10^{7},10^{8},10^{9},10^{10}]\). We observe that when \(Pr\) is not too small, \(Nu/Nu_{0}\) first increases and then decreases as the rotation rate is increased. For each \(Ra\), the maximum enhancement in the heat transfer as compared to the non-rotating case occurs in a certain range of \(Pr\) and rotation rate. This maximum enhancement increases with increasing \(Ra\) and can reach up to approximately 25%, 40%, and 55% for \(Ra=10^{7}\), \(Ra=10^{8}\), and \(Ra=10^{9}\), respectively. Note that we observe a significant heat transfer enhancement (more than 40%) even at \(Ra=10^{10}\), which will be discussed later in greater detail.
Interestingly, we observe (see Fig. 1) that the Taylor number \(Ta\) serves as a better parameter than \(1/Ro\) in representing the optimal rotation rate at which the maximum enhancement occurs. Unlike the optimal rotation rate represented in terms of the inverse of Rossby number, (\(1/Ro_{opt}\)), the optimal Taylor number, \(Ta_{opt}\), is nearly independent of \(Pr\) when significant enhancement is observed at a given \(Ra\). Most earlier studies used \(1/Ro_{opt}\) to represent the optimal rotation rate but found \(1/Ro_{opt}\) to be strongly dependent on \(Pr\)[20; 19; 23]. Since the heat transfer enhancement due to rotation is largely controlled by the dynamics of the Ekman boundary layer [35], the thickness of which depends only on the Taylor number (which represents the ratio of Coriolis to viscous forces), \(Ta\) can be expected to represent better the heat transfer enhancement than \(1/Ro\) (which represents the ratio of Coriolis to buoyancy forces) at moderate and high rotation rates. This finding is also in line with the hypothesis of King _et al._[9] that the boundary layer controls the rotation-dominated regime in rotating RBC, rather than the balance between the buoyancy and Coriolis forces. Nonetheless, the beginning of the rotation-affected regime, i.e., the rotation rate at which \(Nu/Nu_{0}\) deviates from \(1\), is better represented by \(1/Ro\) than by \(Ta\), as seen from our results. Specifically, for \(Ra=10^{7}\) and \(10^{8}\), the rotation-affected regime begins at \(1/Ro\approx 0.2\), while for \(Ra=10^{9}\), it begins at \(1/Ro\approx 0.4\). These values are consistent with the findings of Stevens _et al._[22].
In Fig. 2, we show the variation of \(Ta_{opt}\) with \(Ra\) for various Prandtl numbers. As discussed earlier, we observe that at a given \(Ra\), the optimal Taylor number does not vary significantly with \(Pr\). Also, \(Ta_{opt}\) follows a power law close to \(Ta_{opt}\propto Ra^{1.5}\) up to a certain \(Ra\) and this limiting \(Ra\) for the power law seems to increase with increasing \(Pr\). Note that the Taylor number at which convection ceases completely (\(Ta_{cs}\)) also follows the scaling \(Ta_{cs}\propto Ra^{1.5}\) for \(Ta\gg 1\)[7].
In Fig. 3, we show the variation of the normalized maximum Nusselt number, \(Nu_{max}/Nu_{0}\), with \(Pr\) for \(Ra=2\times 10^{4}-2\times 10^{10}\). At any given \(Ra\) and \(Pr\), \(Nu_{max}\), by definition, corresponds to the Nusselt number at the optimal Taylor number for that \(Ra\) and \(Pr\). The maximum heat transfer enhancement represented by \(Nu_{max}/Nu_{0}\) for each \(Ra\) increases with \(Pr\) up to a certain \(Pr\) and then decreases. Note that for each \(Ra\) there exists a Prandtl number (obtained by extrapolating the data for each \(Ra\) to \(Nu_{max}/Nu_{0}=1\)) below which there will be no (or negligible) heat transfer enhancement at any rotation rate. We call this \(Pr\) the critical Prandtl number \(Pr_{cr}\). In rotating convection, columnar vortical structures are known to play an important role in the heat transfer by transporting the temperature anomaly from one wall to the other [19; 31]. However, for \(Pr<Pr_{cr}\), it is likely that the lateral diffusion of the heat/temperature anomaly away from the vortex columns restricts their ability to transport heat between the top and bottom walls: see Fig. 4 which shows the flow structure for different \(Pr\) at \(Ra=10^{8}\) and \(Ta\approx Ta_{opt}\) (\(Pr=2\) for Fig. 4(a) is close to \(Pr_{cr}\)) [23; 36]. As \(Pr\) increases, this effect is expected to weaken (e.g., see Fig. 4(b) and 4(c)), and so the heat transfer increases. However, the heat transfer enhancement decreases again at very large \(Pr\). At any \(Ra\), we define the Prandtl number at which \(Nu_{max}/Nu_{0}\) reaches its maximum as the optimal Prandtl number \(Pr_{opt}\) for that \(Ra\). Some studies (e.g., Stevens _et al._[23]), comparing the heat transfer at a constant \(Ro\), have proposed that at large \(Pr\), the Ekman boundary layer (\(\delta_{u}\)) is much thicker than the thermal boundary layer (\(\delta_{\theta}\)); consequently, the columnar vortices do not reach the thermal boundary layer and the fluid entering them is not as hot (or as cold), leading to a decrease in the heat transfer enhancement at large \(Pr\). However, we observe that the maximum enhancement (which occurs at \(Ta_{opt}\)) decreases at large Prandtl number even though \(1.3\lesssim\delta_{u}/\delta_{\theta}\lesssim 1.5\) for \(Pr>Pr_{opt}\) (see Supplementary Material [34]). We hypothesize that the higher viscous damping of the flow at very large
\(Pr\) weakens the vertical advection of heat by the columnar structures (see Fig. 4(d) in which these columns are observed to have diffused significantly in the lateral direction), resulting in maximum heat transfer at an intermediate (optimal) \(Pr\).
Here, we make an important observation. The results show a significant heat transfer enhancement even at \(Ra\geq 10^{10}\): approximately \(40\%\) at \(Ra=2\times 10^{10}\) for \(Pr=100\), and the trends indicate an even higher heat transfer enhancement for higher \(Pr\). This finding challenges a common expectation that there will be no (or negligible) heat transfer enhancement due to rotation at \(Ra\gtrsim 10^{10}\)[12; 13]. The present results predict that heat transfer enhancement due to rotation is possible even for \(Ra>10^{10}\) provided \(Pr>Pr_{cr}\). For \(Ra=10^{10}\), \(Pr_{cr}\approx 10\). This is the reason why most earlier studies, which use \(Pr<Pr_{cr}\), have reported no (or negligible) heat transfer enhancement for \(Ra\gtrsim 10^{10}\): Niemela _et al._[37] (for \(Pr=0.7-5.9\)), Stellmach _et al._[35] (for \(Pr\approx 1-7\)), Kunnen _et al._[33] (for \(Pr=1\)), Ecke and Niemela [38] (for \(Pr=0.7\)), and Hartmann _et al._[36] (for \(Pr=4.38\) and \(6.4\)). Note that we also observe a significant heat transfer enhancement (\(>10\%\)) at \(Ra=2\times 10^{4}\) (using \(\Gamma=8\)), in agreement with Rossby [8]'s experimental results for \(\Gamma\gtrsim 6\).
In Fig. 5, we show the variation of \(Pr_{cr}\) and \(Pr_{opt}\) with \(Ra\). Interestingly, both \(Pr_{cr}\) and \(Pr_{opt}\) increase monotonically with \(Ra\) and approximately follow power-laws: \(Pr_{cr}\approx 1.46\times 10^{-3}Ra^{0.375}\) and \(Pr_{opt}\approx 8.18\times 10^{-3}Ra^{0.504}\). Considering the high computational cost at large \(Pr\) and large \(Ra\), we do not perform simulations at \(Pr>200\) to find \(Pr_{opt}\) for \(Ra=10^{10}\), which is estimated to be \(Pr_{opt}\approx 900\) by the above power-law fit. As Rayleigh number increases, the turbulent diffusion of heat is also expected to become stronger. At low \(Pr\), this higher turbulent diffusion will combine with the large molecular thermal diffusivity to further increase the lateral diffusion of heat in the bulk, and hence, will decrease the ability of the vortex columns to transport heat. Thus, a correspondingly larger \(Pr\) may be necessary to counter this effect of the enhanced turbulent thermal diffusivity to register any enhancement in the heat transfer, i.e., \(Pr_{cr}\) will increase with increasing \(Ra\). On the other hand, as the buoyancy forcing increases with increasing \(Ra\), the viscous damping of the flow at large \(Pr\) will become weaker and the heat transfer enhancement can be sustained until larger Prandtl numbers, i.e. \(Pr_{opt}\) also increases as \(Ra\) is increased.
In Fig. 6, we show the variation of \(Nu_{max}(Pr_{opt})/Nu_{0}\) with \(Ra\). Here, \(Nu_{max}(Pr_{opt})\) is \(Nu_{max}\) at \(Pr=Pr_{opt}\). Again, we observe a clear monotonic increase of \(Nu_{max}(Pr_{opt})/Nu_{0}\) with \(Ra\), and the trend can be fitted by a power-law \(Nu_{max}(Pr_{opt})/Nu_{0}\approx 0.62Ra^{0.044}\). This relationship predicts \(Nu_{max}/Nu_{0}\approx 1.7\) (i.e., \(70\%\) enhancement in heat transfer) for \(Ra=10^{10}\) at \(Pr_{opt}\approx 900\), and even higher enhancement at \(Ra>10^{10}\) at higher \(Pr\). Thus, the present trends predict that the maximum heat transfer enhancement increases with \(Ra\), even for \(Ra>10^{10}\), provided the Prandtl number is also increased commensurately. As discussed earlier, at a given \(Ra\), the enhancement increases with \(Pr\) up to \(Pr_{opt}\) beyond which the viscous damping of the flow likely restricts the vertical advection of heat. However, as \(Ra\) and hence buoyancy forcing is increased, the enhancement due to rotation can increase up to larger \(Pr\) before the viscous damping effect becomes significant. Thus, the maximum enhancement can be expected to increase with \(Ra\).
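The fitted power laws quoted above can be evaluated directly; the brief sketch below reproduces the extrapolations mentioned in the text (\(Pr_{opt}\approx 900\) and \(Nu_{max}/Nu_{0}\approx 1.7\) at \(Ra=10^{10}\)). The function names are ours, introduced only for this illustration.

```python
def pr_critical(Ra):
    return 1.46e-3 * Ra ** 0.375   # Pr_cr fit

def pr_optimal(Ra):
    return 8.18e-3 * Ra ** 0.504   # Pr_opt fit

def max_enhancement(Ra):
    return 0.62 * Ra ** 0.044      # Nu_max(Pr_opt)/Nu_0 fit

Ra = 1.0e10
print(pr_critical(Ra), pr_optimal(Ra), max_enhancement(Ra))
# -> approximately 8, 9e2, and 1.7 for Ra = 1e10
```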
Note that the effect of finite aspect ratio on \(Nu\) in non-rotating simulations [32] may lead to some uncertainty in the values of \(Pr_{cr}\) and \(Nu_{max}/Nu_{0}\) for \(Ra\geq 5\times 10^{8}\). However, this effect is expected to not alter any of the major findings of this study.
The present results show that as \(Ra\) is increased in rotating RBC,
Figure 1: Variation of the normalized Nusselt number \(Nu/Nu_{0}\) with Taylor number \(Ta\) (left) and \(1/Ro\) (right) for various \(Pr\) at (a) \(Ra=10^{7}\), (b) \(Ra=10^{8}\), (c) \(Ra=10^{9}\), and (d) \(Ra=10^{10}\). Solid lines are used to connect data points, aiding visual interpretation.
\(Pr_{cr}\), \(Pr_{opt}\), as well as the maximum heat transfer enhancement all increase. In particular, not only is enhancement certainly possible for \(Ra>10^{10}\), but it is also expected to be higher than that for lower \(Ra\) at the optimal Prandtl number. However, we do not know up to what \(Ra\) these trends will persist. Simulations and experiments at significantly higher \(Ra\) and \(Pr\) than currently feasible may be required to answer this question.
We thank Roshan Samuel for his valuable assistance in the development of the solver used in this study. We also thank Prof. M. K. Verma for inspiring us to utilize GPUs for scientific computing, and also for providing resources for the testing of the solver and for running some simulations. Mohammad Anas thanks Soumyadeep Chatterjee, Shadab Alam, and Manthan Verma for their useful discussions on the solver and this work. For all the simulations related to this work, we gratefully acknowledge the support and the resources provided by Param Sanganak under the National Supercomputing Mission, Government of India at the Indian Institute of Technology, Kanpur.
Figure 3: Variation of the normalized maximum Nusselt number \(Nu_{max}/Nu_{0}\) with \(Pr\) for various \(Ra\). Solid lines are used to connect data points, aiding visual interpretation.
Figure 4: Contour plots of the normalized temperature field \(T\) and velocity vectors in the \(yz\)-plane at \(x=0.5\) for (a) \(Pr=2\), (b) \(Pr=4.38\), (c) \(Pr=100\), and (d) \(Pr=1000\) at \(Ra=10^{8}\) and \(Ta\approx Ta_{opt}\). Note that the rotation axis is along \(z\)-direction and for \(Ra=10^{8}\), \(Pr=2\) and \(Pr=100\) are close to \(Pr_{cr}\) and \(Pr_{opt}\), respectively.
Figure 5: Variation of the critical Prandtl number \(Pr_{cr}\) and the optimal Prandtl number \(Pr_{opt}\) with \(Ra\). Dot-dashed and dashed lines represent power-law fits \(Pr_{cr}\approx 1.46\times 10^{-3}Ra^{0.375}\) and \(Pr_{opt}\approx 8.18\times 10^{-3}Ra^{0.504}\), respectively. |
2307.05059 | On Imperfect Recall in Multi-Agent Influence Diagrams | Multi-agent influence diagrams (MAIDs) are a popular game-theoretic model
based on Bayesian networks. In some settings, MAIDs offer significant
advantages over extensive-form game representations. Previous work on MAIDs has
assumed that agents employ behavioural policies, which set independent
conditional probability distributions over actions for each of their decisions.
In settings with imperfect recall, however, a Nash equilibrium in behavioural
policies may not exist. We overcome this by showing how to solve MAIDs with
forgetful and absent-minded agents using mixed policies and two types of
correlated equilibrium. We also analyse the computational complexity of key
decision problems in MAIDs, and explore tractable cases. Finally, we describe
applications of MAIDs to Markov games and team situations, where imperfect
recall is often unavoidable. | James Fox, Matt MacDermott, Lewis Hammond, Paul Harrenstein, Alessandro Abate, Michael Wooldridge | 2023-07-11T07:08:34Z | http://arxiv.org/abs/2307.05059v1 | # On Imperfect Recall in Multi-Agent Influence Diagrams
###### Abstract
Multi-agent influence diagrams (MAIDs) are a popular game-theoretic model based on Bayesian networks. In some settings, MAIDs offer significant advantages over extensive-form game representations. Previous work on MAIDs has assumed that agents employ behavioural policies, which set independent conditional probability distributions over actions for each of their decisions. In settings with imperfect recall, however, a Nash equilibrium in behavioural policies may not exist. We overcome this by showing how to solve MAIDs with forgetful and absent-minded agents using mixed policies and two types of correlated equilibrium. We also analyse the computational complexity of key decision problems in MAIDs, and explore tractable cases. Finally, we describe applications of MAIDs to Markov games and team situations, where imperfect recall is often unavoidable.
## 1 Introduction
Multi-agent influence diagrams (MAIDs) are a graphical representation for dynamic non-cooperative games, which can be more compact and expressive than extensive-form games (EFGs) [26]. Like Bayesian networks (BNs), MAIDs use a directed acyclic graph (DAG) to represent conditional probabilistic dependencies between random variables, but they also specify decision and utility variables for each agent. Each agent selects a behavioural policy - independent conditional probability distributions (CPDs) over actions for each of their decision variables - to maximise their expected utility. A MAID's mechanised graph extends this DAG by explicitly representing each variable's distribution and showing which other variables' distributions matter to an agent optimising a particular decision rule [19, 26, 11].
MAIDs, and their causal variants [19], have been used in the design of safe and fair AI systems [15, 2, 16, 8, 9], to explore reasoning patterns and deception [41, 49], and to identify agents from data [23]. However, to date, agents in MAIDs are usually assumed to have perfect (or, at least,'sufficient') recall [26]. This assumption is often unreasonable. For example, MAIDs must allow imperfect recall to handle bounded rationality, teams with imperfect communication [14], or memoryless policies in Markov games. However, forgetfulness (of previous observations) or absent-mindedness (about whether previous decisions have even been made) can prevent the existence of a Nash Equilibrium (NE) in behavioural policies. To overcome this, one can consider other solution concepts, such as mixed or correlated equilibria.
In this work, we focus on imperfect recall in MAIDs. Imperfect recall has already been extensively studied in EFGs [42, 27, 50], but a MAID's mechanised graph makes graphically explicit the semantic difference between behavioural and mixed policies (hidden in EFGs) and readily identifies forgetful or absent-minded agents (or teams). Our insights inspire two definitions of _correlated equilibrium_ in MAIDs. The first follows from the normal-form game definition [3]. The second, based on von Stengel
and Forges' extensive-form correlated equilibrium [48], is more natural for dynamic settings, can yield greater social welfare, and is easier to compute. Again, mechanised graphs clearly depict the assumptions made in both. Next, we examine MAIDs from a computational complexity perspective by studying the decision problems of finding a best response, checking whether a policy profile is an NE, and checking whether each type of NE exists. These provide an insight into what makes particular instances hard, when computations can be made tractable, and rigorously identify which problems are suitable for analysis as MAIDs. Our results also apply to refinements of MAIDs, such as _causal games_[19]. We assume familiarity with EFGs [32], BNs [25], and the complexity classes P, NP, and PP [39]. Proof sketches are provided, but details are deferred to the appendices.
Related Work.There is a rich literature on influence diagrams [24] and imperfect recall has been studied in single-agent influence diagrams [34, 35, 30, 29, 6, 36] as well as in EFGs [4, 22, 27, 42, 40]. However, to our knowledge, we are the first to focus on imperfect recall in influence diagrams with multiple agents.
A full policy profile in a MAID induces a BN, so many of our results inherit from that setting, where the decision problem variant of marginal inference is, in general, PP-complete [31]. However, we care about the cases we encounter in practice, not just the worst case. Marginal inference in a BN can be performed in time exponential in the treewidth of the underlying graph [25], which entails a poly-time algorithm when the treewidth is small. Similarly, we will see that tractable results for computations in MAIDs can be found when problems are restricted to certain settings. We also sometimes reduce from partial order games [51], which can be interpreted as MAIDs without chance nodes, with deterministic decision rules, and where each agent has a single utility node as a child of all the decision nodes.
## 2 The Model
We use capital letters \(V\) for random variables, lowercase letters \(v\) for their instantiations, and bold letters \(\mathbf{V}\) and \(\mathbf{v}\), respectively, for sets of variables and their instantiations. We let \(\mathit{dom}(V)\) denote the (finite, non-singleton) domain of \(V\) (for ease, we take this to be binary unless stated otherwise) and \(\mathit{dom}(\mathbf{V})\coloneqq\bigtimes_{V\in\mathbf{V}}\mathit{dom}(V)\). Parents and children of \(V\) in a graph are denoted by \(\mathbf{Pa}_{V}\) and \(\mathbf{Ch}_{V}\), respectively (with \(\mathbf{pa}_{V}\) and \(\mathbf{ch}_{V}\) their instantiations) and \(\Delta(X)\) denotes the set of all probability distributions over a set \(X\).
**Example 1**.: _An autonomous taxi decides whether to offer Alice a discount (\(T\)) depending on whether its journey count exceeds a quota (\(Q\)). Alice decides whether to accept a journey (\(A\)) depending on the price. The taxi wants to maximise profit, but if its journey count is less than the quota and Alice rejects it, the taxi pays a penalty (the municipality uses this mechanism to prevent a proliferation of unnecessary taxis). Alice's utility is a function of her decision and the price offered by the taxi._
Figure 1(a) shows a MAID for this example. Chance variables (moves by nature), decision variables, and utility variables are represented by white circles, squares, and diamonds, respectively. Full edges leading into chance and utility nodes represent probabilistic dependence, as in a BN. Dotted edges leading into decision nodes identify information available to the agent when a decision \(D\) is made, so \(\mathbf{pa}_{D}\), the values of \(\mathbf{Pa}_{D}\), represents the decision context for \(D\). In EFGs, imperfect information is represented using explicitly labelled information sets. In MAIDs, we can infer that Alice is unaware of the value of \(Q\) when making her decision by the lack of edge \(Q\to A\). A parameterisation defines the CPDs for the chance and utility variables, whereas CPDs of decision nodes are chosen by the agents playing the game.
**Definition 1** ([26]).: _A **multi-agent influence diagram (MAID)** is a structure \(\mathcal{M}=(\mathcal{G},\mathbf{\theta})\). \(\mathcal{G}=(N,\mathbf{V},E)\) specifies a set of agents \(N=\{1,\ldots,n\}\) and a DAG \((\mathbf{V},E)\), where \(\mathbf{V}\) is partitioned into chance variables
\(\mathbf{X}\), decision variables \(\mathbf{D}=\bigcup_{i\in N}\mathbf{D}^{i}\), and utility variables \(\mathbf{U}=\bigcup_{i\in N}\mathbf{U}^{i}\). The parameters \(\mathbf{\theta}=\{\theta_{V}\}_{V\in\mathbf{V}\setminus\mathbf{D}}\) define the CPDs \(\Pr(V\mid\mathbf{Pa}_{V})\) for each non-decision variable such that for any setting of the decision variables' CPDs, the resulting joint distribution over \(\mathbf{V}\) is Markov compatible with the DAG, i.e., \(\Pr(\mathbf{v})=\prod_{V\in\mathbf{V}}\Pr(v\mid\mathbf{pa}_{V})\).
Given a MAID, a **decision rule**\(\pi_{D}\) for \(D\in\mathbf{D}\) is a CPD \(\pi_{D}(D\mid\mathbf{Pa}_{D})\). A **partial (behavioural) policy profile**\(\pi_{\mathbf{D}^{\prime}}\) is a set of decision rules for each \(D\in\mathbf{D}^{\prime}\subseteq\mathbf{D}\), whereas \(\pi_{-\mathbf{D}^{\prime}}\) is the set of decision rules for each \(D\in\mathbf{D}\setminus\mathbf{D}^{\prime}\). A **(behavioural) policy**\(\pi^{i}\) refers to \(\boldsymbol{\pi}_{\mathbf{D}^{i}}\), and a **(full) policy profile**\(\pi=(\pi^{1},\ldots,\pi^{n})\) is a tuple of policies, where \(\mathbf{\pi}^{-i}\coloneqq(\mathbf{\pi}^{1},\ldots,\mathbf{\pi}^{i-1},\mathbf{\pi}^{i+1}, \ldots,\mathbf{\pi}^{n})\). A decision rule is **pure** if \(\pi_{D}(d\mid\mathbf{pa}_{D})\in\{0,1\}\), which holds for a policy (profile) if it holds for all decision rules in the policy (profile). For clarity, we use an overhead dot to mark this determinism, e.g., \(\dot{\pi}_{D},\dot{\boldsymbol{\pi}}^{i}\), or \(\dot{\boldsymbol{\pi}}\).
By combining \(\mathbf{\pi}\) with the partial distribution \(\Pr\) over the chance and utility variables, we obtain a joint distribution:
\[\Pr^{\mathbf{\pi}}(\mathbf{x},\mathbf{d},\mathbf{u})\coloneqq\prod_{V\in\mathbf{V}\setminus\mathbf{D}}\Pr(v\mid\mathbf{pa}_{V})\cdot\prod_{D\in\mathbf{D}}\pi_{D}(d\mid\mathbf{pa}_{D})\]
A full policy profile \(\mathbf{\pi}\) therefore induces a BN with DAG given by the MAID's graph. Agent \(i\)'s **expected utility**\(EU^{i}(\mathbf{\pi})\) for a given policy profile \(\mathbf{\pi}\) is defined as the expected sum of their utility variables:
\[EU^{i}(\mathbf{\pi})\coloneqq\sum_{U\in\mathbf{U}^{i}}\sum_{u\in dom(U)}\Pr^{\mathbf{\pi}} (U=u)\cdot u\]
Utility variables have deterministic CPDs, so can be interpreted as functions \(U:dom(\mathbf{Pa}_{U})\rightarrow\ \mathbb{R}\) to show their functional dependence on their parents (e.g., Figure 1(a)). An NE is defined in the usual way.
**Definition 2** ([26]).: _A (behavioural) policy profile \(\mathbf{\pi}\) is a **Nash equilibrium (NE)** (in behavioural policies) if for every agent \(i\in N\) and every alternative (behavioural) policy \(\mathbf{\varpi}^{i}\): \(EU^{i}(\mathbf{\pi}^{-i},\mathbf{\pi}^{i})\geq EU^{i}(\mathbf{\pi}^{-i},\mathbf{\varpi}^{i})\)_
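For small MAIDs, this expected utility can be evaluated by brute-force enumeration of the induced BN's joint distribution, exactly following the factorisation of \(\Pr^{\boldsymbol{\pi}}\) above. The following sketch is purely illustrative (it is not the authors' implementation, and the toy variables, domains, and CPDs are hypothetical); note that decision rules are handled in the same way as any other CPD, so a full policy profile simply fills in the decision variables' entries.

```python
from itertools import product

def expected_utility(order, domains, parents, cpds, utility_vars):
    """Brute-force expected utility in a MAID: enumerate every joint
    assignment (variables listed in topological `order`), multiply the CPD
    entries (decision rules are just CPDs chosen by the agents), and weight
    the summed utility values by the resulting probability."""
    total = 0.0
    for values in product(*(domains[v] for v in order)):
        assign = dict(zip(order, values))
        prob = 1.0
        for v in order:
            pa = tuple(assign[u] for u in parents[v])
            prob *= cpds[v][pa].get(assign[v], 0.0)
        total += prob * sum(assign[u] for u in utility_vars)
    return total

# Hypothetical one-decision example: a fair coin X, a decision D observing X,
# and a utility U equal to 1 exactly when D copies X.
order = ["X", "D", "U"]
domains = {"X": [0, 1], "D": [0, 1], "U": [0, 1]}
parents = {"X": [], "D": ["X"], "U": ["X", "D"]}
cpds = {
    "X": {(): {0: 0.5, 1: 0.5}},
    "D": {(0,): {0: 1.0}, (1,): {1: 1.0}},        # the "copy X" decision rule
    "U": {(x, d): {int(x == d): 1.0} for x in [0, 1] for d in [0, 1]},
}
print(expected_utility(order, domains, parents, cpds, ["U"]))   # -> 1.0
```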
Collectively, the decision rules of decision variables and the CPDs of chance or utility nodes are known as mechanisms. A mechanism \(\mathsf{M}_{V}\) for \(V\) is **strategically relevant** to a decision rule for \(D\) if the choice of the CPD at \(\mathsf{M}_{V}\) can affect the optimal choice of this decision rule. Koller and Milch [26] define an associated sound and complete graphical criterion for strategic relevance, \(\mathbf{s}\)-**reachability**, based on d-separation which can be checked in \(\mathcal{O}(|\mathbf{V}|+|E|)\) time [44] (see Appendix A for formal definitions).
A MAID's regular graph \(\mathcal{G}\) captures the probabilistic dependencies between **object-level** variables in the game's environment, but its **mechanised graph**\(\mathsf{m}\mathcal{G}\) is an enhanced representation which adds an explicit representation of the strategically relevant dependencies between agents' decision rules and the game's parameterisation (see [19] for details). Each object-level variable \(V\in\mathbf{V}\) has a mechanism parent \(\mathsf{M}_{V}\) representing the distribution governing \(V\): each decision \(D\) has a new _decision rule_ parent \(\Pi_{D}=\mathsf{M}_{D}\) and each non-decision \(V\) has a new _parameter_ parent \(\Theta_{V}=\mathsf{M}_{V}\), whose values parameterise the CPDs.
Agents select a decision rule \(\pi_{D}\) (i.e., the value of a decision rule variable \(\Pi_{D}\)) based on both the parameterisation of the game (i.e., the values of the parameter variables) and the selection of the other
Figure 1: A MAID (a) and its mechanised graph (b) for Example 1, which is a perfect recall and imperfect, but sufficient, information game.
decision rules \(\boldsymbol{\pi}_{-D}\) - these dependencies are captured by the edges from other mechanisms into decision rule nodes. \(s\)-reachability determines which of these edges are necessary, so \(\mathsf{M}_{V}\rightarrow\Pi_{D}\) exists if and only if \(\Pi_{D}\) strategically relies on \(\mathsf{M}_{V}\). The mechanised graph for Example 1 (in Figure 1(b)) shows that \(\Pi_{T}\) strategically relies on \(\Theta_{U^{T}}\) and \(\Pi_{A}\), whereas \(\Pi_{A}\) only strategically relies on \(\Theta_{U^{A}}\). In contrast to a MAID's regular graph \(\mathcal{G}\), which is a DAG, there may exist cycles between mechanisms (e.g., Figure 3(a)).
For convenience, we denote the set of agent \(i\)'s behavioural policies as \(\boldsymbol{P}^{i}\coloneqq dom(\boldsymbol{\Pi}^{i})\), with sets of pure policies denoted as \(\dot{\boldsymbol{P}}^{i}\) and (pure) policy profiles denoted by \(\boldsymbol{P}\) (\(\dot{\boldsymbol{P}}\)).
### Concise Representations
A concise representation of MAIDs is needed for three reasons. First, real numbers may obscure the true complexity of the problems [6], so we assume that all probability parameters are given by a fraction of two integers, both expressed in finite binary notation. This is realistic since the probabilities are normally either assessed by domain experts or estimated by a learning algorithm and means that all CPDs can be read in poly-time. Second, even with binary variables, a joint distribution across \(\boldsymbol{V}\) requires \(2^{|\boldsymbol{V}|}-1\) parameters. A MAID or BN's graphical Markov factorisation reduces this to \(\sum_{V\in\boldsymbol{V}}2^{|\mathbf{Pa}_{V}|}\), but this can still be exponential in \(|\boldsymbol{V}|\). Therefore, it is standard [46, 43, 29, 25] to assume that the maximum in-degree in the graph is much less than \(|\boldsymbol{V}|\) (or constant), so that the size of the CPDs are polynomial in \(|\boldsymbol{V}|\). This means that the total representation of our MAID (including all CPDs) is polynomial in our chosen complexity parameter \(|\boldsymbol{V}|\). Finally, as in BNs, our complexity results are strongly affected by the DAG's **treewidth**. The **treewidth** of a DAG measures its resemblance to a tree and is given by the number of vertices in the largest clique of the corresponding triangulated moral graph minus one [5].
## 3 Imperfect Recall in MAIDs
Agents may possess different degrees of information about the state of a game. A game has **perfect recall** if each agent remembers all their past decisions and observations, and it has **perfect information** if each agent is aware of _every_ agent's past decisions and observations.
**Definition 3** ([26]).: _Agent \(i\) in a MAID \(\mathcal{M}\) is said to have **perfect recall** if there exists a total ordering \(D_{1}\prec\cdots\prec D_{m}\) over \(\boldsymbol{D}^{i}\) such that \((\mathbf{Pa}_{D_{j}}\cup D_{j})\subseteq\mathbf{Pa}_{D_{k}}\) for any \(1\leq j<k\leq m\). \(\mathcal{M}\) is a perfect recall game if all agents in \(\mathcal{M}\) have perfect recall. \(\mathcal{M}\) is a **perfect information** game if there exists such an ordering over \(\boldsymbol{D}\)._
A MAID with perfect information (recall) can be transformed into an EFG with perfect information (recall), and vice versa [18]. Hence, these information conditions also guarantee the existence of an NE in pure (behavioural) policies in the MAID ([27] gives the equivalent results in EFGs). However, the mechanised representation of a MAID enables weaker criteria to be defined - **sufficient information** and **sufficient recall**. Later, in Proposition 3, we will see that these criteria preserve the NE existence results of perfect information and perfect recall games, respectively.
**Definition 4**.: _Agent \(i\) in a MAID \(\mathcal{M}\) has **sufficient recall**[37] if the subgraph of the mechanised graph \(\mathsf{m}\mathcal{G}\) restricted to just agent \(i\)'s decision rule nodes \(\boldsymbol{\Pi}_{\boldsymbol{D}^{i}}\) is acyclic. \(\mathcal{M}\) is a sufficient recall game if all agents in \(\mathcal{M}\) have sufficient recall. \(\mathcal{M}\) is a **sufficient information** game if the subgraph of \(\mathsf{m}\mathcal{G}\) restricted to contain only and all decision rule nodes \(\boldsymbol{\Pi}_{\boldsymbol{D}}\) is acyclic.1_
Footnote 1: Note that since previous work on influence diagrams has not modelled absent-mindedness (see our Definition 5 in Section 3.1), this definition implicitly assumes each mechanism variable has a single child.
### Forgetfulness and Absent-Mindedness
Previous work on MAIDs has assumed perfect or sufficient recall. We now begin the contributions of this paper by distinguishing between two types of imperfect recall in MAIDs. **Forgetfulness** applies when an agent forgets an observation or the _outcome_ of one of their previous decisions. **Absent-mindedness** applies when an agent cannot even remember whether they have previously made a decision. To make this distinction, we leverage the following insight: _mechanism nodes represent the CPDs governing object-level variables. Every edge between a mechanism and object-level node represents an independent draw from the mechanism's distribution._ We now provide formal definitions.
**Definition 5**.: _Agent \(i\) has **imperfect recall** in a MAID \(\mathcal{M}\) if for every total ordering \(D_{1}\prec\cdots\prec D_{m}\) over \(\textbf{D}^{i}\) there exists some \(j<k\) such that \((\textbf{Pa}_{D_{j}}\cup D_{j})\not\subseteq\textbf{Pa}_{D_{k}}\) (i.e., if agent \(i\) does not have perfect recall). Agent \(i\) is **forgetful** if such a \(D_{j}\) and \(D_{k}\) have distinct decision rules and is **absent-minded** if in \(\mathcal{M}\)'s mechanised graph, a decision rule node has more than one outgoing edge to a decision node._
To motivate our definition of absent-mindedness in MAIDs, we revisit Piccione and Rubinstein's absent-minded driver game [42] (its EFG is in Figure 2(a)). A driver on a highway may take one of two exits. Taking the first, second, or no exit yields a payoff of 0, 4, or 1, respectively. Adopting Aumann [4]'s _modified multi-selves approach_ (i.e., that the driver should only be able to control her current action, not her future actions), the driver does not know which junction she is facing, so she must have the same decision rule at both junctions. We make absent-mindedness explicit with a shared decision rule node \(\Pi_{D}\) for \(D_{1}\) and \(D_{2}\) in the mechanised graph (Figure 2(b)) (note this is consistent with our mechanised graph definition). \(\Pi_{D}\)'s _two outgoing edges now represent two independent draws from the same distribution._ For \(D_{i}\) and \(D_{j}\) to share a decision rule, it is necessary that \(dom(D_{i})=dom(D_{j})\) and \(dom(\textbf{Pa}_{D_{i}})=dom(\textbf{Pa}_{D_{j}})\). Note that perfect recall implies that for any two decisions belonging to the same agent, one's set of parents is a strict superset of the other's, so their decision rules have a different type signature, which rules out absent-mindedness.
In the following examples, used just to explain this paper's concepts, Alice and Bob play variations of matching pennies with the usual payoffs given according to the _final_ state of their two coins (where \(a/b\) and \(\bar{a}/\bar{b}\) represent heads and tails, respectively). Example 2 illustrates a consequence of Bob being forgetful - meaning he cannot remember the _outcome_ of his previous decision. In Example 3, Bob is absent-minded - he cannot remember whether he has made a decision at all.
**Example 2** (Figures 3(a)-3(c)).: _Bob is told he must submit a move in advance (\(B_{1}\)) and then confirm it on game day (\(B_{2}\)). If his moves agree, payoffs correspond with normal matching pennies, but if his moves disagree, he must forfeit and always loses (these payoffs are shown in Figure 3(c)). Bob is forgetful, so on game day he cannot remember his advance choice (i.e., the edge \(B_{1}\to B_{2}\) is missing in Figure 3(a))._
Figure 2: The EFG (a) and the mechanised graphs for an absent-minded driver choosing behavioural (b) or mixed (c) policies.
**Example 3** (Figures 3d-3f).: _In a new game, the pennies start heads up, and Bob decides whether or not to turn the coin over (\(B_{1}\)). He is absent-minded, so when he sees heads he cannot remember whether he has already made his move, and he decides again (\(B_{2}\)). If he turns the coin having previously chosen to keep heads, Bob gets a \(-2\) penalty and Alice a \(+2\) bonus. In all other cases, the payoffs correspond with normal matching pennies (payoffs are shown at the leaves of the EFG in Figure 3e)._
Observe that the MAID's regular graph (just the object-level variables) is identical for both Figures 3a and 3d with the missing \(B_{1}\to B_{2}\) edge implying imperfect recall. The difference between forgetfulness and absent-mindedness is only revealed by the mechanised graph. Forgetful Bob has two independent decision rules \(\Pi_{B_{1}}\) and \(\Pi_{B_{2}}\) for \(B_{1}\) and \(B_{2}\). Absent-minded Bob only has one shared decision rule \(\Pi_{B}\).
Examples 2 and 3 demonstrate that both types of imperfect recall can mean an NE in behavioural policies may not exist, even in zero-sum two agent MAIDs with binary decisions. The normal-form games (in Figures 3c and 3f) show that neither contains an NE in pure policies. It is also easy to prove non-existence in behavioural policies (see Appendix B). This arises due to the grand best response function being non-convex valued, which violates a condition of Kakutani's fixed point theorem.
**Proposition 1**.: _Both forgetfulness and absent-mindedness can prevent the existence of an NE in behavioural policies._
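For intuition, the absence of a pure NE is easy to check mechanically in the induced normal-form games. The snippet below is a generic brute-force test (not taken from the paper); the exact payoff tables of Examples 2 and 3 live in Figures 3c and 3f, so standard matching pennies is used here purely for illustration, since it exhibits the same failure of mutual best responses.

```python
import numpy as np

def pure_nash_profiles(U1, U2):
    """All pure policy profiles (i, j) of a two-agent normal-form game that
    are Nash equilibria: each agent's action is a best response to the other's."""
    return [(i, j)
            for i in range(U1.shape[0]) for j in range(U1.shape[1])
            if U1[i, j] >= U1[:, j].max() and U2[i, j] >= U2[i, :].max()]

# Standard matching pennies: the row player wants the actions to match,
# the column player wants them to differ, so no cell is a mutual best response.
U1 = np.array([[1, -1], [-1, 1]])
U2 = -U1
print(pure_nash_profiles(U1, U2))   # -> []
```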
## 4 Solution Concepts for MAIDs under Imperfect Recall
To overcome the fact that a behavioural policy NE may not exist in imperfect recall MAIDs, one can use mixed or correlated policies. These ensure that the grand best response function always satisfies the
Figure 3: The mechanised graphs for forgetful Bob (Example 2) using (a) behavioural or (b) mixed policies, with normal-form in (c). (d) The mechanised graph for absent-minded Bob (Example 3) using a behavioural policy, with EFG and normal-form representations in (e) and (f).
conditions of Kakutani's fixed point theorem, so an equilibrium always exists. We show how the assumptions behind mixed policies, behavioural mixtures, and correlated equilibria (well-studied in EFGs [22, 48], but unexplored in MAIDs) are made graphically explicit in mechanised graphs.
### Mixed Policies and Behavioural Mixtures
Behavioural policies allow agents to randomise independently at every decision node. By contrast, a **mixed policy**\(\mu^{i}\in\Delta(\boldsymbol{\dot{P}}^{i})\) is a distribution over pure policies. It allows an agent to coordinate their choice of decision rules at different decisions by randomising once at the game's outset and then committing to the assigned pure policy. More generally, **behavioural mixtures** in \(\Delta(\boldsymbol{P}^{i})\) are distributions over all behavioural policies. They allow agents to randomise _both_ at the outset of the game and before each decision. The outcome of the first randomisation determines the distributions for the others.
A behavioural mixture changes the specification of the game because it can require correlation between different decision rules. At the object-level, a behavioural mixture for agent \(i\) requires a new (correlation) decision variable \(C^{i}\) with \(\mathbf{Pa}_{C^{i}}=\varnothing\), \(\mathbf{Ch}_{C^{i}}=\boldsymbol{D}^{i}\), and \(dom(C^{i})=\boldsymbol{P}^{i}\) (the set of all behavioural policies). The decision rules for each \(D^{i}\) become conditional on \(C^{i}\), so each value of \(C^{i}\) determines a behavioural policy. This explains why \(C^{i}\) and still every \(D\in\boldsymbol{D}^{i}\) are decision nodes - the agent chooses the CPDs for both. Even in the mixed policy case, where each \(D^{i}\) depends deterministically on \(C^{i}\), the agent chooses the dependence independently from choosing the distribution over \(C^{i}\). In the mechanised graph (see Figure 2c), \(C^{i}\) gets an associated mechanism variable \(\Pi_{C^{i}}\) for the distribution \(C^{i}\) is drawing from (its mechanism parents are again determined by \(s\)-reachability).
In EFGs, the mechanism by which agents decide on their decision rules is not explicitly shown. Mechanised graphs, however, show clearly when an agent chooses to randomise. Behavioural and mixed policies are the limiting cases of behavioural mixtures: the former where the distribution over \(\boldsymbol{P}^{i}\) is deterministic; the latter where the decision rules \(\boldsymbol{\Pi}_{\boldsymbol{D}^{i}}\) are deterministic. The difference between forgetful Bob in Example 2 using a behavioural or mixed policy is shown in Figures 3a and 3b. For Bob's behavioural policy, \(C^{B}\) and \(\Pi_{C^{B}}\) are omitted as the decision rules \(\Pi_{B_{1}}\) and \(\Pi_{B_{2}}\) are independent. This leaves a normal mechanised graph. Whereas, if Bob uses a mixed policy, he only randomises once from \(\Pi_{C^{B}}\) at the start of the game to select a pure policy at \(C^{B}\). This fixes deterministic decision rules at \(\dot{\Pi}_{B_{1}}\) and \(\dot{\Pi}_{B_{2}}\).
**Proposition 2**.: _Given a MAID \(\mathcal{M}\) with any partial profile \(\boldsymbol{\pi}^{-i}\) for agents \(-i\), then if agent \(i\) is not absent-minded, for any behavioural policy \(\boldsymbol{\pi}^{i}\) there exists a pure policy \(\boldsymbol{\dot{\pi}}^{i}\) which yields a payoff at least as high against \(\boldsymbol{\pi}^{-i}\). On the other hand, if agent \(i\) is absent-minded in \(\mathcal{M}\) across a pair of decisions with descendants in \(\boldsymbol{U}^{i}\), then there exists a parameterisation of \(\mathcal{M}\) and a behavioural policy \(\boldsymbol{\pi}^{i}\) which yields a payoff strictly higher than any payoff achievable by a pure policy._
Proposition 2 says that a non-absent-minded agent cannot achieve more expected utility by using a behavioural rather than a pure (or mixed) policy, but an absent-minded agent often can. Consider Figure 2c, where \(dom(C^{D})=\boldsymbol{\dot{P}}^{D}\), the set of all the driver's pure policies. \(\Pi_{C^{D}}\) represents the distribution over \(dom(C^{D})\), so \(D_{1}\) and \(D_{2}\) must both be \(e\) or both be \(c\). Therefore, \(EU^{D}\leq 1\) under any mixed policy. Whereas, under the behavioural policy \(\pi_{D}^{1}(e)=\frac{1}{3}\), \(EU^{D}=\frac{4}{3}\). This highlights an important difference between absent-mindedness and forgetfulness. Under perfect recall, every mixed policy has an equivalent behavioural policy, in the sense of inducing the same distribution over outcomes against every opposing policy profile [19]. Under forgetfulness, whilst a mixed policy might not have an equivalent behavioural policy, a behavioural policy always has an equivalent mixed policy [27], so there must exist a pure policy which performs just as well. On the other hand, under absent-mindedness, neither mixed nor behavioural policies are guaranteed to have an equivalent of the other type, so there can be a behavioural policy which outperforms every mixed policy against a given policy profile.
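The gap between behavioural and mixed policies under absent-mindedness amounts to a few lines of arithmetic. The sketch below is purely illustrative: it evaluates the absent-minded driver's expected utility when the shared decision rule is sampled independently at each junction (a behavioural policy), and contrasts it with the two pure options over which any mixed policy must randomise.

```python
# Absent-minded driver: exiting at the first junction pays 0, exiting at the
# second pays 4, continuing through both pays 1. A behavioural policy exits
# with probability p, drawn independently at each junction from the one
# shared decision rule Pi_D.
def eu_behavioural(p):
    return p * 0 + (1 - p) * p * 4 + (1 - p) ** 2 * 1

print(eu_behavioural(1 / 3))                                   # 4/3, the optimum
print(max(eu_behavioural(k / 1000) for k in range(1001)))      # ~1.3333
# The only pure policies are "always exit" and "always continue", with EU 0
# and 1 respectively, so no mixed policy can exceed EU = 1.
print(eu_behavioural(1.0), eu_behavioural(0.0))                # 0.0, 1.0
```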
We introduce mixed policies (and behavioural mixtures) to MAIDs to allow more generality in modelling when agents randomise and to guarantee an NE. However, a mixed policy can require exponentially more parameters \(\mathcal{O}(2^{2|\boldsymbol{\Psi}|})\) than a behavioural policy \(\mathcal{O}(2^{|\boldsymbol{\Psi}|})\) to define. Moreover, single agents are often more naturally modelled as randomising once they meet decision points [27] (this changes for team situations described in Section 6). It is therefore important to know when existence of each type of NE is guaranteed. The sufficient recall result was proved by [19], which we adapt to get the sufficient information result (in Appendix B). The mixed policies result follows directly from Nash's theorem [38].
**Proposition 3**.: _A MAID with sufficient information always has an NE in pure policies, a MAID with sufficient recall always has an NE in behavioural policies, and every MAID has an NE in mixed policies._
Since both sufficient recall and sufficient information (Definition 4) can be checked in poly-time2, they expand the class of games that have simple NEs beyond those identifiable using an EFG. For example, we can check in poly-time that the MAID in Figure 1(a) is an imperfect, but sufficient, information game, and hence know that there must exist an NE in pure policies.
Footnote 2: The mechanised graph is constructed using \(s\)-reachability, which uses the poly-time graphical criterion d-separation [44].
### Correlated Equilibria
We have just shown how mechanised graphs can explicitly represent the assumption behind mixed policies: a _single_ agent uses a source of randomness to correlate their decision rules. We now do the same for when _multiple_ agents can use the same source of randomness, so the choice of pure policy made by each agent may be correlated. An equilibrium in such a game is called a _correlated equilibrium (CE)_[3], which is a distribution \(\kappa\) over the set of all pure policy profiles, i.e., \(\kappa\in\Delta(\boldsymbol{\dot{P}})\). A mediator samples \(\boldsymbol{\dot{\pi}}\) according to \(\kappa\), then recommends to each agent \(i\) the pure policy \(\boldsymbol{\dot{\pi}}^{i}\). The distribution \(\kappa\) is a CE if no agent, given their information, has an incentive to unilaterally deviate from their recommended policy \(\boldsymbol{\dot{\pi}}^{i}\).
**Definition 6**.: _In a MAID, \(\kappa\in\Delta(\boldsymbol{\dot{P}})\) is a **correlated equilibrium (CE)** if and only if \(\forall i\), \(\forall\boldsymbol{\dot{\pi}}^{i},\boldsymbol{\dot{\varpi}}^{i}\in \boldsymbol{\dot{P}}^{i}\):_
\[\sum_{\boldsymbol{\dot{\pi}}^{-i}\in\boldsymbol{\dot{P}}^{-i}}\kappa( \boldsymbol{\dot{\pi}}^{i},\boldsymbol{\dot{\pi}}^{-i})EU^{i}(\boldsymbol{ \dot{\pi}}^{i},\boldsymbol{\dot{\pi}}^{-i})\geq\sum_{\boldsymbol{\dot{\pi}}^{- i}\in\boldsymbol{\dot{P}}^{-i}}\kappa(\boldsymbol{\dot{\pi}}^{i}, \boldsymbol{\dot{\pi}}^{-i})EU^{i}(\boldsymbol{\dot{\pi}}^{-i},\boldsymbol{ \dot{\varpi}}^{i})\]
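In the normal-form special case (a single decision per agent and no observations), Definition 6 reduces to a set of linear incentive constraints, so a CE can be found by linear programming. As a standalone illustration (not tied to the MAID machinery; it uses SciPy and the classic game of chicken, not the paper's examples), the sketch below computes a welfare-maximising CE; for these payoffs the optimum places probability 1/4 on each asymmetric outcome and 1/2 on mutual swerving, giving each agent an expected payoff of 5.25.

```python
import numpy as np
from scipy.optimize import linprog

# Game of chicken: action 0 = Dare, action 1 = Swerve.
U1 = np.array([[0, 7], [2, 6]])     # row agent's payoffs
U2 = np.array([[0, 2], [7, 6]])     # column agent's payoffs
n1, n2 = U1.shape
idx = lambda a1, a2: a1 * n2 + a2   # LP variable index of kappa(a1, a2)

A_ub, b_ub = [], []
# Agent 1: for each recommended a1 and deviation d1,
#   sum_a2 kappa(a1, a2) * (U1[a1, a2] - U1[d1, a2]) >= 0   (negated for linprog's <=).
for a1 in range(n1):
    for d1 in range(n1):
        if d1 == a1:
            continue
        row = np.zeros(n1 * n2)
        for a2 in range(n2):
            row[idx(a1, a2)] = -(U1[a1, a2] - U1[d1, a2])
        A_ub.append(row); b_ub.append(0.0)
# Agent 2, symmetrically over columns.
for a2 in range(n2):
    for d2 in range(n2):
        if d2 == a2:
            continue
        row = np.zeros(n1 * n2)
        for a1 in range(n1):
            row[idx(a1, a2)] = -(U2[a1, a2] - U2[a1, d2])
        A_ub.append(row); b_ub.append(0.0)

welfare = np.array([U1[a1, a2] + U2[a1, a2] for a1 in range(n1) for a2 in range(n2)])
res = linprog(-welfare, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=[np.ones(n1 * n2)], b_eq=[1.0], bounds=(0, 1))
print(res.x.reshape(n1, n2))                         # approx. [[0, 0.25], [0.25, 0.5]]
print(U1.flatten() @ res.x, U2.flatten() @ res.x)    # approx. 5.25 each
```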
We illustrate how MAIDs and their mechanised graphs make explicit the assumptions used for a CE using a costless-signal variation of Spence's job market game [47].
**Example 4**.: _Alice is hardworking or lazy (X) with equal probability. She applies for a job with Bob by deciding which costless signal (A) to send. Bob can distinguish between the signals, but does not know Alice's true temperament. He decides whether to offer the job (B) to Alice. The utility functions for Alice and Bob are \(U^{A}=(6-2X)\cdot B\) and \(U^{B}=6+(10X-6)\cdot B\), respectively._
The mechanised graph for the original game's MAID is shown in Figure 4(c). The cycle between \(\Pi_{A}\) and \(\Pi_{B}\) reveals that each agent's decision rule strategically relies on the other agent's decision rule.3 Therefore, the MAID has insufficient information and no proper subgames, making it difficult to solve.
Footnote 3: That Bob strategically relies on Alice’s decision rule might be less obvious than the fact that Alice strategically relies on Bob’s decision rule. The dependency occurs because since Bob can observe \(A\), this unblocks an active path \(\Pi_{A}\to A\gets X\to U^{B}\) in the independent mechanised graph, so \(\Pi_{A}\) is \(s\)-reachable from \(\Pi_{B}\).
To find the CE of this game, a trusted mediator is added using a _correlation variable_\(C\) with \(\mathbf{Pa}_{C}=\varnothing\), \(\mathbf{Ch}_{C}=\boldsymbol{D}\), and \(dom(C)=\boldsymbol{\dot{P}}\). In the mechanised graph, \(C\)'s associated mechanism variable \(K_{C}\) represents the distribution \(\kappa\in\Delta(\boldsymbol{\dot{P}})\) that the mediator draws a pure policy profile according to. This time, since
\(K_{C}\) is fixed as \(\kappa\) at the game outset instead of being chosen by any agent, \(C\) acts as a chance variable (in contrast to the correlation decision variable introduced for mixed policies and behavioural mixtures).
There is a well-known difference between public and private recommendations. If public, every payoff in the convex hull of the set of NE payoffs can be attained by a CE; however, if the recommendations are private, then the payoffs to each agent in a CE can lie outside this convex hull (e.g., Aumann's game of chicken [3]). This distinction is made explicit in the MAID's graph. If the recommendations are public, then the full outcome of \(C\) (the pure policy profile chosen by the mediator) is known by every agent (shown by the dotted edges between \(C\) and both \(A\) and \(B\) in Figure 4(d)). If the recommendations are private, then each agent only observes their decision rules (action recommendations) in \(C\)'s outcome, i.e., all recommendations given to other players are hidden (at \(C^{A}\) and \(C^{B}\) in Figure 4(e)). In this latter case, the agent infers, using Bayes' rule, a posterior over the pure policy profile that was chosen (and also which action was recommended to the other agent(s)). If \(\kappa\) is a CE, then each agent picks for their decision \(D\)'s decision rule the mediator's recommendation, i.e., \(\dot{\pi}_{D}\in\dot{\boldsymbol{\pi}}\) where \(c=\dot{\boldsymbol{\pi}}\). The set of variables \(\boldsymbol{D}\) remain as decisions because agents are free to deviate from their recommendation and pick any CPDs as decision rules for their decisions.
This mediator's distribution \(\kappa\in\Delta(\dot{\boldsymbol{P}})\) can be parameterised according to that in Figure 4(b). Note that \(b_{a}\bar{b}_{\bar{a}}\) denotes Bob's pure policy in which he offers the job (\(b\)) to Alice if she selects \(a\) and does not offer the job (\(\bar{b}\)) if Alice selects \(\bar{a}\). Using the expected payoff for Alice and Bob under each pure
Figure 4: The sub-figures (a) and (b) give the expected payoff for each agent under each pure policy profile and the parameterisation of the distribution \(\kappa\), respectively. The mechanised graph for Example 4’s original MAID is shown in (c), and the mechanised graphs for when a trusted mediator gives public or private recommendations to find a CE are shown in (d) and (e), respectively. The blue edges are added to the graph in (e) for a MAID-CE’s staggered recommendations.
policy profile (Figure 4a), Definition 6's incentive constraints define 24 inequalities that must be satisfied by the CE distribution. After some algebra, we find that \(\alpha_{1}=\alpha_{2}=\alpha_{3}=\beta_{1}=\beta_{2}=\beta_{3}=\gamma_{1}=\gamma_ {2}=\gamma_{3}=0\); \(\alpha_{4},\beta_{4},\gamma_{4},\delta_{4}\geq 0\); \(\alpha_{4}-2\beta_{4}+3\gamma_{4}\geq 0\), and \(3\beta_{4}-2\gamma_{4}+\delta_{4}\geq 0\). Any CE, therefore, has Bob never offering a job to Alice because they play the pure policy \(\bar{b}_{a}\bar{b}_{\bar{a}}\) with probability 1, i.e., Bob's decision rule has \(\pi^{B}(B=\bar{b}\mid A=a)=\pi^{B}(B=\bar{b}\mid A=\bar{a})=1\). The remaining constraints require Alice not to give any incentive for Bob to offer her a job by making the conditional probability of Alice being hardworking too high relative to the conditional probability of her being lazy when he receives the signal \(a\) or \(\bar{a}\). These constraints find that every CE will result in \(EU^{A}=0\) and \(EU^{B}=6\). This is unsurprising because, in a signaling game with costless signals, every CE will be a 'pooling equilibrium' [9] (an equilibrium in which Alice chooses the same action regardless of their temperament).
Whilst the CE is among the best-known solution concepts for normal-form games, and is efficiently computable in that setting (e.g., via linear programming [20]), there can be an exponential number of pure policies (so an exponential number of incentive constraints) in EFGs and even in bounded treewidth MAIDs. It is therefore currently unknown if a CE can be found in an EFG or MAID in poly-time. Motivated by these tractability concerns, Von Stengel and Forges proposed an _extensive-form correlated equilibrium (EFCE)_[50]. Along similar lines, we define a _MAID correlated equilibrium_.
Instead of revealing the entire recommendation \(\mathbf{\pi}^{i}\) to each agent \(i\) immediately, we let the mediator _stagger_ their recommendations. This is made visible in the mechanised graph by adding the blue edges in Figure 4e. Importantly, if an agent deviates from any recommendation, then the mediator will _cease giving further recommendations to that agent_ (but will still give recommendations to all other agents). Thus, the incentive constraints are now tied to the threat of the mediator withholding future information.
**Definition 7**.: _Given a distribution \(\kappa\in\Delta(\mathbf{\dot{P}})\), consider the MAID with an additional correlation variable \(C\) with \(\textbf{Pa}_{C}=\varnothing\), \(\textbf{Ch}_{C}=\{C_{D}\}_{D\in\textbf{D}}\), and \(\textbf{Ch}_{C_{D}}=\{D\}\) for each \(D\). Let a pure policy profile \(\mathbf{\dot{\pi}}\) be selected at \(C\) according to \(\kappa\). Then, when each decision context \(\textbf{pa}_{D}\) is reached, agent \(i\) receives a recommended move \(d\in\text{dom}(D)\) specified by \(\mathbf{\dot{\pi}}_{D}\in\mathbf{\dot{\pi}}\) (\(C_{D}\) hides all other recommendations \(\mathbf{\dot{\pi}}_{-D}\in\mathbf{\dot{\pi}}\)). A **MAID correlated equilibrium (MAID-CE)** is an NE of this game in which no agent has an incentive to deviate from their recommendations._
The localised recommendations in a MAID-CE pose weaker incentive constraints compared to a CE, so the set of MAID-CE outcomes is larger. As such, MAID-CEs can lead to Pareto-improvements over the CEs (and NEs) in a game. We now give one such MAID-CE. The mediator chooses a signal \(s\) with equal probability for type \(X=x\), i.e., \(\Pr(c_{A}=a\mid X=x)=\Pr(c_{A}=\bar{a}\mid X=x)=0.5\). Bob is recommended to offer Alice a job (\(b\)) when Alice's action matches \(s\) and to reject otherwise (\(\bar{b}\)). If \(X=\bar{x}\), then the recommendation to Alice is arbitrary and is independent of the signal \(s\), which is only shown to hardworking Alice. Because the mediator only gives Alice her recommendation once her decision context \(\textbf{Pa}_{A}\) is set, lazy Alice cannot know \(s\). Therefore, in any situation, lazy Alice's action will match \(s\) with probability \(\frac{1}{2}\). Consequently, when Bob is called to play (i.e., the decision context \(\textbf{Pa}_{B}\) is set), and Alice's action matches \(s\), Alice is twice as likely to be hardworking than lazy (so \(EU^{B}=\frac{20}{3}\) for offering Alice a job rather than \(EU^{B}=6\) for rejecting her). If instead, Alice's action does not match \(s\), then he knows with certainty that Alice is lazy, so his best response is to reject. Overall, Alice's expected payoff in this MAID-CE is 3.5, and Bob's is 6.5 (higher than 0 and 6, respectively, for all CEs).
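The payoffs quoted for this MAID-CE can be verified by direct enumeration. The short check below is illustrative only: it hard-codes the mediator scheme described above, encodes hardworking as \(X=1\) and lazy as \(X=0\), and lets lazy Alice choose uniformly at random, which is one of the "arbitrary" choices allowed.

```python
from itertools import product

EU_A = EU_B = 0.0
for X, s, a_lazy in product([0, 1], ["a", "abar"], ["a", "abar"]):
    p = 0.5 * 0.5 * 0.5                 # X, the mediator's signal s, lazy Alice's action
    action = s if X == 1 else a_lazy    # hardworking Alice follows s; lazy Alice cannot see s
    B = 1 if action == s else 0         # Bob offers the job iff Alice's action matches s
    EU_A += p * (6 - 2 * X) * B         # U^A = (6 - 2X) * B
    EU_B += p * (6 + (10 * X - 6) * B)  # U^B = 6 + (10X - 6) * B
print(EU_A, EU_B)                       # 3.5 and 6.5, as claimed
```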
A MAID-CE can be computed in poly-time if the treewidth is bounded, via a reduction to a linear program. We follow Huang et al [21]'s method because the information sets in an EFG are in bijection with the decision contexts in a MAID, but relax beyond their conditions as MAIDs only require sufficient (rather than perfect) recall [21]. Any distribution over pure policies induced by an NE can be represented using a distribution \(\kappa\), and hence any mixed NE (or equivalent behavioural NE) is also a CE and MAID-CE. As every MAID has an NE in (mixed) policies, every MAID must also have a CE and a MAID-CE.
**Proposition 4**.: _A MAID-CE in bounded treewidth MAIDs with sufficient recall can be found in poly-time._
## 5 Complexity Results in MAIDs
We now give some complexity results in MAIDs. Our first follows from the known result in normal-form games [10]. Any normal-form game \(\mathcal{N}\) can be reduced to a MAID where each agent has one utility node (which copies the payoffs in \(\mathcal{N}\)) and one decision node. The domains of the decision variables are the set of each agent's pure strategies in \(\mathcal{N}\). Edges are added from every \(D\in\textbf{D}\) to every \(U\in\textbf{U}\).
**Proposition 5**.: _In a MAID, finding an NE in mixed policies is PPAD-hard._
In the following results, we focus on the complexity of the decision problems in Table 1.
**Proposition 6**.: Is-Best-Response _is_ NP\({}^{\text{PP}}\)_-complete,_ NP_-complete when restricted to MAIDs with graphs of bounded treewidth, and_ PP_-complete if both \(|\textbf{D}^{i}|\) and the in-degrees of \(\textbf{D}^{i}\) are bounded._
Proof sketch.: Is-Best-Response is in NP\({}^{\text{PP}}\) because given \(\dot{\boldsymbol{\pi}}^{i}\), we can verify that \(EU^{i}(\dot{\boldsymbol{\pi}}^{i},\textbf{\pi}^{-i})>q\) in poly-time using a PP oracle for inference in a BN [31]. With bounded treewidth, verification can be done in poly-time. The final setting is in PP by analogy with Kwisthout's PARAMETER TUNING [28]. For the general case's hardness, we can reduce from E-Majsat as in [40], where MAP-nodes are replaced by agent \(i\)'s decision nodes; for bounded treewidth, we can reduce from MAXSAT as in [13]; and for the final case, Is-Best-Response with \(|\textbf{D}^{i}|=0\) is the same as inference in a BN.
Proposition 6 suggests Is-Best-Response is, in general, only tractable if inference is easy _and_\(|\textbf{D}^{i}|\) is bounded by a constant. Proposition 7 then explains the decision problem's name.
**Proposition 7**.: _If the in-degrees of \(\textbf{D}^{i}\) are bounded and Is-Best-Response can be solved in poly-time, then a best response policy for agent \(i\) to a partial profile \(\textbf{\pi}^{-i}\) can be found in polynomial time._
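For contrast with Proposition 7, the naive alternative is exhaustive search: enumerate agent \(i\)'s pure policies (one action per decision context) and evaluate each against the fixed partial profile \(\boldsymbol{\pi}^{-i}\), which is exponential in the number of decision contexts. The generic sketch below is illustrative only; the evaluator can be any expected-utility routine, e.g. the brute-force enumeration sketched after Definition 2.

```python
from itertools import product

def best_pure_response(contexts, actions, evaluate):
    """Exhaustive best response: try every assignment of an action to each
    decision context and keep the pure policy with the highest expected
    utility, as reported by `evaluate` (which holds pi^{-i} fixed)."""
    best, best_eu = None, float("-inf")
    for choice in product(actions, repeat=len(contexts)):
        policy = dict(zip(contexts, choice))
        eu = evaluate(policy)
        if eu > best_eu:
            best, best_eu = policy, eu
    return best, best_eu

# Toy check: one binary decision whose context is a fair coin; utility 1 when
# the chosen action copies the coin, so the best response copies it (EU = 1).
contexts = [("heads",), ("tails",)]
evaluate = lambda pol: 0.5 * (pol[("heads",)] == "heads") + 0.5 * (pol[("tails",)] == "tails")
print(best_pure_response(contexts, ["heads", "tails"], evaluate))
```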
**Proposition 8**.: Is-Nash _is_ coNP\({}^{\text{PP}}\)_-complete, and_ coNP_-complete when restricted to MAIDs with graphs of bounded treewidth. The general problem remains_ coNP_-hard in sufficient information MAIDs. In MAIDs without chance variables, the problem remains_ coNP_-hard._
Proof sketch.: For membership, we can check that \(\textbf{\pi}\) is _not_ an NE by guessing an agent \(i\) and checking if \(\textbf{\pi}^{i}\in\textbf{\pi}\) is a best response in poly-time using a PP-oracle (this is unnecessary if the graph has bounded treewidth). Hardness comes from the single-agent setting where it is the complement of Is-Best-Response. In MAIDs without chance variables, we reduce from partial order games [51].
Proposition 3 shows when Non-Emptiness is vacuous. However, in an insufficient recall MAID, Non-Emptiness is, in general, intractable even without chance variables.
**Proposition 9**.: Non-Emptiness _is_ NEXPTIME_-hard and becomes_ NEXPTIME_-complete if we restrict to MAIDs without chance variables.
| **Problem** | **Input** | **Question** |
| --- | --- | --- |
| Is-Best-Response | \(\mathcal{M}\), \(i\), \(\boldsymbol{\pi}^{-i}\), \(q\in\mathbb{Q}\) | Is there some \(\dot{\boldsymbol{\pi}}^{i}\) such that \(EU^{i}(\dot{\boldsymbol{\pi}}^{i},\boldsymbol{\pi}^{-i})>q\)? |
| Is-Nash | \(\mathcal{M}\), \(\boldsymbol{\pi}\) | Is \(\boldsymbol{\pi}\) a (behavioural) NE of \(\mathcal{M}\)? |
| Non-Emptiness | \(\mathcal{M}\) | Does \(\mathcal{M}\) have a (behavioural) NE? |

Table 1: Three decision problems in MAIDs with behavioural policies.
Proof sketch.: For hardness, we can reduce from partial order games. Without chance variables, we can determine Non-Emptiness using a similar algorithm to that in [51]. It exploits the setting's determinism: payoffs are poly-time computable and the number of policy profiles is reduced to \(\mathcal{O}(2^{|\textbf{V}|})\).
**Proposition 10**.: _In a MAID with sufficient information, if the in-degrees of **D** are bounded and Is-Best-Response can be solved in poly-time, then a pure NE can be found in poly-time._
This result suggests an NE can be found efficiently in certain MAIDs, but even in games without sufficient information, NEs can be found more efficiently in a MAID than in an EFG. The mechanised graph dependencies reveal more 'subgames' - parts of the MAID that can be solved independently from the rest - to which dynamic programming can be applied [26, 18]. As finding an NE in both EFGs and MAIDs depends significantly on the game's size, this can empirically lead to large compute savings [26].
## 6 Applications and Conclusion
We introduced forgetfulness and absent-mindedness as properties of individual agents (due to imperfect memory). However, imperfect recall also commonly arises in _team situations_; each team consists of several agents targeting a common goal with imperfect communication. Forgetfulness or absent-mindedness occurs when an agent does not know their teammates' actions (or observations) or whether they have acted at all. Mechanised graphs represent these situations where teams often employ a mix of randomisation strategies (e.g., Figure 5(d)). For mixed policies, the random seed is chosen at the start, before the agents set out following their distinct policies. For behavioural policies, agents pick a new random seed at every decision point. Behavioural mixtures correspond to randomising at both stages.
Another application of imperfect recall in MAIDs is to _Markov (or 'stochastic') games_[45], in which the agents move between different states over time (e.g., Figure 5(c)). At each time step \(t\), each agent \(i\) selects an action \(A^{i}_{t}\), and the game probabilistically transitions to a new state \(S_{t+1}\), depending on the previous state \(S_{t}\) and the actions selected, and each agent receives a payoff \(R^{i}_{t}\). Each \(S_{t+1}\) and \(R^{i}_{t}\) has parents \(\{S_{t},A^{1}_{t},\ldots,A^{n}_{t}\}\) and must be identically distributed for all \(t\), again represented using shared mechanism variables. Often, the agent must learn a memoryless, stationary policy \(\pi^{i}:S\rightarrow\Delta(A^{i})\), where \(S\) is the set of states and \(\Delta(A^{i})\) the set of probability distributions over agent \(i\)'s actions. Hence, the agents are absent-minded (every decision \(A^{i}_{t+1}\) of agent \(i\) shares the same decision rule) and use _behavioural_ policies (since the action selected in each state is independently stochastic). In light of Proposition 1,
Figure 5: Mechanised graphs for a CE with (a) public and (b) private recommendations, where the blue edges are added for a MAID-CE; (c) a Markov game; (d) a team setting with imperfect communication.
it is therefore natural to ask whether a Markov game may not have an NE in memoryless stationary policies. It is known that infinite-horizon Markov games might not (for a counterexample see [12]). Although infinite games lie outside of the scope of this paper, it is nonetheless insightful to note that this possible non-existence is due to absent-mindedness: if agents can choose a different decision rule at each time step, a behavioural NE is guaranteed [33].
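To make the Markov-game structure above concrete, the following toy sketch (all transition and reward functions are hypothetical) evaluates memoryless behavioural policies in a two-step, two-agent Markov game by enumerating action paths; the key point is that the same decision rule \(\pi^{i}(\cdot\mid s)\) is reused at every time step, which is exactly the absent-minded sharing of decision rules in the unrolled MAID.

```python
from itertools import product

P  = lambda s, a1, a2: (a1 + a2) % 2                 # deterministic state transition
R1 = lambda s, a1, a2: 1.0 if a1 == a2 else 0.0      # agent 1 is paid for matching
R2 = lambda s, a1, a2: 1.0 if a1 != a2 else 0.0      # agent 2 is paid for mismatching

def returns(pi1, pi2, s0=0, horizon=2):
    """Expected cumulative rewards when agent i plays action 1 in state s with
    probability pi_i[s] (memoryless: the same rule at every step)."""
    g1 = g2 = 0.0
    for acts in product([0, 1], repeat=2 * horizon):
        prob, s, r1, r2 = 1.0, s0, 0.0, 0.0
        for t in range(horizon):
            a1, a2 = acts[2 * t], acts[2 * t + 1]
            prob *= (pi1[s] if a1 else 1 - pi1[s]) * (pi2[s] if a2 else 1 - pi2[s])
            r1 += R1(s, a1, a2); r2 += R2(s, a1, a2)
            s = P(s, a1, a2)
        g1 += prob * r1; g2 += prob * r2
    return g1, g2

print(returns({0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}))   # (1.0, 1.0)
```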
We have shown how to handle imperfect recall in MAIDs by overcoming the potential lack of NEs in behavioural policies using mixed and correlated equilibria. EFGs leave many assumptions about how agents play games hidden, but mechanised graphs make explicit the assumptions behind imperfect recall (both forgetfulness and absent-mindedness), mixed policies, and two types of correlated equilibria. Our complexity results highlight the importance of restricting the use of MAIDs to those with a limited number of decision variables and bounded treewidth. Finally, our applications to Markov games and team situations show that imperfect recall broadens the scope of what can be modelled using MAIDs.
AcknowledgementsThe authors wish to thank Ryan Carey, Tom Everitt, and Francis Rhys Ward for invaluable feedback, as well as three anonymous reviewers for their helpful comments. Fox was supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (Reference: EP/S024050/1), MacDermott was supported by the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (Reference: EP/S023356/1), Hammond was supported by an EPSRC Doctoral Training Partnership studentship (Reference: 2218880), and Wooldridge was supported by a UKRI Turing AI World Leading Researcher Fellowship (Reference: EP/W002949/1).
|
2310.17092 | Magnetized AdS/BCFT Correspondence in Horndeski Gravity | This work examines the thermodynamics and hydrodynamics behaviors of a
five-dimensional black hole under the influence of an external magnetic field.
The solution is the gravity dual to the Anti-de Sitter/Boundary Conformal Field
Theory correspondence, enabling the study of properties within an anisotropic
fluid framework. Utilizing holographic renormalization, we compute the free
energy and the holographic stress tensor residing on the boundary denoted as
$Q$. Within the fluid/gravity correspondence framework, we have a class of
boundary extensions in $Q$, where the stress-energy tensor describes a
magnetizing conformal fluid. We discuss the characteristics of this special
solution as well as its thermodynamic properties, including the bulk and shear
viscosity, the square of the speed of sound, as well as the anisotropic effects
induced by the magnetic field in the magnetized conformal plasma. | Fabiano F. Santos, Moisés Bravo-Gaete, Manoel M. Ferreira, Rodolfo Casana | 2023-10-26T01:28:12Z | http://arxiv.org/abs/2310.17092v3 | # Magnetized AdS/BCFT Correspondence in Horndeski Gravity
###### Abstract
This work presents an investigation of the thermodynamics and hydrodynamics of a five-dimensional black hole in the presence of an external magnetic field. The solution is the gravity dual to an Anti-de-Sitter/Boundary Conformal Field Theory (AdS/BCFT) correspondence. For this, we will establish the AdS\({}_{5}\)/BCFT\({}_{4}\) correspondence, and with it, we will study the properties of an anisotropic fluid with an external magnetic field. Using holographic renormalization we compute the free energy and holographic stress tensor residing on boundary Q. From the point of view of the fluid/gravity correspondence, we have a class of boundary extensions existing in boundary Q, for which the stress-energy tensor describes a magnetizing conformal fluid. We discuss the characteristics of this special solution, as well as its thermodynamic properties, for example, the null trace indicates that the bulk viscosity must be zero, which gives us a plasma without viscosity.
Introduction
In recent years, describing the macroscopic properties of strongly coupled matter has been a significant challenge, requiring non-perturbative methods; one such tool is provided by gravity through the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence [1; 2]. For example, in the study of matter in the plasma state, more specifically the Quark-Gluon Plasma (QGP) produced in heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), the thermodynamic and hydrodynamic properties of Quantum Chromodynamics (QCD) at high temperatures, where non-perturbative effects are relevant, are an extremely important object of study (see for example [3; 4; 5; 6; 7]). Building on this idea, there has been a rise of interest in extending the AdS/CFT duality in recent years. One important extension is the holographic duality known as the Anti-de Sitter/Boundary Conformal Field Theory (AdS/BCFT) correspondence [8; 9; 10], where in this new scenario the CFT is defined on a manifold \(\mathcal{M}\) with a boundary \(\partial\mathcal{M}\). Thus, in the holographic dual of the AdS/BCFT correspondence, the CFT defined on the \(D\)-dimensional manifold \(\mathcal{M}\) corresponds to a \((D+1)\)-dimensional asymptotically AdS space \(\mathcal{N}\) with \(\partial\mathcal{N}=\mathcal{M}\cup Q\). Here, \(Q\) corresponds to a \(D\)-dimensional manifold that satisfies \(\partial Q=\partial\mathcal{M}\) (see Figure 1).
To explore the AdS/CFT correspondence, we need to impose the Dirichlet boundary condition (DBC) at the boundary of AdS, for then to perform the DBC on \(\mathcal{M}\). But, according
Figure 1: Graphic representation of the AdS/BCFT correspondence. In this case, \(\mathcal{M}\), where the CFT is present, represents the manifold and its boundary is \(\partial\mathcal{M}\). The gravity dual is represented by \(\mathcal{N}\), whose asymptotically AdS boundary is \(\mathcal{M}\). In addition, \(\partial\mathcal{M}\) is extended into the bulk of AdS, constituting the boundary of the \(D\)-dimensional manifold \(Q\).
to [8; 9], for the AdS/BCFT duality a Neumann boundary condition (NBC) on \(Q\) is required and, from the standpoint of holography, this boundary should be dynamical [9]. Such dynamics can be introduced through the specification of the boundary conditions of the variational problem. In recent years, this framework has drawn attention as a novel way to compute transport coefficients, in which black holes (BHs) play an important role, for example, in the Hawking-Page phase transition, the Hall conductivity, and the fluid/gravity correspondence in Einstein gravity [9; 11; 12; 13] and its extensions [14; 15; 16; 17]. In addition, the nature of the AdS/BCFT duality is deeply ingrained in the holographic computation of entanglement entropy within the framework of Einstein gravity [18], Horndeski gravity [19; 20], and the Randall-Sundrum model [21]. In summary, this extension of the CFT's boundary into the bulk of the AdS space is a modification of a _thin_ Randall-Sundrum brane that intersects the AdS boundary, also in theories such as Horndeski models [16; 17]. In this way, the brane is a dynamical object when the NBC allows a discontinuity of the bulk extrinsic curvature across the defect, which is compensated by the brane tension. These boundaries are known as Randall-Sundrum (RS) branes. As an example, in Figure 2, we illustrate the boundary denoted as \(P\), determined by the condition \(y=\text{const}\), where \(y\) represents one of the coordinates on \(\mathcal{M}\). This setup corresponds to the AdS/BCFT problem defined on half of Minkowski space. The solution with \(y=\text{const}\) predicts the presence of gravity solutions with non-zero tension for the Randall-Sundrum branes [16; 17], and recent studies have demonstrated the existence of such solutions, exploring their potential to describe charged BHs [17].
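For orientation, in the pure Einstein limit and before including the Horndeski scalar contributions that enter the boundary terms below, the NBC obtained by varying the Gibbons-Hawking term together with a constant tension \(\Sigma\) and matter localized on \(Q\) takes, up to normalization conventions, the schematic form

\[K_{\alpha\beta}-\left(K-\Sigma\right)h_{\alpha\beta}=8\pi G_{N}\,T^{Q}_{\alpha\beta},\]

so that for a pure-tension boundary (\(T^{Q}_{\alpha\beta}=0\)) the extrinsic curvature is fixed as \(K_{\alpha\beta}=(K-\Sigma)h_{\alpha\beta}\); this is the precise sense in which the brane tension compensates the discontinuity of the extrinsic curvature across \(Q\).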
Together with the above, by breaking the conformal symmetry in the AdS/BCFT setup through the addition of a single Horndeski scalar with a non-zero profile in the bulk, we gain access to transport coefficients such as the bulk and shear viscosities, denoted as \(\zeta\) and \(\eta\), respectively (see for example Refs. [17; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]). In these works, the ratios \(\zeta/S\) and \(\eta/S\), where \(S\) represents the entropy, are affected by the contributions of Horndeski gravity, the magnetic field, the temperature, and the profile of \(Q\) through \(S\). The advantage of these procedures is that the results are analytic and in agreement with the numerical results obtained previously in [33]. In fact, at higher temperatures the fluid enters the plasma phase, where \(\zeta/S\to 0\) and \(\eta/S\to 0\), which is very similar to the proposal of [33]. Moreover, in the absence of a magnetic field \(B\), the usual bounds on these ratios are violated, as anticipated for a strongly coupled anisotropic plasma [34], characterized by anisotropic pressures [3; 4]. External magnetic fields, within
the theoretical framework, provide a means to probe non-perturbative aspects of QCD. In the experimental context, anisotropic effects in the QGP are expected under the influence of intense magnetic fields [5]. It is interesting to note that even for zero external magnetic fields, these results present anisotropic effects, which play an important role in the description of the QGP right after the collisions [6].
In the present paper, we are interested in studying the anisotropic effects in a four-dimensional fluid with an external magnetic field, via the AdS\({}_{5}\)/BCFT\({}_{4}\) correspondence. Additionally, using holographic renormalization, we compute the free energy and describe a family of boundary stress-energy tensors, denoted as \(T^{Q}_{\alpha\beta}\), residing on \(Q\) and consistent with the asymptotically AdS\({}_{5}\) BH in the bulk. Each of the \(T^{Q}_{\alpha\beta}\) corresponds to a hypersurface in the volume that bounds a subspace of the BH solution. With this tensor, we compute the anisotropic pressures, the susceptibility (\(\chi_{BB}\)), and the magnetization. In the UV regime, we verify that the trace of the tensor \(T^{Q}_{\alpha\beta}\) residing on \(Q\) (previously presented in [19]) vanishes for the "Pascal fluid", which indicates that the bulk viscosity is null.
This work is organized as follows: In Section II we present the gravitational setup, which contains all the information regarding the AdS\({}_{5}\)/BCFT\({}_{4}\) duality, and we show the solution. Together with the above, the charge density is obtained. Then, in Section III, we present
Figure 2: For this graphic representation, \(\mathcal{N}\) is the subspace of the bulk of AdS\({}_{D+1}\), bounded by \(Q\), which encodes the physics of \(\mathcal{M}\). \(P\) is the common boundary of \(Q\) and \(\mathcal{M}\).
the boundary \(Q\) profile. In Section IV, we perform the holographic renormalization, computing the Euclidean on-shell action, which is related to the free energy of the corresponding thermodynamic system; in particular, we focus on the BH entropy, presented in Section V, and the holographic transport coefficients, given in Section VI. In Section VII, we present the fluid/gravity correspondence. Finally, Section VIII is devoted to our conclusions and discussions.
## II The setup and equations of motion for the bulk and the BCFT side
Considering that our objective is to explore the fluid/gravity correspondence in Horndeski gravity in the presence of an external magnetic field, some elements needed to construct transport coefficients such as \(\eta/S\), \(\zeta/S\), and the stress-energy tensor \(T^{Q}_{\alpha\beta}\) are essential. The first one corresponds to the bulk side, which reads
\[S_{bulk} = S^{\mathcal{N}}_{\rm H}+S^{\mathcal{N}}_{\rm M}+S^{\mathcal{N}} _{2-{\rm FF}}+S^{\mathcal{N}}_{mat}, \tag{1}\] \[= \int_{\mathcal{N}}d^{5}x\sqrt{-g}\;\left(\kappa\mathcal{L}_{\rm H }+\kappa\mathcal{L}_{\rm M}+\lambda^{2}\mathcal{L}_{2-{\rm FF}}+\mathcal{L}_{ mat}\right),\]
where \(\kappa=1/(16\pi G_{N})\), with \(G_{N}\) being Newton's gravitational constant, \(\lambda^{2}\) is a coupling constant, and
\[\mathcal{L}_{\rm H} = (R-2\Lambda)-\frac{1}{2}(\alpha g_{\mu\nu}-\gamma\,G_{\mu\nu}) \nabla^{\mu}\phi\nabla^{\nu}\phi, \tag{2}\] \[\mathcal{L}_{\rm M} = -\frac{1}{4e^{2}}F^{\mu\nu}F_{\mu\nu},\] (3) \[\mathcal{L}_{2-{\rm FF}} = -\frac{1}{12}(dM)^{2}-\frac{m^{2}}{4}M^{\mu\nu}M_{\mu\nu}-\frac{1 }{2}M^{\mu\nu}F_{\mu\nu}-\frac{J}{8}V(M). \tag{4}\]
Here, for \(\mathcal{L}_{\rm H}\), we have that \(R=g^{\mu\nu}R_{\mu\nu}\), \(G_{\mu\nu}\), and \(\Lambda\) represent the scalar curvature, the Einstein tensor, and the cosmological constant, respectively, while \(\phi=\phi(r)\) is a scalar field and \(\alpha\) and \(\gamma\) are coupling constants. \(\mathcal{L}_{\rm M}\) represents the Maxwell Lagrangian, where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) and \(e\) is a coupling constant. The Lagrangian \(\mathcal{L}_{2-{\rm FF}}\) is constructed through a 2-form \(M_{\mu\nu}\), where \(dM=(dM)_{\tau\mu\nu}=3\nabla_{[\tau}M_{\mu\nu]}\) is the exterior differential and \((dM)^{2}=9\nabla_{[\tau}M_{\mu\nu]}\nabla^{[\tau}M^{\mu\nu]}\). \(V(M)\) describes the self-interaction of the polarization tensor, with \(J\) a constant, and \(m\) is a constant related to the mass. Finally, \(S^{\mathcal{N}}_{mat}\) is the action associated with matter sources.
For this scenario, to establish the AdS\({}_{5}\)/BCFT\({}_{4}\) correspondence, we need to construct the boundary terms. Following Refs. [16; 17], these expressions are given by
\[S_{BCFT} = 2\kappa\int_{Q}d^{4}x\sqrt{-h}\mathcal{L}_{bdry}+2\int_{Q}d^{4}x \sqrt{-h}\mathcal{L}_{mat}+2\kappa\int_{ct}d^{4}x\sqrt{-h}\mathcal{L}_{ct} \tag{5}\] \[+ S_{mat}^{Q},\]
with
\[\mathcal{L}_{bdry} = (K-\Sigma)-\frac{\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{ \mu}n^{\nu}-(\nabla\phi)^{2})K-\frac{\gamma}{4}\nabla_{\mu}\phi\nabla_{\nu} \phi K^{\mu\nu}, \tag{6}\] \[\mathcal{L}_{ct} = c_{0}+c_{1}R+c_{2}R^{ij}R_{ij}+c_{3}R^{2}+b_{1}(\partial_{i}\phi \partial^{i}\phi)^{2}+\cdots. \tag{7}\]
For the Lagrangian \(\mathcal{L}_{bdry}\), \(K_{\mu\nu}=h_{\mu}^{\ \ \beta}\nabla_{\beta}n_{\nu}\) corresponds to the extrinsic curvature, \(K=h^{\mu\nu}K_{\mu\nu}\) is its trace, \(h_{\mu\nu}\) is the induced metric, and \(n^{\mu}\) is an outward-pointing unit normal vector to the boundary of the hypersurface \(Q\). Together with the above, \(\Sigma\) is the boundary tension on \(Q\) and \(S_{mat}^{Q}\) is the matter action on \(Q\). \(\mathcal{L}_{ct}\) represents the boundary counterterms, which do not affect the bulk dynamics and will be neglected.
With this previous presentation from the bulk and boundary side for the AdS/BCFT correspondence, from (1) and (5) we can present the total action \(S\) as
\[S = S_{bulk}+S_{BCFT}. \tag{8}\]
Now, we compute the equations of motion for the action (8). These are the Einstein-Horndeski equation, which provides the BH solution; the equation that determines the profile [8; 9; 11; 13; 16]; and the equations of motion for the electromagnetic sector, the latter treated in the probe approximation as discussed in [17]. As a first step, we impose the Neumann boundary condition (NBC), which, according to [16; 17], provides:
\[K_{\alpha\beta}-h_{\alpha\beta}(K-\Sigma)-\frac{\gamma}{4}H_{ \alpha\beta}=\kappa\mathcal{S}_{\alpha\beta}^{Q}\,, \tag{9}\]
with
\[H_{\alpha\beta}:=(\nabla_{\sigma}\phi\nabla_{\rho}\phi\,n^{ \sigma}n^{\rho}-(\nabla\phi)^{2})(K_{\alpha\beta}-h_{\alpha\beta}K)-(\nabla_ {\alpha}\phi\nabla_{\beta}\phi)K\,, \tag{10}\]
and \(\mathcal{S}_{\alpha\beta}^{Q}\) represents the variation of the action \(S_{mat}^{Q}\) with respect to the induced metric \(h_{\mu\nu}\), that is, \(\mathcal{S}_{\alpha\beta}^{Q}=-(2/\sqrt{-h})(\delta S_{mat}^{Q}/\delta h^{\alpha\beta})\). Here, we consider the matter stress-energy tensor on \(Q\) to be constant, implying that \(\mathcal{S}_{\alpha\beta}^{Q}=0\).
On the other hand, from the Einstein-Horndeski model (1)-(2), assuming that \(S^{\mathcal{N}}_{mat}\) is constant, the equations of motion obtained from \(S^{\mathcal{N}}_{\rm H}\) and \(S^{Q}_{bdry}\) with respect to the dynamical fields \(g_{\mu\nu}\) and \(\phi\) are given by
\[\mathcal{E}_{\mu\nu} := -\frac{2}{\sqrt{-g}}\frac{\delta S^{\mathcal{N}}}{\delta g^{\mu\nu}}=G_{\mu\nu}+\Lambda g_{\mu\nu}-\frac{\alpha}{2}\left(\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla_{\lambda}\phi\nabla^{\lambda}\phi\right)\] \[+ \frac{\gamma}{2}\left(\frac{1}{2}\nabla_{\mu}\phi\nabla_{\nu}\phi R-2\nabla_{\lambda}\phi\nabla_{(\mu}\phi R^{\lambda}_{\nu)}-\nabla^{\lambda}\phi\nabla^{\rho}\phi R_{\mu\lambda\nu\rho}\right)\] \[+ \frac{\gamma}{2}\left(-(\nabla_{\mu}\nabla^{\lambda}\phi)(\nabla_{\nu}\nabla_{\lambda}\phi)+(\nabla_{\mu}\nabla_{\nu}\phi)\Box\phi+\frac{1}{2}G_{\mu\nu}(\nabla\phi)^{2}\right)\] \[- \frac{\gamma g_{\mu\nu}}{2}\left(-\frac{1}{2}(\nabla^{\lambda}\nabla^{\rho}\phi)(\nabla_{\lambda}\nabla_{\rho}\phi)+\frac{1}{2}(\Box\phi)^{2}-(\nabla_{\lambda}\phi\nabla_{\rho}\phi)R^{\lambda\rho}\right)=0, \tag{11}\] \[\mathcal{E}_{\phi} := -\frac{2}{\sqrt{-g}}\frac{\delta S^{\mathcal{N}}}{\delta\phi}=\nabla_{\mu}\left[\left(\alpha g^{\mu\nu}-\gamma G^{\mu\nu}\right)\nabla_{\nu}\phi\right]=\nabla_{\mu}J^{\mu}_{\phi}=0\,, \tag{12}\] \[\mathcal{F}_{\phi} := -\frac{2}{\sqrt{-h}}\frac{\delta S^{Q}_{bdry}}{\delta\phi}=-\frac{\gamma}{4}(\nabla_{\mu}\nabla_{\nu}\phi\, n^{\mu}n^{\nu}-(\nabla^{2}\phi))K-\frac{\gamma}{4}(\nabla_{\mu}\nabla_{\nu}\phi)K^{\mu\nu}=0. \tag{13}\]
In this model, we focus on a static BH. The approach outlined in Refs. [43; 44; 45; 46; 47] enables us to derive static BH configurations, thus bypassing the no-hair theorems [48]. For this particular scenario, it is essential that the radial component of the conserved current identically vanishes (\(J^{r}_{\phi}=0\)), while still allowing flexibility in the radial dependence of the scalar field \(\phi\). This condition can be expressed using eq. (12):
\[\alpha g_{rr}-\gamma G_{rr}=0\,, \tag{14}\]
and defining \(\phi^{\prime}(r):=\psi(r)\), where (\({}^{\prime}\)) denotes the derivative with respect to the radial coordinate \(r\), one can first show that the equation \(\mathcal{E}_{\phi}=0\) is satisfied. For this setup, considering the five-dimensional metric
\[ds^{2}=\frac{L^{2}}{r^{2}}\left(-f(r)\,dt^{2}+dx^{2}+dy^{2}+dw^{2}+\frac{dr^{2 }}{f(r)}\right), \tag{15}\]
where \(x_{1}\leq x\leq x_{2}\), \(y_{1}\leq y\leq y_{2}\) and \(w_{1}\leq w\leq w_{2}\), the metric function \(f(r)\) from (14) takes the form [16; 37; 46]
\[f(r)=\frac{\alpha L^{2}}{3\gamma}\left[1-\left(\frac{r}{r_{h}}\right)^{4} \right], \tag{16}\]
where the integration constant \(r_{h}\) represents the location of the event horizon, while the remaining equations of motion are satisfied when \(\psi(r)\) reads
\[\psi^{2}(r)=(\phi^{\prime}(r))^{2}=-\frac{2L^{2}\xi}{\gamma r^{2}f(r)}\,, \tag{17}\]
where we define
\[\xi=\frac{\alpha+\gamma\Lambda}{\alpha}, \tag{18}\]
and the scalar field is real only if
\[\alpha+\Lambda\gamma\leq 0.\]
For the sake of completeness, following the steps of [16, 37] via the transformations
\[f(r)\to\frac{\alpha L^{2}}{3\gamma}f(r),\qquad t\to\frac{3 \gamma}{\alpha L^{2}}t,\qquad w\to\sqrt{\frac{3\gamma}{\alpha L^{2}}}w,\] \[x\to\sqrt{\frac{3\gamma}{\alpha L^{2}}}x,\qquad y\to\sqrt{\frac {3\gamma}{\alpha L^{2}}}y,\qquad L\to\sqrt{\frac{\alpha}{3\gamma}}L^{2}, \tag{19}\]
we observe that the line element (15) remains invariant, with the metric function \(f(r)\) now adopting the following form:
\[f(r)=1-\left(\frac{r}{r_{h}}\right)^{4}. \tag{20}\]
Here, we can see that the metric function \(f(r)\) (20) has only one integration constant, without additional charges. Nevertheless, it is possible to perform a geometry-independent treatment via a probe approximation. In this case, the equations of motion for the electromagnetic field can be solved independently, allowing us to explore a finite charge and density in five dimensions. Similar scenarios in other dimensions can be found in [56]. From equations (1) and (4), we consider that \(V(M)\) reads
\[V(M)=(^{*}M_{\mu\nu}M^{\mu\nu})^{2}=[^{*}(M\wedge M)]^{2}, \tag{21}\]
where \((^{*})\) is the Hodge star operator. Our configurations are restricted to the probe approximation. Thus, from the action (1), one can derive the corresponding equations of motion for the matter fields in the probe approximation, that is, \(e^{2}\to+\infty\) and \(\lambda\to 0\), so that:
\[\nabla^{\mu}\left(F_{\mu\nu}+\frac{\lambda^{2}}{4}\,M_{\mu\nu} \right) = 0, \tag{22}\] \[\nabla^{\tau}(dM)_{\tau\mu\nu}-m^{2}M_{\mu\nu}-J(^{*}M_{\tau \sigma}M^{\tau\sigma})(^{*}M_{\mu\nu})-F_{\mu\nu} = 0\,. \tag{23}\]
As we are focusing on the probe approximation, we are going to disregard any backreaction coming from the two-form field \(M_{\mu\nu}\). In order to analyze the holographic transport
and the magnetized plasma, using the fluid/gravity duality, we consider the gauge fields \(M_{\mu\nu}\) and \(A_{\mu}\) in the following form:
\[M_{\mu\nu} = -p(r)\,dt\wedge dr+\rho(r)\,dx\wedge dy, \tag{24}\] \[A_{\mu} = A_{t}(r)\,dt+Bx\,dy,\quad F_{\mu\nu}=\partial_{\mu}A_{\nu}- \partial_{\nu}A_{\mu}. \tag{25}\]
Here, \(B\) is a constant that represents the external magnetic field. With all this information, via eqs. (15), (24)-(25) in the background (20), the field equations (22) and (23) are given by
\[A_{t}^{\prime}+\left(m^{2}-\frac{4\,J\,r^{4}\,\rho^{2}}{L^{4}} \right)\,p = 0, \tag{26}\] \[\frac{\rho^{\prime\prime}}{L^{2}}+\left(\frac{f^{\prime}}{f}+ \frac{1}{r}\right)\,\frac{\rho^{\prime}}{L^{2}}-\left(\frac{4\,J\,r^{2}\,p^{2 }}{fL^{4}}+\frac{m^{2}}{r^{2}\,f}\right)\,\rho-\frac{B}{r^{2}\,f} = 0,\] (27) \[A_{t}^{\prime\prime}-\frac{A_{t}^{\prime}}{r}+\frac{\lambda^{2} }{4}\,\left(p^{\prime}-\frac{p}{r}\right) = 0. \tag{28}\]
Given that we are working in the probe approximation, we can disregard the backreaction. As the system exhibits asymptotic AdS\({}_{5}\) behavior, approaching the boundary (that is, \(r\to 0\)) we can solve the field equations (26)-(28). The solutions in this asymptotic regime are outlined below:
\[A_{t}(r)\sim\mu-\sigma r, \tag{29}\] \[p(r)\sim\frac{4\sigma}{\lambda^{2}}(1+r),\] (30) \[\rho(r)\sim\rho_{+}r^{\Delta_{+}}+\rho_{-}r^{\Delta_{-}}+\frac{B }{m^{2}},\] (31) \[\Delta_{\pm}=\pm\,2mL. \tag{32}\]
Here, \(\rho_{+}\) and \(\rho_{-}\) are integration constants representing the source and the vacuum expectation value of the dual operator in the boundary field theory (up to a normalization factor), respectively; in order to obtain spontaneous condensation, one should take \(\rho_{+}=0\) [55]. To simplify our calculations, from eq. (31) the integration constants can be defined as \(\rho_{+}:=r_{h}^{-\Delta_{+}}\), \(\rho_{-}:=r_{h}^{-\Delta_{-}}\), and \(\rho(r)\) acquires the structure:
\[\rho(r)\sim\left(\frac{r}{r_{h}}\right)^{\Delta_{+}}+\left(\frac{r}{r_{h}} \right)^{\Delta_{-}}-\frac{B}{m^{2}}. \tag{33}\]
Beyond these conditions on the bulk side, we apply Neumann boundary conditions (NBC) to extract the ratio \(\rho/B\) in the next section.
## III Q-boundary profile
In this section, our aim is to present the five-dimensional boundary \(Q\) profile. For this, we assume that \(Q\) is parameterized through the equation \(y=y_{Q}(r)\), analyzing the influence of the Horndeski Lagrangian (2). Together with the above, the induced metric on this surface reads
\[ds_{\text{ind}}^{2}=\frac{L^{2}}{r^{2}}\left(-f(r)dt^{2}+dx^{2}+dw^{2}+\frac{g ^{2}(r)dr^{2}}{f(r)}\right), \tag{34}\]
where \(g^{2}(r)=1+{y^{\prime}}^{2}(r)f(r)\) and \((^{\prime})\), as before, denotes the derivative with respect to \(r\). The normal vectors on \(Q\) are
\[n^{\mu}=\frac{r}{Lg(r)}\left(0,0,0,\,1,\,-f(r)y^{\prime}(r)\right)\!, \tag{35}\]
and via the field equation \(\mathcal{F}_{\phi}=0\) (13), one can solve the eq. (9) (with \(\mathcal{S}_{\alpha\beta}^{Q}=0\)), yielding
\[y^{\prime}(r)\;=\;\frac{(\Sigma L)}{\sqrt{4-\frac{\xi L^{2}}{2r^{2}\left(1- \left(\frac{r}{r_{h}}\right)^{4}\right)}-(\Sigma L)^{2}\left(1-\left(\frac{r} {r_{h}}\right)^{4}\right)}} \tag{36}\]
Here, \(\xi\) is given previously in (18) and \(\Sigma L=\cos(\theta^{\prime})\), where \(\theta^{\prime}\) represents the angle between the positive direction of the \(y\) axis and \(Q\). Utilizing this information, we can generate a plot illustrating the \(y_{Q}\) profile from eq. (36), which represents the holographic depiction of the BCFT within the framework of the theory (1), given in Figure 3.
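As an illustration of how such a profile can be generated, the following is a minimal numerical sketch (our own, not the authors' code): it integrates eq. (36) for \(y_{Q}(r)-y_{0}\) using eqs. (18) and (20). The parameter values, the grid, and the trapezoidal integrator are illustrative choices; in particular, we pick \(\gamma\) such that \(\alpha+\gamma\Lambda\leq 0\), so that \(\xi\leq 0\) and the integrand stays real.

```python
# Minimal numerical sketch (not the authors' code): Q-profile from eq. (36).
# Illustrative parameters only; gamma is chosen so that alpha + gamma*Lambda <= 0.
import numpy as np

L, Lambda, alpha, rh = 1.0, -1.0, 8.0/3.0, 1.0
theta_p = 2.0*np.pi/3.0
Sigma_L = np.cos(theta_p)                  # Sigma*L = cos(theta')

def q_profile(gamma, n=4000):
    xi = (alpha + gamma*Lambda)/alpha      # eq. (18)
    r = np.linspace(1e-6, rh*(1.0 - 1e-4), n)
    f = 1.0 - (r/rh)**4                    # eq. (20)
    dy = Sigma_L/np.sqrt(4.0 - xi*L**2/(2.0*r**2*f) - Sigma_L**2*f)   # eq. (36)
    # cumulative trapezoidal integration: y_Q(r) - y_0
    y = np.concatenate(([0.0], np.cumsum(0.5*(dy[1:] + dy[:-1])*np.diff(r))))
    return r, y

for gamma in (3.0, 4.0, 8.0):
    r, y = q_profile(gamma)
    print(f"gamma = {gamma}: y_Q(r_h) - y_0 ~ {y[-1]:.4f}")
```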
On the other hand, following the steps of Refs. [11; 14; 17], the NBC on the gauge field is \(n^{\mu}F_{\mu\nu}|_{Q}=0\), while \(B=\sigma\). As in the four-dimensional situation [17], the holographic model (AdS\({}_{5}\)/BCFT\({}_{4}\)) predicts that a constant boundary current in the bulk induces a constant current on the boundary \(Q\). Furthermore, \(n^{\mu}M_{\mu\nu}|_{Q}=0\) provides
\[\frac{\rho(r)}{B}=\frac{f(r)y^{\prime}(r)}{m^{2}}. \tag{37}\]
Here, the density \(\rho\) and the magnetic field \(B\) depend on the values of the Horndeski parameters and the polarization tensor. It is interesting to note that the \(\rho/B\) ratio is the Hall conductivity, which resembles the quantum Hall effect (QHE). In this sense, our coefficients are topological. The ratio \(\rho/B\) (37) is shown in Fig. 4, where on the
boundary \(Q\), the curves of solutions in the plane \((\rho,B)\) correspond to a localized condensate [57, 58].
UV and IR regimes: In addition to the above numerical solution, we can analyze some particular cases corresponding to the ultraviolet (UV) and infrared (IR) regimes. For the first case, performing an expansion at \(r\to 0\) with, as before, \(\Sigma L=\cos(\theta^{\prime})\), equation (36) becomes
\[y_{{}_{UV}}(r)=y_{0}+\sqrt{\frac{2}{-\xi L^{2}}}\,r\cos(\theta^{ \prime}), \tag{38}\]
where \(y_{0}\) is an integration constant. In the above equation, considering \(\xi\rightarrow-\infty\), we have that
\[y_{{}_{UV}}(r)=y_{0}=\text{constant}, \tag{39}\]
which is equivalent to keeping \(\xi\) finite together with a zero-tension limit \(\Sigma\to 0\), considering the cases \(\theta^{\prime}=\pi/2\) and \(\theta^{\prime}=3\pi/2\). In this regime, the \(\rho/B\) ratio takes the
Figure 3: The figure shows the numerical solution for the \(Q\) boundary profile through eq. (36) for the BH within the theory (2), considering the cases \(\theta^{\prime}=2\pi/3\), \(\theta=\pi-\theta^{\prime}\), \(\Lambda=-1\), \(\alpha=8/3\) with \(\gamma=0\) (pink curve), \(\gamma=0.1\) (blue dashed curve), \(\gamma=0.2\) (red dot dashed curve), and \(\gamma=0.3\) (green thick curve). The region between the \(Q\) curves is the bulk \(\mathcal{N}\), while the gray parallel vertical lines represent the Randall-Sundrum branes (see eq. (39)).
form
\[\frac{\rho}{B}=\sqrt{\frac{2}{-\xi L^{2}}}\frac{\cos(\theta^{\prime})}{m^{2}}. \tag{40}\]
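For clarity (our own intermediate step), this expression follows directly from eq. (37): near the boundary \(f(r)\to 1\) and, from eq. (38), \(y^{\prime}\to\sqrt{2/(-\xi L^{2})}\cos(\theta^{\prime})\), so that

\[\frac{\rho}{B}=\frac{f(r)\,y^{\prime}(r)}{m^{2}}\;\longrightarrow\;\frac{1}{m^{2}}\sqrt{\frac{2}{-\xi L^{2}}}\cos(\theta^{\prime})\quad\text{as }r\to 0.\]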
We note that although the above result is for five dimensions, it is a consistent generalization of a known AdS\({}_{4}\)/CFT\({}_{3}\) solution, given by the four-dimensional AdS BH with plane symmetry, which allows only stress-free RS branes in the construction [11; 17]. Furthermore, a uniform static charge density must be supported by a magnetic field.
In particular, we observe that the ratio \(\rho/B\) is a constant proportional to the ratio of coefficients appearing in the Horndeski gravity (2). These analyses suggest that an extended version of the AdS\({}_{5}\) BH can effectively model a quantum Hall system situated on a plateau of transverse conductivity. To summarize, the AdS\({}_{5}\)/BCFT\({}_{4}\) configuration shows that the Hall conductivity (\(\sigma_{H}\)) is inversely proportional to the sum of coefficients associated with the topological terms present in the gravitational Lagrangian. In equation (40), this relationship is expressed as \(\sigma_{H}=\rho/B\), given by:
\[\sigma_{H}=\sqrt{\frac{2}{-\xi L^{2}}}\frac{\cos(\theta^{\prime})}{m^{2}}, \tag{41}\]
where in QHE the conductivity is related to the number of filled Landau levels (filling fraction), namely, by
\[\frac{h}{e^{2}}\sigma_{H}=\sqrt{\frac{2}{-\xi L^{2}}}\frac{\cos(\theta^{\prime} )}{m^{2}}, \tag{42}\]
where the expression \(e^{2}/h\) represents the conductance quantum. In this fashion, the holographic description appears to yield findings similar to the QHE description obtained previously in [59; 60]. We thus have an extension of the covariant version of the Hall relation \(\rho=\sigma_{H}B\), but now in the AdS\({}_{5}\)/BCFT\({}_{4}\) setting.
IR regime: For the IR case, we take \(r\rightarrow+\infty\), so that from eq. (17) we have \(\lim_{r\rightarrow+\infty}\psi^{2}(r)=0\), implying that \(\phi=\phi_{0}=\) constant, ensuring a genuine vacuum solution. Plugging this result into eq. (36):
\[y^{\prime}_{{}_{IR}}(r)\sim\left(\frac{r_{h}}{r}\right)^{2}, \tag{43}\]
and \(y^{{}^{\prime}}_{{}_{IR}}(r)\to 0\) when \(r\rightarrow+\infty\). This implies, as shown in (40), that \(\rho/B\) tends towards zero. Consequently, this value renders the on-shell action finite.
Analytical solution: For the sake of completeness, an approximate analytical solution for \(y(r)\) can be obtained by performing an expansion of eq. (36) for very small \(\xi\), given by
\[y^{\prime}_{Q}=\frac{\cos(\theta^{\prime})}{\sqrt{4-\cos^{2}(\theta^{\prime})f(r)}}+\frac{L^{2}\cos(\theta^{\prime})\xi}{4r^{2}f(r)(4-\cos^{2}(\theta^{\prime})f(r))^{3/2}}+O(\xi^{2}), \tag{44}\]
with \(f\) given previously in (20).
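A minimal symbolic check of this expansion (our own, not part of the original derivation) can be done with a first-order Taylor expansion of eq. (36) in \(\xi\); the symbol names below are ours:

```python
# Minimal symbolic check (our own): the small-xi expansion (44) follows from eq. (36).
import sympy as sp

r, L, xi, c, f = sp.symbols('r L xi c f', positive=True)   # c = cos(theta'), f = f(r)

yprime = c/sp.sqrt(4 - xi*L**2/(2*r**2*f) - c**2*f)        # eq. (36) with Sigma*L = c

# First-order Taylor expansion around xi = 0
taylor = yprime.subs(xi, 0) + sp.diff(yprime, xi).subs(xi, 0)*xi

expected = (c/sp.sqrt(4 - c**2*f)
            + L**2*c*xi/(4*r**2*f*(4 - c**2*f)**sp.Rational(3, 2)))   # eq. (44)

print(sp.simplify(taylor - expected))                      # -> 0
```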
## IV Holographic renormalization
In order to describe both the thermodynamic and hydrodynamic coefficients of a conformal fluid in the presence of a magnetic field, in this section we will calculate the Euclidean on-shell action, which is related to the free energy of the corresponding thermodynamic system. Thus, our holographic scheme takes into account the contributions of AdS\({}_{5}\)/BCFT
correspondence within the Horndeski model. Let us start with the Euclidean action given by \(I_{E}=I_{bulk}+2I_{bdry}\)
\[I_{bulk} = -\frac{1}{16\pi G_{N}}\int_{\mathcal{N}}d^{5}x\sqrt{g}\Big{(}R-2 \Lambda+\frac{\gamma}{2}G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi\Big{)} \tag{45}\] \[- \frac{1}{8\pi G_{N}}\int_{\mathcal{M}}d^{4}x\sqrt{\bar{\gamma}} \Big{(}K^{(\bar{\gamma})}-\Sigma^{(\bar{\gamma})}-\frac{\gamma}{4}(\nabla_{\mu }\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}\] \[- (\nabla\phi)^{2})K^{(\bar{\gamma})}-\frac{\gamma}{4}\nabla^{\mu} \phi\nabla^{\nu}\phi K^{(\bar{\gamma})}_{\mu\nu}\Big{)}.\]
In eq. (45), \(g\) is the determinant of the metric \(g_{\mu\nu}\) on the bulk \(\mathcal{N}\), \(\bar{\gamma}\) is the induced metric, and the surface tension (extrinsic curvature) on \(\mathcal{M}\) is denoted by \(\Sigma^{(\bar{\gamma})}\) (\(K^{(\bar{\gamma})}\)). The boundary side is governed by \(I_{bdry}\)
\[I_{bdry} = -\frac{1}{16\pi G_{N}}\int_{\mathcal{N}}d^{5}x\sqrt{g}\left(R-2 \Lambda+\frac{\gamma}{2}G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi\right)\] \[- \frac{1}{8\pi G_{N}}\int_{Q}d^{4}x\sqrt{h}\Big{(}(K-\Sigma)-\frac {\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla\phi)^{2} )K\] \[- \frac{\gamma}{4}\nabla^{\mu}\phi\nabla^{\nu}\phi K_{\mu\nu}\Big{)}.\]
To construct the bulk action \(I_{bulk}\), we need to consider the induced metric on the bulk, which is obtained from the metric Ansatz (15) after the transformation \(\tau=it\), given by
\[ds_{ind}^{2}=\bar{\gamma}_{\mu\nu}dx^{\mu}dx^{\nu}=\frac{L^{2}}{r^{2}}\left(f( r)d\tau^{2}+dx^{2}+dy^{2}+dw^{2}+\frac{dr^{2}}{f(r)}\right), \tag{47}\]
where \(0\leq\tau\leq\beta\) with
\[\beta=\frac{1}{T}=\left(\frac{|f^{\prime}(r)|}{4\pi}\Big{|}_{r=r_{h}}\right)^ {-1}=\pi\,r_{h}, \tag{48}\]
and \(T\) is the Hawking temperature, obtained from eqs. (15) and (20). Now, using these elements, we can construct the bulk action \(I_{bulk}\). As part of the process of holographic renormalization, in order to remove IR divergences on the bulk side, we introduce a cutoff \(\epsilon\)
\[I_{bulk}=\frac{1}{16\pi G_{N}}\int d^{3}x\int_{0}^{\beta}d\tau \int_{\epsilon}^{r_{h}}dr\sqrt{g}\left(R-2\Lambda+\frac{\gamma}{2}G^{rr}\psi ^{2}(r)\right)\] \[+\frac{1}{16\pi G_{N}}\int d^{3}x\int_{0}^{\beta}d\tau\frac{L^{2} \sqrt{f(\epsilon)}}{\epsilon^{4}}=-\frac{L^{2}V}{8r_{h}^{3}G}\left(1-\frac{ \xi}{4}\right), \tag{49}\]
with \(\xi\) given previously in (18) and, in our notation, \(V=\int d^{3}x=\Delta x\Delta y\Delta w=(x_{2}-x_{1})(y_{2}-y_{1})(w_{2}-w_{1})\). Now, to compute \(I_{bdry}\), we introduce a cutoff \(\epsilon\) to remove the UV divergence on the boundary side, which reads
\[I_{bdry}=\frac{r_{h}L^{2}\Delta y_{Q}}{2G_{N}}\left(1-\frac{\xi}{4}\right)\int _{\epsilon}^{r_{h}}\frac{\Delta y_{Q}(r)}{r^{5}}dr-\frac{r_{h}L^{2}\sec( \theta^{\prime})\Delta y_{Q}}{2G_{N}}\int_{\epsilon}^{r_{h}}\frac{\Delta y_{Q} (r)}{r^{4}}dr. \tag{50}\]
In the above equation, \(\Delta y_{Q}\) is a constant and \(\Delta y_{Q}(r):=y_{Q}(r)-y_{0}\) is obtained from equation (44) after an integration with respect to \(r\). From the point of view of the AdS/CFT correspondence, IR divergences in AdS correspond to UV divergences in the CFT; this relation is known as the IR-UV connection. Thus, based on this duality, we can reduce equation (50) to the following form:
\[I_{bdry}=-\frac{L^{2}\Delta\,y_{Q}}{2G_{N}}\left(1-\frac{\xi}{4} \right)\left(\frac{\xi\,L^{2}b(\theta^{\prime})}{5r_{h}^{3}}-\frac{q(\theta^{ {}^{\prime}})}{3r_{h}}\right)\] \[+\frac{L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}}{2G_{N}}\left( \frac{\xi\,L^{2}b(\theta^{\prime})}{5r_{h}^{2}}-\frac{q(\theta^{{}^{\prime}})} {2}\right), \tag{51}\]
where
\[b(\theta^{\prime})=\frac{\cos(\theta^{\prime})}{\sqrt{2}(7-\cos^{2}(\theta^{ \prime}))^{3/2}},\qquad q(\theta^{\prime})=\frac{\sqrt{2}\cos(\theta^{\prime}) }{\sqrt{7-\cos^{2}(\theta^{\prime})}}\,. \tag{52}\]
Given the preceding details, and utilizing eqs. (49) and (51)-(52), we can compute \(I_{E}=I_{bulk}+2I_{bdry}\) as:
\[I_{E}=-\frac{L^{2}V}{8r_{h}^{3}G_{N}}\left(1-\frac{\xi}{4}\right) -\frac{L^{2}\Delta\,y_{Q}}{G_{N}}\left(1-\frac{\xi}{4}\right)\left(\frac{\xi\, L^{2}b(\theta^{\prime})}{5r_{h}^{3}}-\frac{q(\theta^{\prime})}{3r_{h}}\right)\] \[+\frac{L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}}{G_{N}}\left(\frac {\xi\,L^{2}b(\theta^{\prime})}{5r_{h}^{2}}-\frac{q(\theta^{{}^{\prime}})}{2} \right). \tag{53}\]
In this context, \(I_{E}\) represents the approximate analytical expression for the Euclidean action. This equation plays a crucial role in formulating the free energy and extracting all thermodynamic quantities within our framework, as demonstrated in the subsequent section.
## V Black hole entropy
This section is devoted to presenting the BH entropy with the contributions of our setup. This entropy is computed via the free energy and, with this information, we extract the thermodynamic quantities that we want to explore in the present work, namely the BCFT entropy, the specific heat, the magnetization, the \(\eta/S\) ratio, and the \(\zeta/S\) ratio. The boundary entropy receives a contribution from the magnetic field, which is implemented through the DBCs. Besides, all quantities are related to the BH considering the contributions of the AdS/BCFT correspondence in Horndeski gravity. The free energy is defined as
\[\Omega=\epsilon-TS=TI_{E}\,, \tag{54}\]
where the entropy and the energy density, denoted as \(S\) and \(\epsilon\) respectively, reads
\[S=-\frac{\partial\,\Omega}{\partial T},\qquad\epsilon=\Omega-T\left(\frac{ \partial\,\Omega}{\partial T}\right), \tag{55}\]
and \(T\) is, as before, the Hawking temperature. By plugging in the Euclidean _on-shell action_ \(I_{E}\) from eq. (53), as well as \(T\) obtained previously in (48), we have
\[S_{\rm total} = S_{\rm bulk}+S_{\rm bdry}, \tag{56}\]
where
\[S_{\rm bulk} = \frac{L^{2}V}{4r_{h}^{3}G_{N}}\left(1-\frac{\xi}{4}\right), \tag{57}\] \[S_{\rm bdry} = \frac{L^{2}\Delta\,y_{Q}}{G_{N}}\left(1-\frac{\xi}{4}\right) \left(\frac{\xi\,L^{2}b(\theta^{\prime})}{5r_{h}^{3}}-\frac{q(\theta^{{}^{ \prime}})}{3r_{h}}\right)\] (58) \[- \frac{L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}}{G_{N}}\left(\frac {\xi\,L^{2}b(\theta^{\prime})}{5r_{h}^{2}}-\frac{q(\theta^{{}^{\prime}})}{2} \right).\]
The meaning behind this overall entropy (58) aligns with the Bekenstein-Hawking formula associated with BHs
\[S_{BH}=\frac{A}{4G_{N}}\,, \tag{59}\]
where, in this case the total area \(A\) is given by
\[A = \frac{L^{2}V}{2r_{h}^{3}}\left(1-\frac{\xi}{4}\right)+4L^{2} \Delta\,y_{Q}\left(1-\frac{\xi}{4}\right)\left(\frac{\xi\,L^{2}b(\theta^{ \prime})}{5r_{h}^{3}}-\frac{q(\theta^{{}^{\prime}})}{3r_{h}}\right) \tag{60}\] \[-4L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}\left(\frac{\xi\,L^{2}b (\theta^{\prime})}{5r_{h}^{2}}-\frac{q(\theta^{{}^{\prime}})}{2}\right),\]
enjoying new contribution terms due to the model (2) and (6) for the bulk and the boundary \(Q\). Just for completeness, the entropy \(S\) from eqs. (55)-(58), as a function of the Hawking temperature \(T\) (48), is represented graphically in Fig. 5. In the left panel there is an external magnetic field \(B\), while in the right panel \(B=0\). Specifically, in the left panel we observe behavior akin to what was found in [3; 4]. On the other hand, in the right panel, even when the external magnetic field is nullified, there persists a state of disorder attributed to the residual entropy (61). This type of entropy can serve as a constraint, suggesting potential violations of bounds on hydrodynamic coefficients in the context of holographic transport.
The information is bounded by the BH area \(A\). In our particular scenario, it behaves analogously to a conformal plasma, reminiscent of QCD models. This behavior provides
insight into the interpretation of the Buchel bound [7], identifying it as residual information stemming from the BH. Furthermore, the bound represented by equation (60) implies that the storage of information increases as the magnitude of \(\xi\) increases, as long as \(\xi<0\). This bound encodes the residual boundary entropy, and its manifestation is revealed when taking the limit as \(T\to 0\) (or \(r_{h}\to\infty\)) in eq. (58), which reads
\[S_{bdry}^{res}=\frac{L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}q(\theta^{\prime}) }{2G_{N}}. \tag{61}\]
In this limit, a residual boundary entropy emerges, as discussed in detail in Ref. [16] in the context of three dimensions. In addition to the interpretation mentioned previously, it is anticipated that residual information is encoded in the entropy of paramagnetic materials (see for example Ref. [17]). Moreover, we note that the contribution from the boundary (58) aligns with the entropy of the BCFT, with corrections from the Horndeski terms parametrized by \(\xi\) as defined in eq. (18). Consequently, we can assert that \(S_{bdry}^{res}\) represents the natural Buchel bound for Horndeski gravity. These findings remain consistent with the established results in [11, 13] when approaching the limit \(\xi\to 0\). In addition to the point concerning residual entropy, there is another contribution stemming from the DBC, as outlined in eq.
Figure 5: Left panel: The behavior of the entropy \(S\) versus the temperature \(T\) for \(B\neq 0\) (considering for this case \(B=(4/5)T\)). Right panel: The behavior of \(S\) with respect to \(T\) for \(B=0\). For both cases, we consider the values for \(\alpha=8/3\), \(m=1/8\), \(\rho=1/4\), \(\Lambda=-1\), \(V=1\), \(G_{N}=1\), \(\theta^{\prime}=2\pi/3\) with \(\gamma=1\) (pink curve), \(\gamma=4\) (red dot dashed curve), \(\gamma=8\) (green thick curve).
(40), denoted as the magnetic boundary (\(S^{magnetic}_{bdry}\)) and expressed as follows:
\[S^{magnetic}_{bdry} = \frac{L^{2}\Delta\,y_{Q}}{G_{N}}\left(1-\frac{\xi}{4}\right)\left(- \frac{2B^{2}\cos^{2}(\theta^{\prime})}{m^{2}\rho^{2}}\frac{b(\theta^{\prime})}{ 5r_{h}^{3}}+\frac{q(\theta^{\prime})}{3r_{h}}\right) \tag{62}\] \[- \frac{L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}}{G_{N}}\left(- \frac{2B^{2}\cos^{2}(\theta^{{}^{\prime}})}{m^{2}\rho^{2}}\frac{b(\theta^{ \prime})}{5r_{h}^{2}}+\frac{q(\theta^{{}^{\prime}})}{2}\right),\]
Here, \(S^{magnetic}_{bdry}\) is the entropy bound, which is restricted by \(m^{2}\), coming from eq. (32). In the probe limit, the charge density contributing to the magnetized plasma must be finite.
## VI Thermodynamic quantities and transport coefficients
In this section, we will analyze a variety of thermodynamic parameters, encompassing the magnetization and hydrodynamic variables. We will show that even in the absence of an external magnetic field, our system undergoes a phase transition, evidenced by the behavior of the specific heat at constant volume. This transition is indicated by the formation of a condensate, as signified by the presence of bulk viscosity. To delve into the thermodynamic properties, we commence by examining the free energy \(\Omega\) as expressed in eqs. (53)-(54). The first law of black hole thermodynamics in the canonical ensemble guides our analysis
\[d\Omega=-SdT-PdV, \tag{63}\]
where, together with the entropy \(S\) as well as the Hawking temperature \(T\), we also consider the presence of pressure \(P\) and volume \(V\).
Heat capacity: To begin our exploration of thermodynamic quantities, we first examine the heat capacity \(C_{V}\), which allows us to analyze local thermodynamic stability and is defined as follows:
\[C_{V}=T\bigg{(}\frac{\partial S}{\partial T}\bigg{)}_{V,B}=-T\bigg{(}\frac{ \partial^{2}\Omega}{\partial T^{2}}\bigg{)}_{V,B}, \tag{64}\]
where the subscripts \(V\) and \(B\) in eq. (64) represent the volume and magnetic field, respectively, and the derivative is performed only with respect to the temperature. In Fig. 6, we can see from both panels that the BH can undergo transitions between stable (\(C_{V}>0\)) and unstable (\(C_{V}<0\)) phases. This phase transition is instigated by the spontaneous electrical polarization and the Horndeski parameters within the model, whether an external magnetic field is present or not. Sound speed: Another quantity of
interest is the sound speed \(c_{s}^{2}\), defined as:
\[c_{s}^{2}\equiv\frac{\partial p}{\partial\epsilon}=\frac{\partial T}{\partial \epsilon}\frac{\partial p}{\partial T}\,. \tag{65}\]
Identifying
\[\frac{\partial T}{\partial\epsilon}=\left(\frac{\partial\epsilon}{\partial T} \right)^{-1}=C_{V}^{-1}\,;\qquad\frac{\partial p}{\partial T}=S\,, \tag{66}\]
we have1
Footnote 1: It is also very common to describe the sound speed also as \(c_{s}^{2}=\frac{\partial(\ln T)}{\partial(\ln S)}\), where, as before, \(T\) is the Hawking Temperature and \(S\) is the entropy [16].
\[c_{s}^{2}\,=\,\frac{S}{C_{V}}. \tag{67}\]
In Fig. 7, we present the behavior of the sound speed \(c_{s}^{2}\) versus the temperature \(T\). The sound speed in the left panel deviates from the value \(1/3\) due to the anisotropy, which is acquired from the magnetic field and the Horndeski parameters. The right panel shows the sound speed for zero magnetic field; this case agrees with the phase of a conformal system for small values of the Horndeski coupling, for example, \(c_{s}^{2}=1/3\) for \(\gamma=1\) (pink curve). For large values of \(\gamma\), the residual term of the entropy associated with the boundary \(Q\) shifts
Figure 6: Left panel: The behavior of the heat capacity \(C_{V}\) with the temperature \(T\) (considering for this case \(B=(4/5)T\)). Right panel: The behavior of the heat capacity \(C_{V}\) with respect to the temperature \(T\) for \(B=0\). For these cases, we consider the values \(\alpha=8/3\), \(m=1/8\), \(\rho=1/4\), \(\Lambda=-1\), \(\theta^{\prime}=2\pi/3\) with \(\gamma=1\) (pink curve), \(\gamma=4\) (red dot dashed curve), \(\gamma=8\) (green thick curve).
the value from \(c_{s}^{2}=1/3\) to \(c_{s}^{2}\sim 0.26\) for \(\gamma=4\) (red dot dashed curve) and to \(c_{s}^{2}\sim 0.24\) for \(\gamma=8\) (green thick curve), respectively.
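The following minimal symbolic sketch (our own, not the authors' code) illustrates the chain of relations used above: starting from the entropy of eqs. (56)-(58), with \(r_{h}=1/(\pi T)\) from eq. (48), it evaluates \(C_{V}=T\,\partial S/\partial T\) and \(c_{s}^{2}=S/C_{V}\), and checks that switching off the boundary contribution reproduces the conformal value \(c_{s}^{2}=1/3\). The symbol names are ours, and \(b\), \(q\), \(\sec\theta^{\prime}\), and \(\Delta y_{Q}\) are treated as temperature-independent constants, as in eqs. (52) and (58).

```python
# Minimal symbolic sketch (not the authors' code): C_V and c_s^2 from the entropy
# of eqs. (56)-(58), with r_h = 1/(pi*T) (eq. (48)), C_V = T dS/dT (eq. (64)),
# and c_s^2 = S/C_V (eq. (67)).
import sympy as sp

T = sp.symbols('T', positive=True)
L, G, V, xi, dyQ, b, q, sec = sp.symbols('L G_N V xi Delta_y_Q b q sec_theta')

rh = 1/(sp.pi*T)                                                        # eq. (48)

S_bulk = L**2*V/(4*rh**3*G)*(1 - xi/4)                                  # eq. (57)
S_bdry = (L**2*dyQ/G*(1 - xi/4)*(xi*L**2*b/(5*rh**3) - q/(3*rh))
          - L**2*sec*dyQ/G*(xi*L**2*b/(5*rh**2) - q/2))                 # eq. (58)
S_tot = S_bulk + S_bdry

C_V  = sp.simplify(T*sp.diff(S_tot, T))                                 # eq. (64)
c_s2 = sp.simplify(S_tot/C_V)                                           # eq. (67)

# Boundary contribution switched off (Delta_y_Q -> 0): conformal value 1/3.
print(sp.simplify(c_s2.subs(dyQ, 0)))                                   # -> 1/3
```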
Magnetization density and magnetic susceptibility: Following the steps from Ref. [58], the magnetization density \(M\) and magnetic susceptibility \(\chi\) can be derived, where in our
Figure 8: Left panel: The behavior of the \(M\) with respect to the temperature \(T\) (considering for this case \(B=(4/5)T\)). Right panel: The behavior of the \(1/\chi\) with respect to \(T\). For both cases, we consider \(\alpha=8/3\), \(m=1/8\), \(\rho=1/4\), \(\Lambda=-1\), \(\theta^{\prime}=2\pi/3\) with \(\gamma=1\) (pink curve), \(\gamma=4\) (red dot dashed curve), \(\gamma=8\) (green thick curve).
Figure 7: Left panel: The behavior of the sound speed \(c_{s}^{2}\) versus the temperature \(T\) (considering for this case \(B=(4/5)T\)). Right panel: The behavior of the sound speed \(c_{s}^{2}\) with respect to the temperature \(T\) for \(B=0\). In this case are considered: \(\alpha=8/3\), \(m=1/8\), \(\rho=1/4\), \(\Lambda=-1\), \(\theta^{\prime}=2\pi/3\) with \(\gamma=1\) (pink curve), \(\gamma=4\) (red dot dashed curve), \(\gamma=8\) (green thick curve).
case take the form
\[M=-\left(\frac{\partial\,\Omega}{\partial B}\right)=\frac{L^{2} \Delta\,y_{Q}T}{G_{N}}\left(1-\frac{\xi}{4}\right)\left(\frac{4B\cos^{2}(\theta ^{{}^{\prime}})}{m^{2}\rho^{2}}\frac{b(\theta^{\prime})}{5r_{h}^{3}}\right)\] \[-\frac{L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}T}{G_{N}}\left( \frac{B\cos(\theta^{{}^{\prime}})}{m^{2}\rho^{2}}\frac{b(\theta^{\prime})}{5r_ {h}^{2}}\right), \tag{68}\] \[\chi=\left(\frac{\partial^{2}\Omega}{\partial B^{2}}\right)=- \frac{L^{2}\Delta\,y_{Q}T}{G_{N}}\left(1-\frac{\xi}{4}\right)\left(\frac{4\cos ^{2}(\theta^{\prime})}{m^{2}\rho^{2}}\frac{b(\theta^{\prime})}{5r_{h}^{3}}\right)\] \[+\frac{L^{2}\sec(\theta^{\prime})\Delta\,y_{Q}T}{G_{N}}\left( \frac{\cos(\theta^{{}^{\prime}})}{m^{2}\rho^{2}}\frac{b(\theta^{\prime})}{5r_ {h}^{2}}\right). \tag{69}\]
Here we can see that \(M=-\chi B\); the RS brane behaves like a paramagnetic material, that is, when we remove the external magnetic field, the magnetization (68) vanishes and the disorder linked to the entropy increases, as shown in Fig. 5. From eq. (69), the susceptibility \(\chi\) is not zero for zero magnetic field (i.e. \(B=0\)). For the sake of completeness, both cases are represented in Figure 8.
On the other hand, it is crucial to consider additional quantities to comprehend the plasma phase within the model, namely the ratios \(\eta/S\) and \(\zeta/S\). These ratios are functions of the Horndeski parameters, the magnetic field, and the boundary \(Q\), which is associated with the boundary entropy. In order to be as clear as possible, the details of the computation of the ratios \(\eta/S\) and \(\zeta/S\) are presented in Appendix A.
Shear viscosity: In particular, with respect to the \(\eta/S\) ratio, from Fig. 9 we can analyze the dependence of the
Figure 9: Left panel: The behavior of the \(\eta/S\) ratio as a function of the temperature \(T\) for \(\alpha=8/3\), \(B=(4/5)T\), \(\rho=1/4\), \(\Lambda=-1\), \(\gamma=1\) (pink curve), \(\gamma=2\) (red dot dashed curve), \(\gamma=2.5\) (green thick curve). Right panel: The behavior of \(\eta/S\) for \(B=0\) considering the same values shown previously.
viscosity on the magnetic field, characterizing a magnetic side effect and describing the slow relaxation of the magnetization of paramagnetic materials when they acquire magnetization in the presence of an external magnetic field \(B\) (left panel of Fig. 9). In the right panel, we can observe that, over an interval of the temperature \(T\), the \(\eta/S\) ratio is an increasing function when \(B=0\). On the other hand, as we can see from Fig. 10, at a fixed temperature \(T\), for the paramagnetic material represented by the RS brane we obtain the relation between \(\eta/S\) and the magnetic field \(B\), which is a decreasing function. Here, when \(B\) becomes large, we have that \(\eta/S\to 0\).
Bulk viscosity: The \(\zeta/S\) ratio presented here exhibits results similar to those presented in [31, 33]. However, now a distinctive feature is that:
\[\frac{\zeta}{\eta}=\frac{\sqrt{3}}{6}\sqrt{\frac{\alpha+\gamma\Lambda}{3 \alpha+\gamma\Lambda}}, \tag{70}\]
where for \(\alpha=-\gamma\Lambda\) no condensate forms, implying \(\zeta=0\). This outcome aligns with the prediction in [33], implying no contribution of bulk viscosity (\(\zeta_{BCFT}=0\)) on the BCFT side. The above expression indicates that in the plasma phase of the fluid the scalar field disappears, and the fluid is not subject to bulk viscosity. Thus, it is interesting to note that we have a fluid with low shear viscosity going over to the plasma state with low bulk viscosity, with or without an external magnetic field. For a visual representation, refer to Fig. 11 and Fig. 12. Together with the above, we note that this represents a plasma state composed of
quarks and gluons, as illustrated in [4], wherein only vector bosons are present and scalars are absent. As the temperature increases, as depicted in Fig. 13, both \(\eta/S\) and \(\zeta/S\) tend towards zero. Furthermore, in the limit as \(\gamma\) approaches zero, we recover the result of the Chamblin-Reall background in five dimensions (\(\zeta/\eta=1/6\)), as discussed in [33]. Hence, our results exhibit remarkable consistency when we exclude the Horndeski contributions, which are controlled by the \(\gamma\) parameter. Additionally, in Fig. 13 the behavior is akin to the case presented in [33]. The curves represent the boundary contributions stemming from the residual boundary entropy, akin to the Buchel bound. In the theory considered here, this is the case where the bulk is located at the boundary.
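As a quick consistency check (our own evaluation of eq. (70)), the two limits quoted above follow immediately:

\[\gamma\to 0:\quad\frac{\zeta}{\eta}\to\frac{\sqrt{3}}{6}\sqrt{\frac{\alpha}{3\alpha}}=\frac{1}{6},\qquad\qquad\alpha=-\gamma\Lambda:\quad\frac{\zeta}{\eta}=\frac{\sqrt{3}}{6}\sqrt{\frac{0}{2\alpha}}=0.\]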
Figure 11: Left panel: The behavior of the \(\zeta/S\) ratio as a function of the temperature \(T\) (where for our case we consider \(B=(4/5)T\)) for \(\alpha=8/3\), \(\rho=1/4\), \(\Lambda=-1\), \(\gamma=1\) (pink curve), \(\gamma=2\) (red dot dashed curve), \(\gamma=2.5\) (green thick curve). Right panel: The behavior of \(\zeta/S\) for \(B=0\) considering the same values shown previously.
Figure 12: The behavior of \(\zeta/S\) with respect to the magnetic field \(B\), for different values for \(\alpha=8/3\), \(T=4/5\), \(\rho=1/4\), \(\Lambda=-1\), \(\gamma=1\) (pink curve), \(\gamma=2\) (red dot dashed curve), \(\gamma=2.5\) (green thick curve).
The Buchel bound is described by equation (70); the deviation from the conventional sound speed signifies this bound in our scenario.
## VII Fluid/gravity correspondence
In this section, we present the fluid/gravity correspondence with an external magnetic field. For this, we need the stress-energy tensor of the boundary field theory in Horndeski gravity [19]. Through the renormalization procedure, the stress-energy tensor \(T_{\alpha\beta}\) can be written as:
\[T_{\alpha\beta}=-\frac{L^{2}}{16\pi G_{N}r^{2}}\left[K_{\alpha \beta}-h_{\alpha\beta}(K-\Sigma)+\frac{\gamma}{4}H_{\alpha\beta}-\kappa T_{ \alpha\beta}^{R}-\kappa T_{\alpha\beta}^{ct}\right], \tag{71}\] \[H_{\alpha\beta}=(\nabla_{\alpha}\phi\nabla_{\beta}\phi n^{ \alpha}n^{\beta}-(\nabla\phi)^{2})(K_{\alpha\beta}-h_{\alpha\beta}K)-(\nabla_ {\alpha}\phi\nabla_{\beta}\phi)K. \tag{72}\]
Here, \(T_{\alpha\beta}^{R}\) and \(T_{\alpha\beta}^{ct}\) are the possible contributions of the extrinsic curvature and the counterterm, respectively. However, fixing the energy-momentum tensor on the boundary with \(T_{\alpha\beta}^{R}=T_{\alpha\beta}^{ct}=0\), we have
\[T_{\alpha\beta}=-\frac{L^{2}}{16\pi G_{N}r^{2}}\left[K_{\alpha \beta}-h_{\alpha\beta}(K-\Sigma)+\frac{\gamma}{4}H_{\alpha\beta}\right]. \tag{73}\]
Using the induced metric:
\[h_{\alpha\beta}=-\frac{L^{2}f}{r^{2}}dt^{2}+\frac{L^{2}}{r^{2}}dx ^{2}+\frac{L^{2}}{r^{2}}dw^{2}+\frac{L^{2}g^{2}}{r^{2}f}dr^{2}, \tag{74}\]
Figure 13: Left panel: the behavior of \(\zeta/S\) with respect to T. Right panel: The behavior of \(\eta/S\) v/s \(T\). For both situations, we consider \(\alpha=8/3\), \(B=4/5\), \(\rho=1/4\), \(\Lambda=-1\), \(\gamma=1\) (pink curve), \(\gamma=2\) (red dot dashed curve), \(\gamma=2.5\) (green thick curve).
we have that
\[K_{\alpha\beta}=-\frac{L}{kz}\left[\begin{array}{cccc}\frac{Lrfy^{\prime}}{2g} \left(\frac{f}{r^{2}}\right)^{\prime}&0&0&0\\ 0&\frac{Lfy^{\prime}}{r^{2}g}&0&0\\ 0&0&\frac{Lfy^{\prime}}{r^{2}g}&0\\ 0&0&0&-\frac{Ly^{\prime\prime}}{rg}+\frac{Ly^{\prime}g}{r^{2}}+\frac{Ly^{ \prime}}{r^{2}g}+\frac{Lrfy^{\prime}}{2g}\left(\frac{1}{r^{2}f}\right)^{\prime }\end{array}\right],\]
where \(K\) is given by
\[K=-\frac{1}{2L}\frac{r^{3}y^{\prime}}{g}\left(\frac{f}{r^{2}}\right)^{\prime} +\frac{2}{L}\frac{fy^{\prime}}{g}+\frac{1}{L}\frac{fy^{\prime}}{g^{3}}-\frac{ 1}{L}\frac{rfy^{\prime\prime}}{g^{3}}+\frac{1}{2L}\frac{r^{3}f^{2}y^{\prime} }{g^{3}}\left(\frac{1}{r^{2}f}\right)^{\prime}. \tag{75}\]
The energy density \(\epsilon\) and pressure \(p\) are given by
\[\epsilon=u^{\mu}u^{\nu}T_{\mu\nu},\qquad p=\frac{1}{2}\left(\epsilon+T_{\mu}^{\mu}\right), \tag{76}\]
with velocity \(u^{\mu}=\frac{dx^{\mu}}{d\tau}\) calculated in the moving frame. The velocity is given by
\[(u^{t},u^{x},u^{w},u^{r})=\left(\frac{r}{L\sqrt{f(r)}},0,0,0\right). \tag{77}\]
Thus, we can write them respectively in the form
\[\epsilon = \frac{L^{2}}{16\pi G_{N}r^{3}}\left(\Sigma L+\frac{ry^{{}^{ \prime}}f^{{}^{\prime}}+2rfy^{{}^{\prime\prime}}-2f^{2}y^{{}^{\prime}3}-2y^{{ }^{\prime}}f}{2(1+y^{{}^{\prime}2}f)^{3/2}}\right)+\xi\,L^{2}{\cal A}, \tag{78}\] \[p_{xx} = -\frac{L^{2}}{16\pi G_{N}r^{3}}\left(\Sigma L-\frac{2(2f-rf^{{} ^{\prime}})y^{{}^{\prime}}+f(4f-rf^{{}^{\prime}})y^{{}^{\prime}3}-2rfy^{{}^{ \prime\prime}}}{2(1+y^{{}^{\prime}2}f)^{3/2}}\right)\] (79) \[+ \frac{\xi\,L^{2}{\cal A}}{2},\] \[p_{rr} = -\frac{L^{2}}{16\pi G_{N}r^{3}}\left(\Sigma L-\frac{4y^{{}^{ \prime}}f-ry^{{}^{\prime}}f^{{}^{\prime}}}{2(1+y^{{}^{\prime}2}f)^{1/2}} \right)-\frac{\xi\,L^{2}}{2}{\cal A}, \tag{80}\]
where \(p_{xx}=p_{yy}=p_{ww}\) and
\[{\cal A}=\frac{r^{3}y^{\prime}f^{\prime}-2f^{\prime}y^{\prime}-10fy^{\prime}} {2r^{2}fgL}+\frac{g}{r^{4}f^{2}}\left(\frac{1}{2}y^{\prime}f^{\prime}+2y^{ \prime}\right)+\frac{rf^{\prime}y^{\prime}-fy^{\prime}+rfy^{\prime\prime}}{2 fLg^{3}}. \tag{81}\]
Through the equation (40), we have that
\[\epsilon = \frac{L^{2}}{16\pi G_{N}r^{3}}\left(\Sigma L+\frac{ry^{{}^{ \prime}}f^{{}^{\prime}}+2rfy^{{}^{\prime\prime}}-2f^{2}y^{{}^{\prime}3}-2y^{{ }^{\prime}}f}{2(1+y^{{}^{\prime}2}f)^{3/2}}\right)-\frac{2{\cal A}B^{2}\cos^{2 }(\theta^{\prime})}{m^{4}\rho^{2}}, \tag{82}\] \[p_{xx} = -\frac{L^{2}}{16\pi G_{N}r^{3}}\left(\Sigma L-\frac{2(2f-rf^{{} ^{\prime}})y^{{}^{\prime}}+f(4f-rf^{{}^{\prime}})y^{{}^{\prime}3}-2rfy^{{}^{ \prime\prime}}}{2(1+y^{{}^{\prime}2}f)^{3/2}}\right)\] (83) \[- \frac{{\cal A}B^{2}\cos^{2}(\theta^{\prime})}{m^{4}\rho^{2}},\] \[p_{rr} = -\frac{L^{2}}{16\pi G_{N}r^{3}}\left(\Sigma L-\frac{4y^{{}^{ \prime}}f-ry^{{}^{\prime}}f^{{}^{\prime}}}{2(1+y^{{}^{\prime}2}f)^{1/2}} \right)+\frac{{\cal A}B^{2}\cos^{2}(\theta^{\prime})}{m^{4}\rho^{2}}. \tag{84}\]
As we know, RS brane solutions are allowed for external magnetic fields, whereas solutions with external electric fields are not allowed, as discussed in [10]. Thus, following the discussion of [3], the electric polarization vector \(P^{\mu}\) and the magnetization \(M\), associated with the electric field \(E^{\mu}\) and the magnetic field \(B\) through the susceptibilities \(\chi_{EE}\) and \(\chi_{BB}\), are given by
\[P^{\mu}=\chi_{EE}E^{\mu},\ \ \ M=\chi_{BB}B, \tag{85}\] \[\chi_{EE}=\frac{\partial^{2}p_{rr}}{\partial E^{2}},\ \ \ \ \chi_{BB}=\frac{\partial^{2}p_{rr}}{ \partial B^{2}}. \tag{86}\]
Here we note that when \(\chi_{EE}=0\), the electric polarization vector vanishes (\(P^{\mu}=0\)). In fact, the external magnetic field does not produce polarization of the magnetic moments of the fluid. On the other hand, \(\chi_{BB}\) is given by
\[\chi_{BB}=\frac{\mathcal{A}\cos^{2}(\theta^{\prime})}{m^{4}\rho^{2}}. \tag{87}\]
We note that now we have a symmetry breaking, due to \(p_{xx,yy,ww}\neq p_{rr}\), and the space is anisotropic. If we want a system that describes a Pascal-type fluid (that is, \(p_{xx,yy,ww}=p_{rr}\)), we must have:
\[\frac{2f{y^{\prime}}_{Q}+f^{\prime}{y^{\prime}}_{Q}}{2r^{3}\sqrt{1+y^{\prime }_{Q}f}}=0, \tag{88}\]
which can be integrated, obtaining \(f{y^{{}^{\prime}}}^{2}=\) constant. Thus, the general solution, which yields a fluid-like theory on \(Q\), is provided by the profile
\[\Delta\,y_{Q}(r)=\int_{0}^{r}\frac{\cot(\theta^{\prime})ds}{\sqrt{f(s)}}. \tag{89}\]
This way, we have a profile for the tensor \(T_{\alpha\beta}\) defined in \(Q\), which is consistent with [13]. Replacing \(f{y^{{}^{\prime}}}^{2}=\cot^{2}(\theta^{\prime})\) in eqs. (82)-(84), we have
\[\epsilon = \frac{2L^{2}\cos(\theta^{\prime})}{16\pi\,G_{N}r^{3}}(1-\sqrt{f} )-2MB, \tag{90}\] \[p_{xx} = \frac{L^{2}\cos(\theta^{\prime})}{32\pi\,G_{N}r^{3}}\left(\frac{ 4f-rf^{\prime}-4\sqrt{f}}{\sqrt{f}}\right)-MB,\] (91) \[p_{rr} = \frac{L^{2}\cos(\theta^{\prime})}{32\pi\,G_{N}r^{3}}\left(\frac{ 4f-rf^{\prime}-4\sqrt{f}}{\sqrt{f}}\right)+MB, \tag{92}\]
where for the equation (87), we have that \(\mathcal{A}\) takes the form
\[\mathcal{A} = \frac{\cot(\theta^{\prime})}{3\sqrt{f}L}\left(\frac{r^{3}f^{ \prime}-2f^{\prime}-10f}{r^{2}f\sqrt{1-\cot^{2}(\theta^{\prime})}}+\frac{ \sqrt{1-\cot^{2}(\theta^{\prime})}(f^{\prime}+4)}{r^{4}f^{\prime}}+\frac{rf^{ \prime}-f+rff^{\prime}}{(1-\cot^{2}(\theta^{\prime}))^{3/2}}\right),\]
and we consider \(8\pi G_{N}\) in natural units. For this fluid, beyond the gravitational force due to the non-flat space, we have an external magnetic field that modifies the energy density and the equation of state. This effect is captured by a confining aspect of the Maxwell theory, encoded in the logarithmic behavior of the charge interaction potential. Together with the above, as we discussed in the previous sections, in the framework of the AdS/CFT correspondence IR divergences in AdS correspond to UV divergences in the CFT. Thus, in the UV regime \(r\to 0\), the fluid has a conformal behavior, allowing us to extract the following
\[\epsilon=\frac{2L^{2}\cos(\theta^{\prime})}{16\pi\,G_{N}r^{3}}-2MB, \tag{93}\] \[p_{xx}=\frac{L^{2}\cos(\theta^{\prime})}{32\pi\,G_{N}r^{3}}-MB,\] (94) \[p_{rr}=\frac{L^{2}\cos(\theta^{\prime})}{32\pi\,G_{N}r^{3}}+MB. \tag{95}\]
Together with
\[M=\chi_{BB}B,\ \ \ \ \chi_{BB}=\mathcal{A}T^{2}, \tag{96}\] \[\mathcal{A}=-\frac{1}{3(m\,\rho)^{4}}\frac{\cos^{2}(\theta^{ \prime})\cot(\theta^{\prime})}{\sqrt{1-\cot^{2}(\theta^{\prime})}}. \tag{97}\]
To validate the consistency of the behavior predicted by equations (96)-(97), we illustrate them in Fig. 14. It is worth noting that the depicted behaviors show striking similarity, even accounting for the residual contributions to the thermodynamic quantities. Just for completeness, the behavior of the energy density \(\epsilon\) as well as of \(p_{xx}\) and \(p_{rr}\) is represented in Fig. 15.
Now, we can see that the stress tensor trace \(\langle T^{\alpha}_{\ \ \alpha}\rangle\) vanishes, that is
\[\langle T^{\alpha}_{\ \ \alpha}\rangle=-\epsilon+3p_{xx}+p_{rr}=0, \tag{98}\]
as expected for a conformal fluid, for which the bulk viscosity \(\zeta/S\to 0\) in the regime of higher temperatures. Nevertheless, the bound only vanishes at the point \(\alpha=-\gamma\Lambda\). Thus, the tensor \(T_{\alpha\beta}\) residing on \(Q\) describes conformal magnetohydrodynamics (for more discussion on this point, see Ref. [4]). In the plasma phase, the fluid exhibits zero bulk viscosity (\(\zeta/S=0\)) when \(\alpha=-\gamma\Lambda\). At this specific point, Horndeski's theory simplifies to Einstein's gravity without a scalar field, see eqs. (17)-(18). The fluid phase at low temperatures still has shear viscosity, that is, \(\eta/S=1/(4\pi)\), which is the universal expected result for the usual Einstein gravity [61, 62]. Thus, as the temperature increases, the fluid goes over to the plasma phase with both \(\zeta/S\to 0\) and \(\eta/S\to 0\), see Fig. 13. If \(B=0\), we have \(\epsilon=4p\), which is a conformal behavior [13].
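For transparency, the vanishing of the trace can be verified directly from eqs. (93)-(95); writing \(\mathcal{C}\equiv L^{2}\cos(\theta^{\prime})/(16\pi G_{N}r^{3})\) (our own shorthand),

\[\langle T^{\alpha}_{\ \ \alpha}\rangle=-\left(2\mathcal{C}-2MB\right)+3\left(\frac{\mathcal{C}}{2}-MB\right)+\left(\frac{\mathcal{C}}{2}+MB\right)=0,\]

so the geometric and the magnetization contributions cancel separately.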
## VIII Conclusions and Discussions
In five dimensions, we explore fluid thermodynamics employing the holographic framework AdS\({}_{5}\)/BCFT\({}_{4}\), via a BH in the presence of an external magnetic field, together with a gravity model represented by the Horndeski theory (2). Together with the above, the five-dimensional boundary \(Q\) profile was presented, where, as in the four-dimensional situation [17], the parameter \(\gamma\) of the model (2) plays a providential role. The above allows us to obtain the \(\rho/B\) ratio, where the density \(\rho\) and the magnetic field \(B\) depend on the values of
Figure 14: The behavior of \(M\) (left panel) and \(1/\chi_{BB}\) (right panel) with respect to the temperature \(T\), as presented in equation (96). Here, we consider the values \(\alpha=8/3\), \(m=1/8\), \(B=(4/5)T\), \(\rho=1/4\), \(\Lambda=-1\), \(\theta^{\prime}=2\pi/3\) with \(\gamma=1\) (pink curve), \(\gamma=4\) (red dot dashed curve), \(\gamma=8\) (green thick curve).
Figure 15: The figure presents the behavior of \(\epsilon\) (eq. (93)), \(p_{xx}\) (eq. (94)), and \(p_{rr}\) (eq.(95)), with respect to \(T\). Here, we consider \(B=(4/5)T\), \(\rho=1/4\), \(\Lambda=-1\) where \(\epsilon\) is represented via the pink curve, \(p_{xx}\) through the red dot dashed curve, while that \(p_{rr}\) is represented by the green thick curve.
the Horndeski parameters and the polarization tensor.
For our analysis, we compute the free energy \(\Omega\) and the holographic stress tensor residing on the boundary \(Q\) by using holographic renormalization. This formulation sheds light on the distinct behaviors of the fluid's stress-energy tensor within the context of the fluid/gravity correspondence. These behaviors align seamlessly with the known thermodynamic and hydrodynamic attributes of a conformal fluid. In this setup, we computed several thermodynamic quantities, such as the entropy \(S\), heat capacity \(C_{V}\), sound speed \(c_{s}^{2}\), magnetization density \(M\), magnetic susceptibility \(\chi\), shear viscosity \(\eta\), and bulk viscosity \(\zeta\), placing a special emphasis on the anisotropic effects induced by the magnetic field \(B\). Our study delves into the thermodynamic properties of the fluid, particularly focusing on its behavior at higher temperatures, resembling a robust plasma under the influence of a magnetic field.
The holographic renormalization procedure enabled the determination of the free energy. However, while the stress-energy tensor (\(T_{\alpha\beta}^{Q}\)) yields comparable results in thermodynamics, an intriguing divergence is observed in hydrodynamic coefficients. This divergence is attributed to the presence of the Buchel bound. The bound signifies a slight violation of the lower limit on the ratio of bulk and shear viscosities. This phenomenon arises due to the Horndeski coupling with the scalar field, simulating behavior akin to the conformal plasma discussed in the QCD models [33].
With respect to the entropy \(S\), the presence of an external magnetic field \(B\) mirrors findings reported in [3; 4], while even with \(B=0\) a discernible state of disorder persists. This persistent disorder is attributed to the remaining entropy described by (61). Together with the above, we note that the specific heat \(C_{V}\) exhibits phase transitions, influenced by the spontaneous electrical polarization and the Horndeski parameters within the model, whether an external magnetic field is present or not. The presence of anisotropy due to the magnetic field and Horndeski gravity leads to differing transverse and longitudinal pressures, as well as to the corresponding speeds of sound. The induced anisotropy causes the squared speed of sound to decrease from \(c_{s}^{2}=1/3\) to \(c_{s}^{2}<1/3\), which agrees with [3; 4; 33]. Such anisotropy in the hydrodynamic quantities shows up as a renormalization flow from isotropy in the UV region of the BCFT\({}_{4}\) conformal plasma to anisotropy in the IR region of the AdS\({}_{5}\). With respect to the magnetization density \(M\) and the magnetic susceptibility \(\chi\), we can see that \(M=-\chi B\) and the RS brane behaves like a paramagnetic material. Additionally, \(\chi\) is not zero for zero magnetic field. Together with the aforementioned, to
comprehend the plasma phase within the model, the \(\eta/S\) and \(\zeta/S\) ratios are crucial. Here, we note that the behavior of \(\eta/S\) is influenced by the presence of the external magnetic field \(B\), while the \(\zeta/S\) ratio displayed in this context showcases results akin to those outlined in [31; 33]. However, it possesses a characteristic delineated in eq. (70): when \(\alpha=-\gamma\Lambda\), there is a distinctive feature, namely the non-formation of a condensate, implying \(\zeta=0\).
With all the above, the fluid/gravity correspondence with an external magnetic field \(B\) is presented. Via the renormalization procedure, we obtain the energy density \(\epsilon\) and the pressure \(p\). We find that the external magnetic field modifies the energy density and the equation of state, while the stress tensor trace \(\langle T^{\alpha}{}_{\alpha}\rangle\) becomes null in this scenario.
Our results are consistent with the seminal work of Gubser [33], performing a magneto hydrodynamic analysis. This study enabled us to derive both the equation of state and the stress-energy tensor for the conformal fluid. Within the spectrum of \(Q\) profiles (or stress-energy tensors \(T^{Q}_{\alpha\beta}\)), there is a single profile for which Horndeski gravity yields a fluid-like stress-energy tensor, in local thermodynamic equilibrium with the BH radiation.
###### Acknowledgements.
The authors would like to thank Matteo Baggioli, E. Capossoli, Saulo Mesquita, and Henrique Boschi Filho for the fruitful discussions.
## Appendix A Tensor perturbation to bulk and shear viscosity
In this section, the transport coefficients \(\zeta\) and \(\eta\) will be presented. To calculate these coefficients, we will perform tensor perturbations, carried out following the scenarios of [63; 64; 65; 66; 37; 67; 68; 36] for the bulk and shear viscosities in the Horndeski equation (11):
Bulk viscosity and entropy density ratio
For the bulk viscosity with the metric
\[ds^{2}=h_{00}[z,t]dt^{2}+h_{11}[z,t]\left(dx^{2}+dy^{2}+dw^{2} \right)+h_{22}[z,t]dz^{2},\] (A1) \[h_{00}[z,t]=-\frac{L^{2}f(z)}{z^{2}}\Pi(z,t),\] (A2) \[h_{11}[z,t]=\frac{L^{2}}{z^{2}}\chi(z,t),\] (A3) \[h_{22}[z,t]=\frac{L^{2}}{z^{2}f(z)}\Gamma(z,t),\] (A4)
and following the steps presented by the authors of [63; 64; 65; 37; 66; 67; 68], considering the first-order perturbations \(\delta^{(1)}g_{\mu\nu}=h_{\mu\nu}\), where
\[\delta^{(1)}R_{ij} = \partial_{\mu}(\delta^{(1)}\Gamma^{\mu}_{ij})-\partial_{i}( \delta^{(1)}\Gamma^{\mu}_{j\mu})+(\delta^{(1)}\Gamma^{\mu}_{\mu\rho})\Gamma^{ \rho}_{ij}\] (A5) \[+\Gamma^{\mu}_{\mu\rho}(\delta^{(1)}\Gamma^{\rho}_{ij})-(\delta^ {(1)}\Gamma^{\mu}_{i\rho})\Gamma^{\rho}_{\mu j}-\Gamma^{\mu}_{i\rho}(\delta^ {(1)}\Gamma^{\rho}_{\mu j}),\] \[\delta^{(1)}\Gamma^{k}_{ij} = \frac{1}{2}(\partial_{i}h^{k}_{j}+\partial_{j}h^{k}_{i}-\partial ^{k}h_{ij}),\] (A6)
we can write the transverse and traceless (TT) tensor perturbation in a general way for the bulk viscosity in Horndeski gravity, where \(\partial_{\alpha}h_{\mu\nu}=0\) and \(h\equiv\eta^{\mu\nu}h_{\mu\nu}=0\). After an algebraic combination of the equation \(\mathcal{E}_{tz}\) with \(\mathcal{E}_{xx}=\mathcal{E}_{yy}=\mathcal{E}_{ww}\), and keeping the terms in \(Tr(\chi\ddot{\chi})\), \(Tr(\chi^{{}^{\prime}}\ddot{\chi})\), and \(Tr(\chi^{{}^{\prime\prime}}\ddot{\chi})\) (for more details see [65]), we have
\[\alpha^{2}L^{4}z(\alpha-\gamma\Lambda)f^{2}(z)(3\chi^{{}^{\prime}}(z)-z\chi^{{ }^{\prime\prime}}(z))+12(\alpha+\gamma\Lambda)\gamma^{2}z^{2}f^{2}(z)\ddot{ \chi}(z)=0.\] (A7)
Using the ansatz:
\[\chi(z,t)=e^{-i\omega t}\varphi(z),\] (A8) \[\varphi(z)=\exp\left(-i\omega J\ln\left(\frac{144\gamma^{2}z^{4}f( z)}{\sqrt{3}\mathcal{G}}\right)\right),\] (A9)
one finds
\[J=\frac{1}{2\pi T}\sqrt{\frac{\alpha+\gamma\Lambda}{\alpha-\gamma\Lambda}}.\] (A10)
At this point we must evaluate the Lagrangian (2) using the metric (A1) and expand it up to a quadratic term, yielding the expression:
\[\mathcal{H}_{bulk}=-96\alpha\gamma L^{2}(5\alpha+3\gamma\Lambda)f(z)(1+\chi(z,t))^{2}+432\alpha\gamma^{2}z^{2}(1+\chi(z,t))\ddot{\chi}(z,t)+\] \[288\alpha^{2}\gamma L^{2}f(z)(1+\chi(z,t))\chi^{{}^{\prime}}(z,t)-24\alpha\gamma zf(z)(1+\chi(z,t))\chi^{{}^{\prime\prime}}(z,t)+\]
\[216\gamma^{2}z^{2}(\alpha+\gamma\Lambda)(1+\chi(z,t))\ddot{\chi}(z,t)+648\gamma^{2}zf^{2}(z)(\alpha+\gamma\Lambda)(1+\chi(z,t))\chi^{{}^{\prime}}(z,t)-\] \[18\gamma^{2}z^{2}f^{2}(z)(\alpha+\gamma\Lambda)\chi^{{}^{\prime 2}}(z,t), \tag{A11}\]
and collecting the quadratic terms, we have
\[{\cal H}_{bulk}=-{\cal M}_{1}\chi^{2}(z,t)+{\cal M}_{2}\chi(z,t)\ddot{\chi}(z,t)+{\cal M}_{3}\chi(z,t)\ddot{\chi}(z,t)+{\cal M}_{4}\chi(z,t)\chi^{{}^{\prime}}(z,t)-\] \[{\cal M}_{5}\chi(z,t)\chi^{{}^{\prime\prime}}(z,t)-{\cal M}_{6}\chi^{{}^{\prime}2}(z,t)+{\cal M}_{7}\chi(z,t)\chi^{{}^{\prime}}(z,t), \tag{A12}\]
with
\[{\cal M}_{1}=\frac{9\gamma^{2}}{8\alpha z^{3}}(5\alpha+3\gamma \Lambda)f(z),\quad{\cal M}_{2}=\frac{9\gamma^{3}}{\alpha L^{2}z},\quad{\cal M }_{3}=\frac{9\gamma^{3}}{4\alpha^{2}L^{2}z}(\alpha+\gamma\Lambda),\] \[{\cal M}_{4}=\frac{6\gamma^{2}f(z)}{z^{3}},\quad{\cal M}_{5}= \frac{9\gamma^{2}f(z)}{\alpha L^{2}z^{2}},\quad{\cal M}_{6}=\frac{3\gamma^{3} f^{2}(z)(\alpha+\gamma\Lambda)}{16\alpha^{2}L^{2}z},\] \[{\cal M}_{7}=\frac{27\gamma^{3}f^{2}(z)(\alpha+\gamma\Lambda)}{4 \alpha^{2}L^{2}z}.\]
The bulk viscosity is determined from the term \({\cal M}_{4}\chi(z,t)\chi^{{}^{\prime}}(z,t)\), given by
\[\zeta=\frac{\sqrt{3}}{24\pi}\frac{{\cal G}}{4z_{h}^{3}}\sqrt{\frac{\alpha+\gamma\Lambda}{\alpha-\gamma\Lambda}}, \tag{A13}\]
with
\[S=\frac{{\cal G}{\cal F}}{4z_{h}^{3}}, \tag{A14}\]
where
\[{\cal F} = 1+\frac{1}{T}\left(\frac{B^{2}\cos^{2}(\theta^{\prime})b(\theta^ {\prime})}{5m^{2}\rho^{2}}(4\pi T)^{3}-q(\theta^{{}^{\prime}})\left(\frac{\pi T }{3}\right)\right)\] \[- \frac{\sec(\theta^{\prime})}{\left(1-\frac{\xi}{4}\right)T}\left( -\frac{B^{2}\cos^{2}(\theta^{\prime})b(\theta^{\prime})}{2m^{2}\rho^{2}}(\pi T )^{2}-\frac{q(\theta^{{}^{\prime}})}{2}\right).\]
Finally, through an algebraic manipulation:
\[\frac{\zeta}{S}=\frac{\sqrt{3}}{24\pi{\cal F}}\sqrt{\frac{\alpha+\gamma\Lambda}{\alpha-\gamma\Lambda}} \tag{A15}\]
### Shear viscosity and entropy density ratio
For the \(\eta/S\) ratio, we consider a tensor perturbation in which the \(xy\) metric component fluctuates. The holographic dictionary maps off-diagonal fluctuations of the bulk metric onto off-diagonal components of the dual energy-momentum tensor. In the linear regime, such fluctuations are associated with shear waves in the boundary fluid, with the metric given by
\[ds^{2}=\frac{L^{2}}{z^{2}}\left(-f(z)dt^{2}+dw^{2}+dx^{2}+dy^{2}+2\Psi(z,t)dxdy+\frac{dz^{2}}{f(z)}\right). \tag{A16}\]
Substituting this metric in the Horndeski equation (\(\mathcal{E}_{\mu\nu}=0\)) for \(\mu=x\) and \(\nu=y\), one obtains:
\[\mathcal{P}_{1}\Psi^{{}^{\prime\prime}}(z,t)+\mathcal{P}_{2}\Psi^{{}^{\prime}}(z,t)+\mathcal{P}_{3}\ddot{\Psi}(z,t)=0\,, \tag{A17}\]
where we defined
\[\mathcal{P}_{1} =36\gamma^{2}(\alpha-\gamma\Lambda)f^{2}(z),\] \[\mathcal{P}_{2} =-\gamma(\alpha-\gamma\Lambda)f(z)(3\alpha L^{2}-6\gamma z^{4}/z_{h}^{4}),\] \[\mathcal{P}_{3} =-36\gamma^{2}z(3\alpha+\gamma\Lambda). \tag{A18}\]
Using the ansatz:
\[\Psi(z,t)=e^{-i\omega t}\Phi(z), \tag{A19}\] \[\Phi(z)=\exp\left(-i\omega K\ln\left(\frac{6\gamma^{2}z^{4}f(z)}{\mathcal{G}}\right)\right), \tag{A20}\]
one finds
\[K=\frac{1}{4\pi T}\sqrt{\frac{3\alpha+\gamma\Lambda}{\alpha-\gamma\Lambda}}. \tag{A21}\]
At this point we must evaluate the Lagrangian (2) using the metric (A16) and expand it up to quadratic terms:
\[\mathcal{H}_{shear}=P_{1}\Psi^{2}(z,t)+P_{2}\dot{\Psi}(z,t)+P_{3}\Psi^{{}^{\prime}2}(z,t)+P_{4}\Psi(z,t)\Psi^{{}^{\prime}}(z,t), \tag{A22}\]
where
\[P_{1} =\frac{2\gamma^{2}}{z^{5}}\left(-486\alpha\gamma\frac{z^{4}}{z_{h }^{4}}-\alpha L^{2}(\alpha-48\gamma\Lambda)\right),\] \[P_{2} =-\frac{108\gamma^{2}(3\alpha+\gamma\Lambda)}{z^{2}f(z)(7\alpha+ \gamma\Lambda)},\] \[P_{3} =\frac{6\gamma^{2}}{z^{3}f(z)},\] \[P_{4} =\frac{24\gamma^{2}(\alpha+\gamma\Lambda)}{z^{4}(7\alpha+\gamma \Lambda)}.\]
The viscosity is determined from the term \(P_{3}\Psi(z,t)\Psi^{{}^{\prime}}(z,t)\), which reads
\[\eta=\frac{1}{4\pi}\frac{\mathcal{G}}{4z_{h}^{3}}\sqrt{\frac{3\alpha+\gamma\Lambda}{\alpha-\gamma\Lambda}}, \tag{A23}\]
where
\[S=\frac{\mathcal{G}\mathcal{F}}{4z_{h}^{3}}, \tag{A24}\]
with
\[\mathcal{F} = 1+\frac{1}{T}\left(\frac{B^{2}\cos^{2}(\theta^{\prime})b(\theta^ {\prime})}{5m^{2}\rho^{2}}(4\pi T)^{3}-q(\theta^{{}^{\prime}})\left(\frac{\pi T }{3}\right)\right)\] \[- \frac{\sec(\theta^{\prime})}{\left(1-\frac{\xi}{4}\right)T} \left(-\frac{B^{2}\cos^{2}(\theta^{\prime})b(\theta^{\prime})}{2m^{2}\rho^{2 }}(\pi T)^{2}-\frac{q(\theta^{\prime})}{2}\right).\]
Thus, after algebraic manipulation, we obtain:
\[\frac{\eta}{S}=\frac{1}{4\pi\mathcal{F}}\sqrt{\frac{3\alpha+\gamma\Lambda}{\alpha-\gamma\Lambda}} \tag{A25}\]
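As a quick numerical illustration of the two closing ratios (this sketch is ours and not part of the derivation above), one can evaluate \(\zeta/S\) and \(\eta/S\) for sample values of \(\gamma\Lambda\) at fixed \(\alpha\); the factor \(\mathcal{F}\), which carries the entire magnetic-field dependence, is treated here as a plain input number rather than being recomputed from \(B\), \(\theta^{\prime}\) and \(T\).

```python
import numpy as np

def zeta_over_S(alpha, gamma_Lambda, F=1.0):
    # zeta/S = sqrt(3)/(24*pi*F) * sqrt((alpha + gamma*Lambda)/(alpha - gamma*Lambda))
    return np.sqrt(3.0) / (24.0 * np.pi * F) * np.sqrt((alpha + gamma_Lambda) / (alpha - gamma_Lambda))

def eta_over_S(alpha, gamma_Lambda, F=1.0):
    # eta/S = 1/(4*pi*F) * sqrt((3*alpha + gamma*Lambda)/(alpha - gamma*Lambda))
    return 1.0 / (4.0 * np.pi * F) * np.sqrt((3.0 * alpha + gamma_Lambda) / (alpha - gamma_Lambda))

alpha = 1.0
for gL in (-1.0, -0.5, 0.0, 0.5):      # gL stands for gamma*Lambda
    print(f"gamma*Lambda = {gL:+.1f}:  zeta/S = {zeta_over_S(alpha, gL):.4f},"
          f"  eta/S = {eta_over_S(alpha, gL):.4f}")
```

In particular, the bulk ratio vanishes at \(\gamma\Lambda=-\alpha\) (no condensate), and switching on the magnetic field only rescales both ratios through the common factor \(1/\mathcal{F}\).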
|
2304.04617 | VARS: Video Assistant Referee System for Automated Soccer Decision
Making from Multiple Views | The Video Assistant Referee (VAR) has revolutionized association football,
enabling referees to review incidents on the pitch, make informed decisions,
and ensure fairness. However, due to the lack of referees in many countries and
the high cost of the VAR infrastructure, only professional leagues can benefit
from it. In this paper, we propose a Video Assistant Referee System (VARS) that
can automate soccer decision-making. VARS leverages the latest findings in
multi-view video analysis, to provide real-time feedback to the referee, and
help them make informed decisions that can impact the outcome of a game. To
validate VARS, we introduce SoccerNet-MVFoul, a novel video dataset of soccer
fouls from multiple camera views, annotated with extensive foul descriptions by
a professional soccer referee, and we benchmark our VARS to automatically
recognize the characteristics of these fouls. We believe that VARS has the
potential to revolutionize soccer refereeing and take the game to new heights
of fairness and accuracy across all levels of professional and amateur
federations. | Jan Held, Anthony Cioppa, Silvio Giancola, Abdullah Hamdi, Bernard Ghanem, Marc Van Droogenbroeck | 2023-04-10T14:33:05Z | http://arxiv.org/abs/2304.04617v1 | # VARS: Video Assistant Referee System
###### Abstract
The Video Assistant Referee (VAR) has revolutionized association football, enabling referees to review incidents on the pitch, make informed decisions, and ensure fairness. However, due to the lack of referees in many countries and the high cost of the VAR infrastructure, only professional leagues can benefit from it. In this paper, we propose a Video Assistant Referee System (VARS) that can automate soccer decision-making. VARS leverages the latest findings in multi-view video analysis, to provide real-time feedback to the referee, and help them make informed decisions that can impact the outcome of a game. To validate VARS, we introduce _SoccerNet-MVFoul_, a novel video dataset of soccer fouls from multiple camera views, annotated with extensive foul descriptions by a professional soccer referee, and we benchmark our VARS to automatically recognize the characteristics of these fouls. We believe that VARS has the potential to revolutionize soccer refereeing and take the game to new heights of fairness and accuracy across all levels of professional and amateur federations.
+
Footnote †: Denotes equal contributions Contacts: [email protected], [email protected], [email protected]. Data/code available at www.soccer-net.org.
## 1 Introduction
Over the past decades, the technology used by referees in soccer has undergone a drastic evolution. Before the beginning of this century, referees and their assistants only relied on their own judgment, and the communication between them was based on eye contact and body language. The French and Scottish refereeing trios were the first ones to be linked through wireless mini-earphones during league matches, facilitating communication among them [56]. Nowadays, wireless headsets are essential pieces of equipment for referees worldwide, made mandatory for high-level competitions. Another important breakthrough in professional soccer was the introduction of goal-line technology, which uses a combination of cameras and sensors to determine whether the entire ball has crossed the goal line or not. This technology aims to prevent controversial goals such as the famous "ghost goal" scored by Geoff Hurst in the 1966 World Cup final against Germany, where the ball may not have fully crossed the line and led to England receiving the world champion title [17]. Subsequently, the International Football Association Board (IFAB) approved the introduction of extra referees, namely the video assistant referees (VAR), to prevent game-changing errors. More recently, artificial intelligence systems appeared for the first time during the World Cup 2022 in Qatar. Semi
Figure 1: **Video Assistant Referee System (VARS).** We propose an automated VARS for automatically classifying whether an action is a foul, determining the type of foul (_e.g._, ‘Tackling’, ‘Pushing’, etc.), and the appropriate punishment the player should receive for the foul (_i.e._, ‘No card’, ‘Yellow card’, or ‘Red card’) from a multi-view camera setup.
automated offside technology now supports the VAR to help referees make faster, more accurate, and more reproducible offside decisions [18]. This new system relies on \(12\) well-calibrated cameras to track the ball and the player's body pose, sending an automated offside alert to the video assistant referee inside the video operation room. This shows that soccer is moving towards more assistance or even automated systems to help referees make better decisions.
However, despite its intention to improve the accuracy of referee decisions, VAR has become a source of frustration and anger for many football fans around the world. Since we have a different video assistant referee for each game, we do not always have consistent decisions. Sometimes the VAR predicts different outcomes for similar situations in different games and leagues. Moreover, the implementation of the VAR technology and infrastructure requires a substantial financial investment, limiting its accessibility to only the top-tier leagues and clubs. As a result, semi-professional or amateur leagues are unable to benefit from the VAR due to financial constraints. Additionally, the worldwide shortage of referees makes it impossible to staff additional referees as Video Assistant Referees, except in the professional leagues.
In this work, we propose a first step towards a fully automated "Video Assistant Referee System" (VARS) which could support or replace the current VAR. We attempt to automatically predict all fouls and suggest appropriate sanctions to the players. In case the on-field referee makes a significant mistake, our VARS could intervene to suggest a revision. It is intended that, just like the regular VAR, our VARS serves as a support system for the referee, but the final decision remains in the hands of the on-field referee. To achieve this objective, we rely on multi-view uncalibrated camera video streams, which are already leveraged to edit broadcast games. Specifically, we release a new dataset comprising \(3{,}901\) actions with multi-view clips of \(5\) seconds around the action, annotated by a professional referee. We focus our analysis on the classification of foul types and evaluate their severity to identify the sanction for the player. Practically, our VARS analyses the different streams and combines the information from the multiple cameras. We show that using a multi-view system largely improves the performance compared to a single view and that we reach good performance on our video recognition tasks.
**Contributions.** We summarize our contributions as follows: **(i)** We publicly release _SoccerNet-MVFoul_, a new multi-view video dataset containing video clips captured by multiple cameras, annotated with \(10\) properties. **(ii)** We propose _VARS_, a new multi-camera video recognition system for classifying fouls and their severity. **(iii)** We propose a thorough study of using multiple views and how different types of camera views can influence the performance of VARS on two new video recognition tasks.
## 2 Related work
**Sports understanding.** As a research topic, sports video understanding has increased in popularity thanks to its challenging and fine-grained nature [41, 54]. Nowadays, most state-of-the-art automatic methods are based on deep learning and have shown impressive performance on tasks such as player detection and tracking [7, 39, 58], tactics analysis [53], pass feasibility [2] and prediction in soccer [25], talent scouting [11], or player re-identification in occluded scenarios [50]. Video classification started as a key area of research in this field [65], with approaches proposed to recognize specific actions [32, 45] or distinguish between different game phases [8]. With the growing interest in temporal activity localization [3], the task of action spotting [4, 6, 10, 48, 49, 67] has gained interest as it provides precise localization of specific actions within a soccer game.
The progress in those tasks was made possible thanks to the availability of large-scale datasets [28, 43, 46, 57, 66]. Giancola [20] introduced the SoccerNet dataset, which has grown to be the most extensive collection of data and annotations for video understanding in soccer, including benchmarks for \(10\) different tasks, ranging from broadcast understanding [12] and field understanding [5] to player understanding [9]. The SoccerNet team also organizes yearly competitions on these different tasks to foster research in the field [21]. The dataset presented in this paper extends SoccerNet by proposing a novel multi-view video collection including foul annotations for video recognition tasks.
**Video understanding.** For a long time, video understanding lagged behind image understanding due to the lack of large-scale video datasets such as ImageNet or CIFAR-100 [13, 34] in the video domain. However, the release of large video understanding datasets such as UCF101 [51], ActivityNet [3], YouTube-8M [1], and Kinetics [31] has led to a surge in popularity and interest in the field. Video understanding tasks include video classification [16, 30, 42], action recognition [47, 61], video captioning [19, 33, 63], and video generation [36].
The interest in developing video classification models that capture spatio-temporal information has significantly grown. Temporal Segment Network (TSN) [62] aggregates features across multiple temporal video segments to improve recognition performance. Tran [55] proposed a new spatio-temporal convolutional block R(2+1)D and analyze its effect on action recognition models. Recently, the Multiscale Vision Transformer (MViT) [15, 37] came as a way to combine the strengths of both convolutional neural networks (CNNs) and transformers for video classification, capturing both spatial and temporal attentions. In this work, we train different video representations to learn per-clip features that we aggregate from multiple views to identify the different properties of the fouls.
**Multi-view understanding.** Su _et al_. [52] introduced the idea of training image encoders to recognize 3D objects from multiple views, benefiting from mature 2D computer vision. Most efforts focused on informative aggregation between views, introducing cross-view confidence [29], group convolutions to learn rotation-equivariant representations [14], and graph convolutions to learn view aggregation [64]. Alternatively, MVTN [23] predicted the viewpoints from a differentiable 3D renderer. In the video domain, synthetic views (3D motion or optical flow) are created for single-stream videos as a way to obtain better representations learned in a self-supervised fashion [35, 59]. In this work, we leverage a simple multi-view pipeline for video understanding, trained in a fully supervised fashion, that incorporates multiple replay streams from soccer broadcast videos.
## 3 SoccerNet-MVFouls dataset
In this section, we introduce our novel multi-view foul classification soccer dataset, called _SoccerNet-MVFouls_.
Table 1 presents an overview of our dataset and compares it with other datasets that propose action recognition using either single or multiple views. Our dataset is the only one for multi-view video action recognition in sports, and the first dataset to focus specifically on referee's decisions.
_SoccerNet-MVFouls_ gathers \(3{,}901\) actions from \(500\) soccer games of six main European leagues, covering three seasons from 2014 to 2017, extracted from the SoccerNet dataset [12, 20]. Each action is composed of at least two videos depicting the live action and at least one replay. The actions are annotated with \(10\) different properties describing the characteristics of the foul from a referee's perspective (_e.g_. the severity of the foul, the type of foul, _etc_.). To ensure high-quality annotations, all these properties were manually annotated by a professional soccer referee with \(6\) years of experience and more than \(300\) official games. The referee watched the videos from all available views at any speed to accurately characterize the foul.
### Dataset collection
The dataset was collected in three steps: (i) we extracted the relevant action clips from soccer broadcast videos, (ii) we temporally aligned the clips related to the same action, and (iii) we annotated several foul properties.
**Clip extraction.** As a starting point, we used the SoccerNet-v2 dataset [12], which contains timestamp annotations of fouls for \(500\) full broadcast games. Furthermore, the SoccerNet dataset also provides annotations of the replays of some of the fouls, allowing us to retrieve, for the same action, different viewpoints. Since our goal is to design a multi-view video assistant referee system, we only keep actions for which we have access to at least two different points of view. In most cases, the extracted clips should cover sufficient information to determine all the foul properties. Also, to prevent bias towards the on-field referee's decision, the \(5\) second clips should not contain the decision of the referee (_e.g_. if the player is given a yellow card). Therefore, we extracted \(5\) second clips per action, starting \(3\) seconds before and ending \(2\) seconds after the timestamp
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**Dataset** & **Context** & **Task** & **Videos/Images** & **View** & **Type** \\ \hline Kinetics 400 [31] & Human actions & Classification & \(650{,}000\) & Single-view & Videos \\ NTU RGB+D 120 [38] & Human actions & Classification & \(114{,}480\) & Multi-view & Videos \\ Northwestern-UCLA Multiview [60] & Human actions & Classification & \(1{,}493\) & Multi-view & Videos \\ UWA3D Multiview II [44] & Human actions & Classification & \(900\) & Multi-view & Videos \\ SoccerNet-v2 (Actions) [21] & Soccer & Classification & \(110{,}458\) & Single-View & Videos \\ SoccerNet-v3 (Re-id.) [5] & Soccer & Re-identifcation & \(33{,}986\) & Multi-View & Images \\ \hline
**SoccerNet-MVFouls (Ours)** & Soccer (Fouls) & Classification & \(8{,}923\) & Multi-view & Videos \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Video action understanding datasets**. Comparative overview of relevant datasets for multi/single-view action recognition in videos/images. Our dataset is the only one providing multi-view videos for classification in sports, with 10 annotated properties per action.
Figure 2: **Example of a multi-view sequence from our dataset.** Each foul has at least (a) one live-action clip (usually taken from the main camera) and (b) one synchronized replay clip (usually a close-up view). We annotate the exact frame where the point of contact happens (red box). The ground-truth properties for this example are: “Offence”, “Challenge”, “No card”, “With contact”, “Upper body”, “Use of shoulder”, “Ball is not played”, “Tried to play the ball”, “No handball”, and “No handball offence”.
annotation. In the following, "live action clip" will refer to the clips taken from the main camera, while "replay clips" will denote all the replay clips, typically taken from closer shots. Figure 2 shows an example of such an extracted clip.
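For concreteness, the temporal window described above can be computed as in the short sketch below; the function name and the \(25\) fps frame rate are illustrative assumptions and not part of the released annotation tools.

```python
def foul_clip_window(event_time_s: float, fps: float = 25.0,
                     before_s: float = 3.0, after_s: float = 2.0):
    """Frame indices of a clip spanning `before_s` seconds before and
    `after_s` seconds after an annotated foul timestamp."""
    start_frame = int(round((event_time_s - before_s) * fps))
    end_frame = int(round((event_time_s + after_s) * fps))
    return max(start_frame, 0), end_frame

# Example: a foul annotated at t = 754.2 s of the broadcast
print(foul_clip_window(754.2))   # -> (18780, 18905), i.e. a 5 second window
```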
**Clips alignment.** We build a Multi-View Foul Annotator tool with a similar interface to a VAR room to ensure the quality of the annotations performed by our referee. At first, the referee is presented with all available clips of an action simultaneously on a grid layout. Our annotator tool enables users to modify the annotated point of contact (see Figure 2) for each clip individually and adjust the speed and offset of the clips to align them temporally, taking into account the fact that replays are frequently broadcasted at a slower speed. The referee may browse simultaneously the synchronized videos either at regular speed or frame by frame to accurately understand and describe the properties of the action. More information and an example of annotation using our annotator may be found in the supplementary material.
**Property annotations.** The SoccerNet-v2 dataset provides annotations for fouls and yellow/red cards given by the actual game referee. However, the on-field referee has only his own point of view to characterize the foul. Judging foul play incidents from the referee's position at playing time leads to an average error rate of \(14\%\)[40]. Our referee annotator has no time pressure and access to multiple perspectives, which results in more accurate decisions compared to the on-field referee who has to take a quick decision and only has a single view. To ensure a high-quality dataset and avoid any bias, our professional soccer referee manually annotated all properties without seeing the on-field referee's decision.
We defined several properties for each action that are necessary for the referee to take the final decision. These properties include (i) whether the clip contains a foul (an action which breaks/violates the _Laws of the Game_[27]), (ii) the class of the foul, (iii) the severity of the foul, (iv) whether the player plays the ball, (v) whether the player tries to play the ball, (vi) whether any player touches the ball with his hand or arm, deliberately or not, (vii) whether it is an offence according to the _Laws of the Game_[27], (viii) whether there is contact between two players, (ix) whether the foul relates to the upper or lower body, and, finally, (x) for the upper body, whether the arms or the shoulders are involved. We have special labels corresponding to grey areas: for property (i), we use the label "Between" when both "Foul" and "No foul" decisions are equally valid and there is no obvious decision. For property (iii), we use the labels "Borderline No card/Yellow card" and "Borderline Yellow card/Red card" to indicate a grey area where either "No card" or "Yellow card" (resp. "Yellow card" or "Red card") would be the correct decision.
### Dataset statistics
**Number of views.** On average, we have \(2.29\) clips per foul action, around \(75\%\) of them have two viewpoints (live and replay), \(20\%\) have a second replay, and around \(5\%\) have a third replay video. No foul has more than four views.
**Properties distribution.** Table 2 shows the distribution of the properties "Offence", "Severity" and "Type of foul". We can see that in all three cases, the distribution is highly unbalanced towards "No card", "Offence" and "Standing tackling", respectively. This analysis follows our intuition of soccer, where yellow and red cards are usually rarer than simple free-kicks given after a foul.
**Success rate of the referees.** As we only extracted fouls for which the on-field referee has given a foul in the game, we can analyze the success rate of the referees by analyzing the property "No offence". From the \(3901\) fouls given by the referees in the games, our referee annotated \(368\) fouls as "No offence", leading to an error rate of \(10.7\%\). "Standing tackling" and "Elbowing" are the most well classified with \(94\%\) success rate, as shown in Table 3. For the
\begin{table}
\begin{tabular}{l c|l c|l c} \hline \multicolumn{2}{c|}{**Fouls**} & \multicolumn{2}{c|}{**Offence**} & \multicolumn{2}{c}{**Offence Severity**} \\ \hline
**Class** & **Prob.** & **Class** & **Prob.** & **Class** & **Prob.** \\ \hline St. tackling & 43.6 & Offence & 85.8 & No card & 55.3 \\ Tackling & 15.6 & No offence & 10.7 & Yellow card & 26.6 \\ Challenge & 13.0 & Between & 3.4 & NC/YC & 15.2 \\ Holding & 12.5 & & & YC/RD & 1.7 \\ Elbowing & 5.9 & & & Red card & 1.1 \\ High leg & 3.5 & & & & \\ Pushing & 2.9 & & & \\ Dive & 0.9 & & & \\ \hline \end{tabular}
\end{table}
Table 2: **Distribution of classes in our SoccerNet-MVFouls dataset.** The distribution of the classes for the “Offence”, “Severity” and “Type of foul” properties is highly imbalanced. The distribution for the other properties is shown in supplementary. “St. tackling” stands for standing tackling.
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline & & \multicolumn{3}{c}{**Severity Distribution**} \\
**Foul Class** & **Succ. Rate** & **No Card** & **Yellow Card** & **Red Card** \\ \hline Standing Tackling & **0.94** & \(0.79\) & \(0.18\) & \(0.02\) \\ Tackling & \(0.87\) & \(0.37\) & **0.58** & **0.04** \\ High Leg & \(0.87\) & \(0.31\) & **0.63** & **0.06** \\ Holding & \(0.90\) & \(0.60\) & \(0.40\) & \(0.00\) \\ Pushing & \(0.84\) & **0.99** & \(0.01\) & \(0.00\) \\ Elbowing & **0.93** & \(0.43\) & **0.53** & \(0.03\) \\ Challenge & \(0.75\) & **0.94** & \(0.05\) & \(0.01\) \\ Dive & / & \(0.00\) & \(1.00\) & \(0.00\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Referees’ success rate and severity per foul class.** Referees are successful in most classes but struggle with “Challenge”. Some classes are more likely to return a card, _e.g_. “Tackling” or “High leg”. The success rate for “Dive” is unknown, as we cannot know whether the referee whistled for the foul or the dive.
remaining action classes, the referees have a similar success rate of approximately \(87\%\), except for the foul class "Challenge", where the referees have an error rate of \(25\%\). Our analysis is aligned with the finding of Mallo _et al_. [40].
**Severity for different foul classes.** The distribution of the severity among different foul classes can provide insight into how often certain types of fouls result in a card. The results are presented in Table 3. "Tackling", "High leg", and "Elbowing" are three types of fouls that very often result in a yellow card, as they represent fouls that are dangerous for opponents. Contrarily, some classes like "Pushing" or "Challenge" are very unlikely to get a yellow or red card.
## 4 Methodology
Our VARS is a multi-view video architecture that automatically identifies different properties of an action. We illustrate our proposed VARS in Figure 3.
### Classification tasks
We formally define two tasks for our dataset.
**Task 1: Fine-grained foul classification.** Given multiple clips of the same foul instance, the objective is to classify the foul into one of \(8\) fine-grained foul classes: "Standing tackling", "Tackling", "High leg", "Pushing", "Holding", "Elbowing', "Challenge", "Dive/Simulation".
**Task 2: Offence severity classification.** Given multiple clips of the same foul instance, the objective is to classify whether the foul constitutes an offence, as well as the severity of the foul. We have defined four classes: "No offence", "Offence + No card", "Offence + Yellow card", and "Offence + Red card". We put aside clips labeled "Between" as well as the clips annotated as "Borderline". Therefore, for this particular task, we use a subset of our SoccerNet-MVFoul dataset.
### Video Assistant Referee System (VARS)
We propose a novel Video Assistant Referee System (VARS) for the task of video recognition from multiple camera views. The pipeline of the VARS is presented in Figure 3. Our VARS takes multiple video clips denoted by \(\mathbf{v}=\{v_{i}\}_{1}^{n}\) as input, showing the same action from \(n\) different views. A video \(v_{i}\) is fed into a video encoder \(\mathbf{E}\) with parameters \(\theta_{E}\) to extract a vector \(f_{i}\) containing the spatio-temporal features for that specific view:
\[f_{i}=\mathbf{E}_{\theta_{\mathbf{E}}}(v_{i})\enspace. \tag{1}\]
We aggregate the feature vectors through a function \(\mathbf{A}\) that outputs a single multi-view representation \(\mathbf{R}\) following:
\[\mathbf{R}=\mathbf{A}(\{f_{i}\}_{i=1}^{n})\enspace, \tag{2}\]
with \(\mathbf{A}\) being a max or mean aggregation function. For the single-task classifier, we input the pooled features through a classification head \(\mathbf{C}\) with parameters \(\theta_{C}\). VARS predicts the final class from the maximum probability score of the classification head, as given by:
\[\mathbf{VARS}=\arg\max\mathbf{C}_{\theta_{\mathbf{C}}}(\mathbf{R})\enspace. \tag{3}\]
We train our model to minimize the following loss:
\[\mathcal{L}=\mathbf{L}(\mathbf{C}_{\theta_{\mathbf{C}}}\,\left(\mathbf{A}(\{ \mathbf{E}_{\theta_{\mathbf{E}}}(v_{i})\}_{i=1}^{n})\right),y)\enspace, \tag{4}\]
with \(\mathbf{L}\) being the cross entropy loss function, and \(y\) the ground truth associated to \(\{v_{i}\}_{i=1}^{n}\). For the offence severity classification, VARS has to understand the game of soccer in order to correctly classify fouls into "No card", "Yellow card", and "Red card". In fact, bringing contextual information about the type of foul inside the network is essential to determine the severity of the offence. As the foul and offence classifiers share common features, we train a model to perform both tasks simultaneously. Our multi-task VARS learns to leverage these shared features to improve its predictions for both tasks. For the multi-task classifier, we define two heads, \(\mathbf{C^{\text{foul}}}\) and \(\mathbf{C^{\text{off}}}\), respectively for the tasks of fine-grained foul classification and offence severity classification. From the probability vector of each task, the VARS will take the maximum as the final prediction:
\[\mathbf{VARS^{t}}=\arg\max\mathbf{C}_{\theta_{\mathbf{C^{t}}}}^{\mathbf{t}}( \mathbf{R})\quad\forall\mathbf{t}\in\{\text{foul},\text{off}\}\enspace. \tag{5}\]
We train our model by minimizing both tasks loss with:
\[\alpha_{\text{foul}}\mathcal{L}^{\text{foul}}+\alpha_{\text{off}}\mathcal{L}^{ \text{off}}\enspace. \tag{6}\]
By choosing different values for \(\alpha\), we can assign more or less importance to tasks. This scaling is necessary when the losses have significantly different magnitudes. In the case of our two tasks, the losses have a similar order of magnitude, so we typically select \(\alpha_{foul}=\alpha_{off}=1\).
Figure 3: **VARS: Video Assistant Referee System.** From multi-view video clips input, our system encodes per-view video features (\(\mathbf{E}\)), aggregates the view features (\(\mathbf{A}\)), and classifies different properties of the foul action (\(\mathbf{C}\)).
**Video encoder E.** We considered different encoders to extract features from the video clips: **(i)** ResNet [24] may be used on videos by running the network on each frame independently and then using a max or mean pooling operation on the features across the frames to obtain a single feature vector that represents the entire video. While this approach works well for extracting spatial features, it does not capture temporal dynamics. **(ii)** R(2+1)D [55] extends the 2D CNN architecture with an additional temporal convolutional layer that operates on a sequence of frames to capture the temporal dynamics of the video. The advantage compared to ResNet is that it both captures spatial and temporal features directly. **(iii)** MViT [15, 37] integrates a multiscale feature representation with a transformer-based architecture to capture both spatial and temporal information from video clips. The feature encoders are typically pre-trained on ImageNet [13] (ResNet) or Kinetics [31] (R2+1D and MViT).
**Multi-view aggregator A.** To combine the extracted features from multiple views, we introduce two different pooling strategies [22], in particular: **(i)** Mean pooling takes the average value for each feature, and **(ii)** Max pooling which takes the maximum value per feature.
**Classification heads C.** Our classification heads consist of two dense layers with softmax activation. The output is a probability vector with dimensions that match the number of classes in the classification problems.
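The PyTorch-style sketch below illustrates how the pieces described above fit together: a per-view encoder \(\mathbf{E}\), a max/mean aggregator \(\mathbf{A}\), and the two classification heads \(\mathbf{C}\), together with the multi-task loss of Eq. (6) for \(\alpha_{\text{foul}}=\alpha_{\text{off}}=1\). It is a minimal illustration rather than the exact configuration used in our experiments: the encoder is left abstract, and the feature dimension and head widths are assumptions.

```python
import torch
import torch.nn as nn

class VARSSketch(nn.Module):
    """Minimal multi-view pipeline (E -> A -> C) from Figure 3."""
    def __init__(self, encoder: nn.Module, feat_dim: int = 512,
                 n_foul: int = 8, n_off: int = 4, agg: str = "max"):
        super().__init__()
        self.encoder = encoder          # E: one video clip -> one feature vector
        self.agg = agg                  # A: max or mean pooling over the views
        self.head_foul = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, n_foul))
        self.head_off = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, n_off))

    def forward(self, views):
        # views: (batch, n_views, C, T, H, W)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1)).view(b, v, -1)   # per-view features
        pooled = feats.max(dim=1).values if self.agg == "max" else feats.mean(dim=1)
        return self.head_foul(pooled), self.head_off(pooled)

def multitask_loss(logits_foul, logits_off, y_foul, y_off, a_foul=1.0, a_off=1.0):
    # Weighted sum of the two cross-entropy losses, as in Eq. (6)
    ce = nn.CrossEntropyLoss()
    return a_foul * ce(logits_foul, y_foul) + a_off * ce(logits_off, y_off)
```

A placeholder encoder such as `torchvision.models.video.r2plus1d_18` with its final `fc` layer replaced by `nn.Identity()` returns \(512\)-dimensional per-clip features compatible with `feat_dim` above.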
## 5 Experiments
### Experimental setup
**Training details.** For both classification tasks, we leverage clips of \(16\) frames, spanning temporally for \(1\) second, with a spatial dimension of \(224\times 398\) pixels. Specifically, the clips contain \(8\) frames before the foul and \(8\) frames after the foul. The encoders \(\mathbf{E}\) are pre-trained as detailed in the methodology, and the classifier \(\mathbf{C}\) is trained from scratch, while both are trained in an end-to-end fashion. We use a cross-entropy loss, optimized with Adam with an exponential decreasing learning rate starting at \(10^{-4}\) and a batch size of \(8\). The model starts overfitting after \(10\) epochs, and it takes around \(9\) hours to train on a single Nvidia V100 GPU.
**Evaluation metrics.** We report the classification accuracy, which is defined as the ratio of actions correctly classified with respect to the total number of actions. We also provide the top-2 accuracy (where a sample is considered well classified if the class appears in the top two highest confidence predictions) to get more insight into the model's performance. As our dataset is unbalanced, we also provide the balanced accuracy, which is defined as follows:
\[\text{Balanced Accuracy (BA)}=\frac{1}{N}\sum_{i=1}^{N}\frac{TP_{i}}{P_{i}}\enspace, \tag{7}\]
with \(N\) the number of classes, \(TP\) (True Positives) is the number of times where the model correctly predicted the class \(i\) and \(P_{i}\) (Positives) is the number of ground-truth samples for that class in the dataset.
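A direct transcription of Eq. (7) is sketched below; the handling of classes that are absent from the ground truth is our own choice and not specified above.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes):
    """Balanced accuracy of Eq. (7): the average per-class recall TP_i / P_i."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = []
    for c in range(n_classes):
        mask = (y_true == c)
        if mask.sum() == 0:            # class not present in the ground truth
            continue
        recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Example with 3 classes and an imbalanced ground truth
print(balanced_accuracy([0, 0, 0, 1, 2], [0, 0, 1, 1, 0], n_classes=3))  # ~0.56
```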
### Main Results
**Task 1: Fine-grained foul classification.** Our results may be found in Table 4. By extracting spatio-temporal features with MViT, we achieve significant improvements in performance compared to ResNet and R(2+1)D. This indicates that using a more advanced feature encoder can significantly enhance the model's ability to identify and classify the type of foul. The influence of the pooling method on the performance is however not significant, although max pooling shows slightly better results. In general, max pooling might be better when not all views are equally informative. Taking the max values helps identify the most important features for the most informative views while ignoring less useful information. In contrast, mean-pooling takes into account the information from all views, including those with a poor perspective. Overall, the best performance is obtained by using MViT as video encoder and max pooling.
**Task 2: Offence severity classification.** For the offence severity classification, we study the same feature encoders and pooling techniques. The top part of Table 5 shows the results obtained by our single-task classifier. Regardless of the used feature extractor or pooling technique, the model has more difficulties in classifying the actions. These difficulties are mainly due to two factors. First, the dataset exclusively consists of actions that were awarded a free kick by the on-field referee. As a result, the "No offence" actions are visually similar to a foul, and not to clear "No offence" actions. The model often struggles to differentiate these actions from actual fouls, which can be further seen in the Supplementary Material. Secondly, the visual appearance of an offence with no card, yellow card, or red card can vary greatly. In Figures 4(a) and 4(b), we compare two frames of two different foul classes that have little visual similarity. However, in both cases, the defender acted with disregard
\begin{table}
\begin{tabular}{l l|c c c} \hline \hline
**Feature Extractor** & **Pooling** & **Acc. @1** & **Acc. @2** & **BA** \\ \hline ResNet [24] & Mean & \(0.31\) & \(0.56\) & \(0.28\) \\ ResNet [24] & Max & \(0.32\) & \(0.60\) & \(0.28\) \\ R(2+1)D [55] & Mean & \(0.31\) & \(0.55\) & \(0.34\) \\ R(2+1)D [55] & Max & 0.32 & 0.56 & \(0.33\) \\ MViT [15, 37] & Mean & \(0.40\) & \(0.65\) & **0.45** \\ MViT [15, 37] & Max & **0.47** & **0.69** & \(0.43\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Main results for the multi-view video foul classification.** We compare three feature encoders and two pooling methods. The best performance is obtained with MViT and a max pooling between the views. BA indicates the balanced accuracy after normalizing by the frequency of that class.
for the safety of his opponent and therefore resulting in a yellow card. In contrast, Figures 4(c) and 4(d) depict fouls that are visually more similar than the previous two fouls, yet one resulted in "No card" while the other resulted in a "Red card". Minor differences such as the point of contact, the speed of the foul, the distance to the ball, and the intention to play the ball or not, can lead to different classifications.
**Multi-task classifier.** Training a multi-task classifier on related tasks allows the model to utilize the learned information from one task to improve the performance on other tasks. In the bottom part of Table 5, we can see that the multi-task classifier outperforms the single-task classifier regardless of the feature encoder or pooling technique for offence severity classification. Using ResNet to extract spatial features for the type of foul and the offence severity classification does not perform well for either task. The body movements over time and the speed of the players involved in an action are important factors that can greatly impact the outcome of the classification. The multi-task classifier combined with MViT as encoder and max pooling shows promising results in classifying actions into their corresponding offence severity class. Furthermore, the multi-task classifier shows similar results as obtained for the single-task type of foul classification.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \multirow{2}{*}{**Performance**} & \multicolumn{5}{c}{**Viewing Setup**} \\ & L & R1 & L+R1 & R1+R2 & L+R1+R2 \\ \hline Acc\({}_{T1}\) & 0.31 & 0.47 & 0.50 & 0.56 & **0.57** \\ Acc\({}_{T1}\)@2 & 0.54 & 0.68 & 0.70 & 0.69 & **0.72** \\ BA\({}_{T1}\) & 0.29 & 0.38 & 0.36 & **0.44** & 0.39 \\ \hline Acc\({}_{T2}\) & 0.38 & 0.39 & **0.43** & 0.39 & 0.40 \\ Acc\({}_{T2}\)@2 & 0.67 & 0.70 & 0.72 & 0.73 & **0.75** \\ BA\({}_{T2}\) & 0.38 & 0.27 & 0.34 & 0.27 & **0.39** \\ \hline \end{tabular}
\end{table}
Table 6: **Single _vs. multi-view classification**. We compare the performance for single vs multi-views and the influence of the type of view (Live \(L\) and replay \(R\)). We use MViT [15, 37] as feature extractor and max pooling. For both tasks, the best performance is mostly obtained with all three views. BA stands for balanced accuracy, T1 stands for task 1 (foul classification) and T2 stands for task 2 (offence severity classification).
Figure 4: **Example of fouls.** (a) The defender uses his arm as a tool to gain an unfair advantage and ignores the potential danger for his opponent. (b) The defender makes a tackle while taking the risk of his opponent being injured. (c) The defender tries to play the ball in no dangerous way. (d) The defender has no intention to play the ball and only aims to harm his opponent.
Figure 5: **Qualitative results.** VARS predictions for different combinations of views as input.The best performance is obtained with the two replay views.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline
**Feature Extractor** & **Pooling** & **Task** & **Acc.** & **BA** \\ \hline ResNet [24] & Mean & Single & \(0.25\) & \(0.26\) \\ ResNet [24] & Max & Single & \(0.22\) & \(0.25\) \\ R(2+1)D [55] & Mean & Single & \(0.28\) & \(0.30\) \\ R(2+1)D [55] & Max & Single & \(0.27\) & \(0.29\) \\ MViT [15, 37] & Mean & Single & \(0.32\) & \(0.23\) \\ MViT [15, 37] & Max & Single & \(0.29\) & \(0.27\) \\ \hline ResNet [24] & Mean & Multi & \(0.34\) & \(0.25\) \\ ResNet [24] & Mean & Multi & \(0.32\) & \(0.24\) \\ R(2+1)D [55] & Mean & Multi & \(0.34\) & \(0.30\) \\ R(2+1)D [55] & Max & Multi & \(0.39\) & \(0.31\) \\ MViT [15, 37] & Mean & Multi & \(0.38\) & \(0.31\) \\ MViT [15, 37] & Max & Multi & **0.43** & **0.34** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Multi-view video offence and severity classification**. We evaluate our VARS with different feature encoders and pooling methods on a single and multi-task setup. BA stands for the balanced accuracy.
### Detailed analysis
**Single _vs._ multi-view analysis.** We now study the improvement of using multiple views over a single view. To do so, we first created a subset of the test set for which we have clips with two replays and one live action. As evidenced by the top part of Table 6, the type of view has a significant impact on the VARS's ability to detect the correct type of foul. Although the live-action view alone provides worse performance than the replays, combining the live-action view with a replay improves the accuracy slightly compared to using only the replay view in both tasks. This implies that even a poor-quality view can slightly improve the performance. A highly informative view can boost the performance, as we can see by comparing the two replays with a single replay for the type of foul classification. For the offence severity classification, the VARS seems to benefit more from live actions compared to replays. One possible explanation is that for the live actions, the VARS takes into account the position of the action on the field, allowing it to learn that the likelihood of a 'No card' or 'Yellow card' is higher in specific areas of the field. For both tasks, we achieved better results by using multiple views, and for most of the metrics, the best performance was achieved by using a live-action clip with two replays. This demonstrates the effectiveness of using multiple views to improve model performance in the type of foul and offence severity classification.
In Figure 5, we show the predictions of the foul classification models while changing the number and type of views. By only using the live action, the VARS is not able to detect the correct type of foul, as confirmed in Table 6. By adding \(1\) or \(2\) replays as input to the model, it is able to detect the foul class with a confidence score ranging from \(76\%\) to \(95\%\). By analyzing the confidence scores, we can see that the view has a big impact on the prediction, which agrees with the results found in Table 6.
**Temporal analysis.** We investigated the temporal context needed to identify fouls and offence severity. In particular, we increased the video length, by reducing the frame rate, in order to maintain the same number of frames to process. Table 7 shows the results of the temporal analysis. We observed that as we increase the temporal context while decreasing the frame rates, the performance of our model decreases. This is likely because the most useful information for our classification tasks is concentrated within a narrow temporal window immediately preceding and following the foul. Adding more temporal context to the model results in the inclusion of frames that do not offer much additional information. By default, we used a frame rate of 16 frames per second, with a temporal context of 1 second, which seemed to strike the best balance between capturing sufficient temporal information and excluding unnecessary frames.
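As an illustration of this sub-sampling (the \(25\) fps source rate and the index arithmetic below are assumptions for the sketch, not details given above), one can pick \(16\) frames centred on the foul with a stride that emulates the lower frame rates of Table 7.

```python
def sample_frames(foul_frame: int, fps_out: float, n_frames: int = 16,
                  fps_src: float = 25.0):
    """Indices of `n_frames` source frames around the foul, spaced so that the
    resulting clip plays as if it had been recorded at `fps_out`."""
    step = fps_src / fps_out
    offsets = [round((i - n_frames // 2) * step) for i in range(n_frames)]
    return [foul_frame + o for o in offsets]

for fps in (5, 8, 12, 16):
    idx = sample_frames(1000, fps)
    print(f"{fps:>2} fps -> context {16 / fps:.1f} s, source frames {idx[0]}..{idx[-1]}")
```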
**Per class analysis.** We further analyze the performance per class. The confusion matrices for both tasks are in the supplementary material. We saw that performance varies considerably across classes. For the fine-grained foul classification, the VARS struggles to distinguish between illegal arm movements due to their shared characteristics. It performs well in detecting "Tackling", but often confuses it with "Dive" due to the challenge of distinguishing genuine from deceptive actions in soccer games. The most difficult class for the VARS is "Challenge", as it shares visual similarities with many other classes, making proper generalization during training difficult. Regarding offence classification, the VARS tends to make bad predictions in neighboring classes of the ground truth. For instance, it may classify a foul as "Offence + Yellow card" instead of "Offence + No card". However, the model struggles with "Offence + Red card" due to the limited number of samples in the dataset.
## 6 Conclusion
In summary, our Video Assistant Referee System (VARS) has the potential to bring about a significant improvement in soccer refereeing by ensuring fairness and accuracy at all levels of professional and amateur play. VARS utilizes the latest advances in multi-view video analysis and provides referees with real-time feedback and assists them in making informed decisions that can impact the outcome of soccer games. To prove the effectiveness of VARS, we introduced a novel dataset, SoccerNet-MVFoul, that curates relevant fouls in soccer broadcasts from multiple views and includes foul properties. Our benchmarking results demonstrate that VARS can recognize foul characteristics based on multi-view video processing. By integrating the specific requirements of referees, VARS offers an unbiased and reliable decision-making process for soccer matches.
**Acknowledgement.** This work was partly supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding and the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI). A. Cioppa is funded by the F.R.S.-FNRS.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline
**Frame rate (FPS)** & **5** & **8** & **12** & **16** \\
**Temporal context** & **3.2s** & **2.0s** & **1.3s** & **1.0s** \\ \hline Accuracy (Foul class.) & \(0.36\) & \(0.38\) & \(0.44\) & \(0.47\) \\ Accuracy (Off. sev. class.) & \(0.39\) & \(0.41\) & \(0.43\) & \(0.43\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Temporal analysis.** We experiment with various temporal contexts while maintaining a fixed number of 16 frames. In all scenarios, we include 8 frames before and after the foul.
2304.03178 | Minimal length scale correction in the noise of gravitons | In this paper we have considered a quantized and linearly polarized
gravitational wave interacting with a gravitational wave detector
(interferometer detector) in the generalized uncertainty principle (GUP)
framework. Following the analysis in Phys. Rev. Lett. 127 (2021) 081602
(https://link.aps.org/doi/10.1103/PhysRevLett.127.081602), we consider a
quantized gravitational wave interacting with a gravitational wave detector
(LIGO/VIRGO etc.) using a path integral approach. Although the incoming
gravitational wave was quantized, no Planck-scale quantization effects were
considered for the detector in earlier literatures. In our work, we consider a
modified Heisenberg uncertainty relation with a quadratic order correction in
the momentum variable between the two phase space coordinates of the detector.
Using a path integral approach, we have obtained a stochastic equation
involving the separation between two point-like objects. It is observed that
random fluctuations (noises) and the correction terms due to the generalized
uncertainty relation plays a crucial role in dictating such trajectories.
Finally, we observe that the solution to the stochastic equation leads to time
dependent standard deviation due to the GUP insertion, and for a primordial
gravitational wave (where the initial state is a squeezed state) both the noise
effect and the GUP effects exponentially enhance which may be possible to
detect in future generation of gravitational wave detectors. We have also given
a plot of the dimensionless standard deviation with time depicting that the GUP
effect will carry a distinct signature which may be detectable in the future
space based gravitational wave observatories. | Soham Sen, Sunandan Gangopadhyay | 2023-04-05T07:10:21Z | http://arxiv.org/abs/2304.03178v2 | # Minimal length scale correction in the noise of gravitons
###### Abstract
In this paper we have considered a quantized and linearly polarized gravitational wave interacting with a gravitational wave detector (interferometer detector) in the generalized uncertainty principle (GUP) framework. Following the analysis in Phys. Rev. Lett. 127 (2021) 081602, we consider a quantized gravitational wave interacting with a gravitational wave detector (LIGO/VIRGO etc.) using a path integral approach. Although the incoming gravitational wave was quantized, no Planck-scale quantization effects were considered for the detector in earlier literatures. In our work, we consider a modified Heisenberg uncertainty relation with a quadratic order correction in the momentum variable between the two phase space coordinates of the detector. Using a path integral approach, we have obtained a stochastic equation involving the separation between two point-like objects. It is observed that random fluctuations (noises) and the correction terms due to the generalized uncertainty relation plays a crucial role in dictating such trajectories. Finally, we observe that the solution to the stochastic equation leads to time dependent standard deviation due to the GUP insertion, and for a primordial gravitational wave (where the initial state is a squeezed state) both the noise effect and the GUP effects exponentially enhance which may be possible to detect in future generation of gravitational wave detectors. We have also given a plot of the dimensionless standard deviation with time depicting that the GUP effect will carry a distinct signature which may be detectable in the future space based gravitational wave observatories.
_Introduction:_ It is a well known fact that a single particle freely falling under the effect of gravity follows the geodesic equation, and in the case of a pair of point particles, they follow the geodesic deviation equation. These trajectories of freely falling objects under the effect of gravity are deterministic in nature, and the same holds for the geodesic deviation equation. These facts have been verified experimentally at a classical level; however, their status at the quantum level has been the subject of intense theoretical research. Although the general theory of relativity gives a perfect description of gravity at a macroscopic level, the quantum nature of gravity is still unknown to the physics community. It is expected that gravity must have a quantum nature and therefore the search for a quantum theory of gravity is pursued. Recently there has been a very important work [1; 2; 3] which demonstrated the effect on falling bodies due to the quantization of the gravitational field. It is observed that the dynamics of the separation of a pair of falling bodies has a probabilistic nature which is different from the deterministic nature observed in the case of classical Einstein gravity. It is also observed that the separation of two particles follows a Langevin-like stochastic equation with a random fluctuation term, which is claimed as the quantum generalization of the classical geodesic deviation equation. The investigation reveals that the linearized quantum theory of gravity can indeed have a startling effect on the motion of falling bodies. In this regard, we would like to stress that low energy quantum gravity corrections have been predicted by all the existing candidates for a quantum theory of gravity, such as string theory [4; 5], loop quantum gravity [6; 7], and noncommutative geometry [8]. All of them indicate the presence of an observer-independent fundamental length scale (the Planck length \(\approx 10^{-35}\) m) in nature. By modifying the Heisenberg uncertainty principle (HUP), one can incorporate this minimal length. This modified HUP is known as the generalized uncertainty principle (GUP). The relation between gravity and this minimal length scale was first shown in [9; 10] and later in [11]. In these gedanken experiments, a very strong signature of the existence of this minimal length scale was obtained. There have also been several investigations in different areas of theoretical physics to exploit this GUP framework, such as black hole physics [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22], harmonic oscillators [23; 24], optomechanical systems [25; 26; 27] and gravitational wave bar detectors [28; 29; 30]. Recently there have been a few efforts to construct a laboratory-based test to investigate the effects of GUP in optomechanical systems [31; 32]. The general structure of the modified uncertainty relation is given by the following relation [33; 34; 35]
\[\Delta\xi_{i}\Delta\pi_{i}\geq\frac{\hbar}{2}\left[1+\gamma\left(\Delta\pi^{2 }+\langle\pi\rangle^{2}\right)+2\gamma\left(\Delta\pi_{i}^{2}+\langle\pi_{i }\rangle^{2}\right)\right]\]
where the index \(i\) runs from \(1\) to \(3\) and \(\xi_{i}\) and \(\pi_{i}\) are the phase space position and the conjugate momenta. The GUP parameter \(\gamma\) in terms of the dimensionless parameter \(\gamma_{0}\) can be recast as \(\gamma=\frac{\gamma_{0}}{m_{p}^{2}c^{2}}\), where \(m_{p}\) denotes the Planck mass and \(c\) denotes the speed of light. The GUP modified variables \((\tilde{\xi},\tilde{\pi})\) in terms of the old variables \((\xi,\pi)\) are expressed as
\[\tilde{\xi}=\xi\,\ \tilde{\pi}=\pi(1+\gamma\pi^{2}) \tag{1}\]
where we have considered only one spatial dimension. Owing to the above discussion, it is natural to carry out
the analysis in [1; 2; 3] in the GUP framework. This would then incorporate effects from both the quantization of linearized gravity and the modification of Heisenberg's uncertainty relation. The model considered in [1; 2; 3] is as follows. The mirrors in the arms of the detector are considered as two freely falling particles. One of the particles has a heavier mass and the other particle has a comparatively smaller mass. The incoming gravitational wave is treated quantum mechanically, and a perturbative approach is taken to include the quantum effects. Our analysis is significantly different from the previous analysis [1; 2; 3] in the sense that we have now considered quantum gravity effects in the detector arm as well, by considering the existence of a minimal length scale. In order to truly incorporate the effects of the minimal length scale (having a direct connection to gravity), we have replaced the phase space coordinates of the detector part in the particle-graviton interaction Hamiltonian by GUP modified phase space coordinates. In order to analyze the system, we have considered a path integral approach and calculated the Feynman-Vernon influence functional [36] for the modified Hamiltonian. The incoming states have been considered as minimum uncertainty states, or coherent states, which most closely resemble the classical gravitational wave. In our analysis, we observe, as in [1; 2; 3], that the separation between the two particles obeys a Langevin-like stochastic equation. However, we find terms containing both the effects of the modified uncertainty relation and the quantization of the gravitational waves. It is also observed that in the modified geodesic deviation equation, in addition to second order time derivatives of the stochastic term, first order time derivatives of the stochastic term appear, solely due to the consideration of the modified uncertainty relation. We first considered the graviton to be in a coherent state and then extended our analysis to gravitons in squeezed states. In principle, one finds that the noise in the case of coherent states is very small; however, if one considers squeezed vacuum states, by tuning the squeezing parameter one may be able to detect such quantum gravitational effects in the next generation of gravitational wave detectors, which could in principle give us hints about both the generalized uncertainty relation and the gravitons. Squeezed vacuum states, however, come with the limitation that there must be a source from which such gravitons in squeezed states are generated. It is assumed that primordial gravitational waves can be considered to exist in such squeezed states, which makes them a future candidate for the detection of such an enhanced noise spectrum.
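Before proceeding, we note (as a consistency check not spelled out above) that the modified variables in eq. (1) imply the deformed algebra

\[[\tilde{\xi},\tilde{\pi}]=\left[\xi,\pi\left(1+\gamma\pi^{2}\right)\right]=i\hbar\left(1+3\gamma\pi^{2}\right)+\mathcal{O}(\gamma^{2})\,\]

whose expectation value reproduces the one-dimensional form of the uncertainty relation quoted above, \(\Delta\tilde{\xi}\Delta\tilde{\pi}\geq\frac{\hbar}{2}\left[1+3\gamma\left(\Delta\pi^{2}+\langle\pi\rangle^{2}\right)\right]+\mathcal{O}(\gamma^{2})\).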
_Background:_ The main goal of this paper is to quantize the arm length of a gravitational wave detector (with arm length \(\tilde{\xi}\)) where the phase space coordinates of the detector follow the modified Heisenberg uncertainty principle. The complete action of the system can be obtained by combining the Einstein-Hilbert action with the action of the detector. In terms of the GUP modified variables one can obtain the action of the system (considering only one polarization of the gravitational wave) as
\[S_{\mathcal{A}}=\int dt\left(\frac{m}{2}\left(\dot{q}^{2}-\omega^{2}q^{2} \right)+\frac{m_{0}}{2}\dot{\tilde{\xi}}^{2}-\mathcal{G}\dot{q}\dot{\tilde{ \xi}}\tilde{\xi}\right) \tag{2}\]
with \(q\) denoting the configuration space variable of the gravitational wave, \(\hbar\omega\) being the energy of the gravitational wave mode, and \(m_{0}\) denoting the smaller of the two masses representing the interferometer. Here, \(\mathcal{G}=\frac{m_{0}}{2l_{p}}\) denotes the graviton-detector coupling constant with \(l_{p}=\sqrt{\frac{\hbar G}{c^{3}}}\) being the Planck length and \(m=\frac{L^{3}c^{5}}{16\hbar G^{2}}\). The Hamiltonian from the action in terms of the phase space variables of the gravitational wave (\(\{q,p\}\)) and the GUP modified phase space variables of the gravitational wave detector (\(\{\tilde{\xi},\tilde{\pi}\}\)) is given by
\[H=\left(\frac{p^{2}}{2m}+\frac{\tilde{\pi}^{2}}{2m_{0}}+\frac{\mathcal{G}p \tilde{\pi}\tilde{\xi}}{mm_{0}}\right)\left[1-\frac{\mathcal{G}^{2}\tilde{ \xi}^{2}}{mm_{0}}\right]^{-1}+\frac{1}{2}m\omega^{2}q^{2}\.\]
The above Hamiltonian in terms of the unmodified detector variables (\(\{\xi,\pi\}\)) up to \(\mathcal{O}(\gamma)\) can be recast as
\[H=\frac{\frac{p^{2}}{2m}+\frac{\pi^{2}}{2m_{0}}+\frac{\mathcal{G}p\pi\xi}{mm_ {0}}+\gamma\left(\frac{\pi^{4}}{m_{0}}+\frac{\mathcal{G}p\pi^{3}\xi}{mm_{0}} \right)}{1-\frac{\mathcal{G}^{2}\xi^{2}}{mm_{0}}}+\frac{1}{2}m\omega^{2}q^{2} \tag{3}\]
where we have used \(\tilde{\pi}=\pi(1+\gamma\pi^{2})\). Here, \(\xi\) denotes the geodesic separation of the lighter mass \(m_{0}\) from the heavier mass. One can now elevate this Hamiltonian from a classical to a quantum description by introducing appropriate commutation relations between the coordinate variables and their conjugate momenta. In this analysis we consider the initial state of the gravitational wave to be \(|\psi_{\omega}\rangle\) and its final state to be \(|\mathcal{F}\rangle\). As we do not know the final state of the gravitational wave, we need to sum over all \(|\mathcal{F}\rangle\) states. Since initially there is no coupling between the gravitational wave state and the detector state, we consider them as a tensor product state. Via spontaneous and stimulated emission processes the detector masses will both emit and absorb gravitons. Our aim here is to compute the transition probability, which takes the form
\[P_{\psi_{\omega}}^{[\phi_{i}\rightarrow\phi_{f}]}=\sum_{|\mathcal{F}\rangle} \left|\langle\mathcal{F},\phi_{f}|\hat{U}(T+\Delta,-\Delta)|\psi_{\omega},\phi _{i}\rangle\right|^{2} \tag{4}\]
where \(|\phi_{f}\rangle\) and \(|\phi_{i}\rangle\) are the final and initial states of the particle at times \(t=T+\Delta\) and \(t=-\Delta\), respectively. In eq.(4), \(\hat{U}\) is the unitary time evolution operator associated with the quantum mechanical analogue of the Hamiltonian in eq.(3). It is important to note that the interaction is turned on only in the interval \(t=0\) to \(t=T\). Inserting a complete set of joint position
eigenstates and summing over all the final gravitational wave states \(|\mathcal{F}\rangle\) in eq.(4), one can rewrite the transition probability as
\[P^{[\phi_{i}\rightarrow\phi_{f}]}_{\psi_{\omega}}=\int dq_{i}dq_{i} ^{\prime}dq_{f}d\xi_{i}d\xi_{i}^{\prime}d\xi_{f}d\xi_{f}^{\prime}\psi_{\omega}(q _{i})\psi_{\omega}^{*}(q_{i}^{\prime})\phi_{i}(\xi_{i})\] \[\times\phi_{i}^{*}(\xi_{i}^{\prime})\phi_{f}^{*}(\xi_{f})\phi_{f} (\xi_{f}^{\prime})~{}\langle q_{i}^{\prime},\xi_{i}^{\prime}|\hat{U}^{\dagger}( T+\Delta,-\Delta)|q_{f},\xi_{f}^{\prime}\rangle\] \[\times\langle q_{f},\xi_{f}|\hat{U}(T+\Delta,-\Delta)|q_{i},\xi_ {i}\rangle~{}. \tag{5}\]
Rewriting each of the amplitudes present in the canonical path integral form and executing the path integral over \(\pi\), one can now recast the amplitude in a much compact form given by
\[\langle q_{f},\xi_{f}|\hat{U}(T+\Delta,-\Delta)|q_{i},\xi_{i} \rangle=\int\tilde{\mathcal{D}}\xi\exp\Bigl{[}\frac{im_{0}}{2\hbar}\int_{- \Delta}^{T+\Delta}dt\] \[\times[\dot{\xi}^{2}-2\gamma m_{0}^{2}\dot{\xi}^{4}]\Bigr{]}\int \mathcal{D}q\mathcal{D}pe^{\frac{i}{\hbar}\int_{-\Delta}^{T+\Delta}dt[p\dot{q} -H_{\xi}^{\gamma}[q,p]]}~{}.\]
In the above expression, the form of the reduced Hamiltonian is given by
\[H_{\xi}^{\gamma}[q,p]=\frac{(p+\mathcal{G}\xi\dot{\xi}(1-3\gamma m_{0}^{2}\dot{\xi}^ {2}))^{2}}{2m}+\frac{1}{2}m\omega^{2}q^{2}~{}.\]
We can finally obtain the form of the probability in eq.(5) as
\[P^{[\phi_{i}\rightarrow\phi_{f}]}_{\psi_{\omega}}=\int d\xi_{i} d\xi_{i}^{\prime}d\xi_{f}d\xi_{f}^{\prime}\phi_{i}(\xi_{i})\phi_{i}^{*}(\xi_{i}^{ \prime})\phi_{f}^{*}(\xi_{f})\phi_{f}(\xi_{f}^{\prime})\] \[\int\tilde{\mathcal{D}}\xi\tilde{\mathcal{D}}\xi^{\prime}e^{ \frac{i}{\hbar}\int_{-\Delta}^{T+\Delta}dt\frac{m_{0}}{2}\left[(\dot{\xi}^{2} -\dot{\xi}^{\prime 2})-2\gamma m_{0}^{2}(\dot{\xi}^{4}-\dot{\xi}^{\prime 4})\right]}F^{ \gamma}_{\psi_{\omega}}[\xi,\xi^{\prime}] \tag{6}\]
where we have defined the following two quantities,
\[\langle q_{f}|\hat{U}_{\xi}^{\gamma}(T+\Delta,-\Delta)|q_{i} \rangle\equiv\int\mathcal{D}q\mathcal{D}pe^{\frac{i}{\hbar}\int_{-\Delta}^{T+ \Delta}dt[p\dot{q}-H_{\xi}^{\gamma}[q,p]]},\] \[F^{\gamma}_{\psi_{\omega}}[\xi,\xi^{\prime}]\equiv\langle\psi_{ \omega}|\hat{U}_{\xi^{\prime}}^{\gamma\dagger}[T+\Delta,-\Delta]\hat{U}_{\xi}^ {\gamma}[T+\Delta,-\Delta]|\psi_{\omega}\rangle~{}.\]
Here, \(F^{\gamma}_{\psi_{\omega}}[\xi,\xi^{\prime}]\) gives the Feynman-Vernon influence functional. It is very important to observe that influence functional is the only term in the probability containing the harmonic oscillator state (\(|\psi_{\omega}\rangle\)) and providing a coupling between the harmonic oscillator and the detector variable. With the generic structure of the probability in hand, we shall now proceed to explicitly compute the form of the influence functional. It is important to observe that we are currently running our analysis for a single mode of the gravitational wave and we shall extend it for multiple modes also.
_The influence functional:_ From the form of the Hamiltonian \(\hat{H}_{\xi}^{\gamma}(q,p)\), it can be seen that the instantaneous eigenstates are general harmonic oscillator eigenstates generated by the shift of the momentum in momentum space via the parameter \(\mathcal{G}\xi\dot{\xi}(1-3\gamma m_{0}^{2}\dot{\xi}^{2})\). In order to convert the integrals from \(-\Delta\) to \(T+\Delta\) into integrals from \(0\) to \(T\), we need to redefine the Heisenberg eigenstates from \(|\psi_{\omega}\rangle\) to \(e^{-\frac{i}{\hbar}\hat{H}_{0}\Delta}|\psi_{\omega}\rangle\) and we also need to introduce the modified wave functions given by the following two relations
\[\tilde{\phi}_{i}(\tilde{\xi}_{i}) =\int d\xi_{i}\phi_{i}(\xi_{i})\int\tilde{\mathcal{D}}\xi e^{ \frac{i}{\hbar}\int_{-\Delta}^{0}dt\frac{m_{0}}{2}(\dot{\xi}^{2}-2\gamma m_{0}^ {2}\dot{\xi}^{4})}~{},\] \[\tilde{\phi}_{f}(\tilde{\xi}_{f}) =\int d\xi_{f}\phi_{f}(\xi_{f})\int\tilde{\mathcal{D}}\xi e^{- \frac{i}{\hbar}\int_{T}^{T+\Delta}dt\frac{m_{0}}{2}(\dot{\xi}^{2}-2\gamma m_{ 0}^{2}\dot{\xi}^{4})}~{}.\]
In the above two relations, we have defined \(\xi(-\Delta)=\xi_{i}\), \(\xi(T+\Delta)=\xi_{f}\), \(\xi(0)=\tilde{\xi}_{i}\), and \(\xi(T)=\tilde{\xi}_{f}\). The modified probability formula takes the form as follows
\[P^{[\phi_{i}\rightarrow\phi_{f}]}_{\psi_{\omega}}\equiv\int d \xi_{i}d\xi_{i}^{\prime}d\xi_{f}d\xi_{f}^{\prime}\phi_{i}(\xi_{i})\phi_{i}^{*}( \xi_{i}^{\prime})\phi_{f}^{*}(\xi_{f})\phi_{f}(\xi_{f}^{\prime})\] \[\int[\tilde{\mathcal{D}}\xi]^{\xi_{f},T}_{\xi_{i},0}[\tilde{ \mathcal{D}}\xi^{\prime}]^{\xi_{f}^{\prime},T}_{\xi_{i}^{\prime},0}e^{\frac{i}{ \hbar}\int_{0}^{T}dt[L_{0}^{\gamma}(\xi)-L_{0}^{\gamma}(\xi^{\prime})]}F^{\gamma}_{\psi_{ \omega}}[\xi,\xi^{\prime}] \tag{7}\]
where \(L_{0}^{\gamma}(\xi)=\frac{m_{0}}{2}(\dot{\xi}^{2}-2\gamma m_{0}^{2}\dot{\xi}^{4})\) and we have redefined the detector variable from \(\tilde{\xi}\) to \(\xi\) (with \(\xi(0)=\xi_{i}\), \(\xi(T)=\xi_{f}\) and \(\xi^{\prime}(0)=\xi_{i}^{\prime}\), \(\xi^{\prime}(T)=\xi_{f}^{\prime}\)). Due to the change in the integration limits, the Feynman-Vernon influence functional also gets modified. The modified Feynman-Vernon influence functional has the form
\[F^{\gamma}_{\psi_{\omega}}[\xi,\xi^{\prime}]=\langle\psi_{\omega}|e^ {\frac{i\mathcal{G}}{\hbar}\hat{q}_{I}(0)\xi_{i}^{\prime}\dot{\xi}_{i}^{\prime}[1-3\gamma m_{0} ^{2}\dot{\xi}_{i}^{\prime 2}]}\hat{U}_{\gamma\xi^{\prime}}^{I\dagger}(T,0)\] \[\times e^{-\frac{i\mathcal{G}}{\hbar}\hat{q}_{I}(T)\xi_{f}^{ \prime}\dot{\xi}_{f}^{\prime}[1-3\gamma m_{0}^{2}\dot{\xi}_{f}^{\prime 2}]}e^{\frac{i\mathcal{G}}{\hbar}\hat{q}_{I}(T)\xi_{f}\dot{\xi}_{f}[1-3 \gamma m_{0}^{2}\dot{\xi}_{f}^{2}]} \tag{8}\] \[\times\hat{U}_{\gamma\xi}^{I}(T,0)e^{-\frac{i\mathcal{G}}{\hbar}\hat{q}_{I}(0)\xi_{i}\dot{\xi}_{i}[1-3\gamma m_{0}^{2}\dot{\xi}_{i}^{2}]}|\psi_{\omega}\rangle\]
where \(\hat{U}_{\gamma\xi}^{I}(T,0)\) gives the unitary time evolution operator in the interaction picture and \(\hat{q}_{I}(T)=e^{\frac{i}{\hbar}\hat{H}_{0}T}\hat{q}e^{\frac{i}{\hbar}\hat{H}_{0}T}\). In order to proceed further, we need to decompose the unitary time evolution operators in terms of the time ordered exponential functions in the interaction picture and finally write the form of the unitary time evolution operator in the interaction picture as follows
\[\hat{U}_{\gamma\xi}^{I}(T,0)\equiv e^{-\frac{i\mathcal{G}}{\hbar}\hat{q}_{I}(T )\xi_{f}\dot{\xi}_{f}[1-3\gamma m_{0}^{2}\dot{\xi}_{f}^{2}]}e^{\frac{i\mathcal{G}}{\hbar}\int_{0}^{T}dt~{}\hat{q}_{I}(t)Z(t)}\] \[e^{\frac{i\mathcal{G}}{\hbar}\hat{q}_{I}(0)\xi_{i}\dot{\xi}_{i}(1- 3\gamma m_{0}^{2}\dot{\xi}_{i}^{2})}e^{-\frac{\mathcal{G}^{2}}{8\pi^{2}}\int_{0}^{T}dt\int_{0
and the phase factor in the exponential term of eq.(10) is given by the following relation
\[\begin{split} i\Phi^{\gamma}_{\psi_{\omega}}[\xi,\xi^{\prime} ]\equiv&-\frac{\mathcal{G}^{2}}{8\hbar m\omega}\int_{0}^{T}dt\int_{0 }^{t}dt^{\prime}(Z(t)-Z^{\prime}(t))\\ &\times(Z(t^{\prime})e^{-i\omega(t-t^{\prime})}-Z^{\prime}(t^{ \prime})e^{i\omega(t-t^{\prime})})\.\end{split} \tag{12}\]
With the form of the Feynman-Vernon influence functional in hand, we are now in a position to consider different cases for the state \(|\psi_{\omega}\rangle\) of the gravitational wave. We now consider a gravitational wave-mode in a coherent state \(|\psi_{\omega}\rangle=|\alpha_{\omega}\rangle\) with eigenvalue \(\alpha_{\omega}=\sqrt{\frac{m\omega}{2\hbar}}\zeta_{\omega}e^{-i\phi_{\omega}}\) where the form of the classical gravitational wave mode is given by \(q_{cl}(t)=\zeta_{\omega}\cos(\omega t+\phi_{\omega})\). One can now easily compute the form of the influence functional in the single-mode analysis and proceed to compute the influence functional for a continuum of such modes (or a gravitational field). The influence functional of the field can be considered as a product of the influence functional for the individual modes (\(F^{\gamma}_{\Psi}[\xi,\xi^{\prime}]=\prod\limits_{\vec{k}}F^{\gamma}_{\psi_{ \omega}(\vec{k})}[\xi,\xi^{\prime}]\)) where the gravitational field is given as \(|\Psi\rangle=\mathop{\otimes}\limits_{\vec{k}}|\psi_{\omega(\vec{k})}\rangle\). One can finally compute the transition probability for a gravitation field with its field modes in coherent states as follows
\[\begin{split}& P_{\Psi}\equiv\int d\xi_{i}d\xi^{\prime}_{i}d\xi _{f}d\xi^{\prime}_{f}\phi_{i}(\xi_{i})\phi^{*}_{i}(\xi^{\prime}_{i})\phi^{*}_{ f}(\xi_{f})\phi_{f}(\xi^{\prime}_{f})\int\mathcal{\bar{D}}\xi\bar{\mathcal{D}}\xi^{ \prime}\\ &\times\int\mathcal{\bar{D}}\mathcal{N}_{0}\ \exp\left[-\frac{1}{2}\int_{0}^{T}dt\int_{0}^{T}dt^{\prime}\mathcal{A}_{0}^{-1 }(t,t^{\prime})\mathcal{N}_{0}(t)\mathcal{N}_{0}(t^{\prime})\right]\\ &\times\exp\biggl{[}\frac{im_{0}}{2\hbar}\int_{0}^{T}dt\biggl{[}( \dot{\xi}^{2}-\dot{\xi^{\prime}}^{2})-2\gamma m_{0}^{2}(\dot{\xi}^{4}-\dot{\xi }^{\prime}{}^{4})\\ &+\biggl{[}\frac{(\bar{h}(t)+\mathcal{N}_{0}(t))}{2}-\frac{m_{0}G }{4}[\dot{Z}(t)+\dot{Z}^{\prime}(t)]\biggr{]}\left[Z(t)-Z^{\prime}(t)\right] \biggr{]}\end{split} \tag{13}\]
where \(\bar{h}(t)=\frac{1}{l_{p}}\sum\zeta_{\omega}\cos(\omega t+\phi_{\omega})\). To obtain the final form of the probability in eq.(13), we have made use of the Feynman-Vernon trick. The function \(\mathcal{N}_{0}(t)\) has the interpretation of a noise term, that is, a stochastic random function with a Gaussian probability density. Indeed, one can define the average of \(\mathcal{N}_{0}(t)\), which vanishes [3]
\[\langle\mathcal{N}_{0}(t)\rangle \equiv\int\mathcal{D}\mathcal{N}_{0}\exp\Bigl{[}-\frac{1}{2}\int _{0}^{T}\int_{0}^{T}dt^{\prime}dt^{\prime\prime}\mathcal{A}_{0}^{-1}(t^{\prime },t^{\prime\prime})\] \[\times\mathcal{N}_{0}(t^{\prime})\mathcal{N}_{0}(t^{\prime\prime} )\Bigr{]}\mathcal{N}_{0}(t)=0. \tag{14}\]
The function \(\mathcal{A}_{0}(t,t^{\prime})\) is the autocorrelation function of \(\mathcal{N}_{0}(t)\) as [3]
\[\begin{split}&\langle\mathcal{N}_{0}(t)\mathcal{N}_{0}(t^{\prime}) \rangle\equiv\int\mathcal{D}\mathcal{N}_{0}\exp\Bigl{[}-\frac{1}{2}\int_{0}^{T }\int_{0}^{T}dt^{\prime\prime}dt^{\prime\prime\prime}\\ &\times\mathcal{A}_{0}^{-1}(t^{\prime\prime},t^{\prime\prime \prime})\mathcal{N}_{0}(t^{\prime\prime})\mathcal{N}_{0}(t^{\prime\prime \prime})\Bigr{]}\mathcal{N}_{0}(t)\mathcal{N}_{0}(t^{\prime})=\mathcal{A}_{0}(t,t^{\prime})\.\end{split} \tag{15}\]
This autocorrelation function will be of immense importance as we shall see below. In the case of the coherent state analysis, the autocorrelation function has the form [3]
\[\mathcal{A}_{0}(t,t^{\prime})=\frac{4\hbar G}{\pi}\int_{0}^{\infty}d\omega \omega\cos(\omega(t-t^{\prime})) \tag{16}\]
which is in general divergent in nature. A gravitational wave detector, however, is sensitive only to a certain range of gravitational wave frequencies, and therefore one can regularize the integral by applying a maximum cut-off frequency \(\omega_{max}\). Due to the usage of a dipole-like approximation in our current analysis, we can take the maximum value of the frequency to be \(\frac{2\pi c}{\xi_{0}}\), with \(\xi_{0}\) being the resting arm length of the detector.
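The regularized integral in eq.(16) has a simple closed form. The following short numerical check (illustrative only, not part of the original analysis) evaluates it for LIGO-like numbers, with the cut-off \(\omega_{max}=2\pi c/\xi_{0}\):

```python
# Check of the regularized eq.(16): A_0(tau) = (4*hbar*G/pi) * Int_0^wmax dw  w*cos(w*tau)
#                                            = (4*hbar*G/pi) * [cos(wmax*tau) + wmax*tau*sin(wmax*tau) - 1] / tau^2
import numpy as np
from scipy.integrate import quad

hbar, G, c = 1.054571817e-34, 6.674e-11, 2.998e8
xi0 = 4.0e3                        # LIGO-like rest arm length (m)
wmax = 2 * np.pi * c / xi0         # dipole-like cut-off frequency (rad/s)

def A0_closed(tau):
    return 4 * hbar * G / np.pi * (np.cos(wmax * tau) + wmax * tau * np.sin(wmax * tau) - 1) / tau**2

def A0_numeric(tau):
    val, _ = quad(lambda w: w * np.cos(w * tau), 0.0, wmax, limit=500)
    return 4 * hbar * G / np.pi * val

tau = 1.0e-5                       # time difference t - t' in seconds
print(A0_closed(tau), A0_numeric(tau))    # the two evaluations agree
print(2 * hbar * G * wmax**2 / np.pi)     # finite equal-time value A_0(t, t)
```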
_Dynamics of the arm length:_ With the form of the probability in hand, we can now compute the quantum-dynamics of the detector arm length \(\xi\). In order to obtain the effective stochastic equation, we consider the saddle point approximation. The saddle point gives the maximum contribution to the path integral in eq.(13). For the gravitational wave in the coherent state, following the procedure in [3], one can obtain the following differential equation for the detector variable \(\xi(t)\)
\[\begin{split}&\ddot{\xi}-\frac{1}{2}\biggl{[}\left(\ddot{\bar{h}}(t)+ \mathcal{\tilde{N}}_{0}(t)-\frac{m_{0}G}{c^{5}}\frac{d^{5}}{dt^{5}}(\xi^{2} )\right)(1+3\gamma m_{0}^{2}\dot{\xi}^{2})+\\ &\frac{3\gamma m_{0}^{3}G}{c^{5}}\frac{d^{4}}{dt^{4}}\left(\frac{ d}{dt}(\xi^{2})\dot{\xi}^{2}\right)\biggr{]}\biggr{[}\xi+3\gamma m_{0}^{2}(\dot{\xi}^{3}+3 \xi\dot{\xi}\dot{\xi})\biggl{[}\ddot{\bar{h}}(t)+\\ &\dot{\mathcal{N}}_{0}(t)-\frac{m_{0}G}{c^{5}}\frac{d^{4}}{dt^{4}}( \xi^{2})\biggr{]}=0\.\end{split} \tag{17}\]
Eq.(17) is one of the main results in our paper. It is important to observe that the geodesic deviation equation is replaced now by a quantum stochastic-equation. The most important observation is that now the quantum geodesic equation is governed by the terms coupling the effects of the GUP parameter and the noise term. Unlike [1; 2; 3], there are also terms involving the first order derivative of the noise term with respect to time. The \(\ddot{\bar{h}}(t)\xi\) term in eq.(17) is a tidal acceleration term due to the passing of a classical gravitational wave and the fifth order time derivative term is the dissipative gravitational radiation reaction term [37; 38; 39; 40]. This term \(\ddot{\bar{h}}(t)\xi\) will be important as will be clear in the subsequent discussion. The other higher derivative terms are corrections to the gravitational radiation reaction due to the modification in the uncertainty relation of the detector variables.
For a gravitational wave in a coherent state, the noise spectrum is of the order of the Planck length, making it a nearly impossible task to detect the signatures of the gravitons, let alone the generalized uncertainty relation. Therefore, we shall consider the gravitational wave in a squeezed state \(\hat{S}_{z_{\omega}}|\mathbf{0}_{\omega}\rangle\), where \(\hat{S}_{z_{\omega}}=e^{\frac{1}{2}(z_{\omega}^{*}\hat{a}^{2}-z_{\omega}\hat{a}^{\dagger 2})}\) gives the squeezing operator with \(z_{\omega}=r_{\omega}e^{i\phi_{\omega}}\) being the
squeezing parameter. The quantum geodesic equation for a gravitational wave in a squeezed state gives the following equation
\[\ddot{\xi}-\frac{1}{2}\bigg{[}\left(\tilde{\mathcal{N}}^{n.s.}(t)+ \sqrt{\cosh 2r}\tilde{\mathcal{N}}_{0}(t)-\frac{m_{0}G}{c^{5}}\frac{d^{5}}{dt^{5} }(\xi^{2})\right)(1+\] \[3\gamma m_{0}^{2}\dot{\xi}^{2})+\frac{3\gamma m_{0}^{3}G}{c^{5}} \frac{d^{4}}{dt^{4}}\left(\frac{d}{dt}(\xi^{2})\dot{\xi}^{2}\right)\bigg{]} \xi+3\gamma m_{0}^{2}\bigg{[}\tilde{\mathcal{N}}^{n.s.}(t)\] \[+\sqrt{\cosh 2r}\tilde{\mathcal{N}}_{0}(t)-\frac{m_{0}G}{c^{5}} \frac{d^{4}}{dt^{4}}(\xi^{2})\bigg{]}(\dot{\xi}^{3}+3\xi\dot{\xi}\ddot{\xi})=0 \tag{18}\]
where \(r\) denotes the real part of the squeezing parameter for the quantum field and \(\mathcal{N}^{n.s.}(t)\) denotes the non-stationary noise generated due to the time modulation of the noise in squeezed states. It is very important to observe in eq.(18) that the associated noise term can now be exponentially enhanced, along with the terms generated due to the generalized uncertainty principle, for values of \(r\) greater than unity. This increases the chance of detecting such minuscule corrections in gravitational wave detection scenarios. To proceed further, it is important to note that we can obtain a solution of the given Langevin-like equations by means of perturbative calculations. We can get rid of the dissipative gravitational radiation reaction terms in eqs.(17, 18) [3]. In order to do so one needs to consider that \(\xi\) is measured in a coarse grained manner, which in turn results in the higher derivatives being negligible. We shall try to obtain an approximate solution of the time dependent geodesic separation for the gravitational wave initially in a coherent state. In order to find a solution to eqs.(17, 18), we use an iterative approach. For the base equation \(\ddot{\xi}(t)=0\) without the higher order terms, we can obtain a zeroth order solution of the form \(\xi^{(0)}(t)=\xi_{0}+\lambda t\), where the constant \(\lambda\) has the dimension of velocity and can have a maximum value \(\lambda=c\). The maximum interaction time between the graviton being absorbed and released by the detector is \(t_{\text{max}}\sim\frac{\xi_{0}}{c}\) and therefore the linear time dependent term in \(\xi(t)\) can go up to \(\lambda t_{\text{max}}\). Following the same iterative procedure, we can obtain a most general solution of eq.(17) up to \(\mathcal{O}(\gamma,\mathcal{N}_{0},\bar{h},\gamma\mathcal{N}_{0},\gamma\bar{h})\) as follows
\[\xi(t)\cong (\xi_{0}+\lambda t)\left[1+\frac{1}{2}\left(1+3\gamma m_{0}^{2} \lambda^{2}\right)(\bar{h}(t)+\mathcal{N}_{0}(t))\right]\] \[-\lambda(1+6\gamma m_{0}^{2}\lambda^{2})\int_{0}^{t}dt^{\prime}( \bar{h}(t^{\prime})+\mathcal{N}_{0}(t^{\prime})). \tag{19}\]
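The \(\gamma\)-independent part of eq.(19) can be verified by a single iteration step (this check is ours and is not spelled out in the text): inserting \(\xi^{(0)}(t)=\xi_{0}+\lambda t\) into the tidal term \(\frac{1}{2}\ddot{\bar{h}}(t)\xi\) of eq.(17), dropping the radiation-reaction terms, and integrating twice with \(\bar{h}(0)=\dot{\bar{h}}(0)=0\) gives
\[\xi^{(1)}(t)\simeq\xi_{0}+\lambda t+\frac{1}{2}\int_{0}^{t}dt^{\prime}\left[\dot{\bar{h}}(t^{\prime})(\xi_{0}+\lambda t^{\prime})-\lambda\bar{h}(t^{\prime})\right]=(\xi_{0}+\lambda t)\left[1+\frac{1}{2}\bar{h}(t)\right]-\lambda\int_{0}^{t}dt^{\prime}\,\bar{h}(t^{\prime}),\]
with the noise \(\mathcal{N}_{0}(t)\) entering in exactly the same way as \(\bar{h}(t)\).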
Now, the upper limit of the integral in eq.(19) has a cutoff at \(t=t_{\text{max}}\). Our aim now is to calculate the standard deviation \(\sigma=\sqrt{\langle(\xi(t)-\langle\xi(t)\rangle)^{2}\rangle}\). We can separate the standard deviation into two parts as \(\sigma(t)\cong\sigma_{0}(t)+\sigma_{\gamma}(t)\). The \(\sigma_{0}(t)\) part has been calculated in [2; 3] and reads
\[\sigma_{0}(t)\sim\sqrt{2\pi}l_{p}\sim 10^{-35}\ \text{m}. \tag{20}\]
For the GUP contribution to the standard deviation, we need to consider the time dependent parts and ignore the linear time-dependent contribution. We then obtain the form of the dimensionless parameter \(\frac{\sigma_{\gamma}(t)}{\sqrt{2\pi}l_{p}}\) to be (with \(\lambda\) set to its maximum value)
\[\frac{\sigma_{\gamma}(t)}{\sqrt{2\pi}l_{p}}\cong 3\gamma m_{0}^{2}c^{2} \bigg{[}1-\frac{2}{\pi(1+\frac{ct}{\xi_{0}})}\frac{\xi_{0}\sin^{2}\left[\frac {\pi ct}{\xi_{0}}\right]}{\pi ct} \tag{21}\] \[+\frac{4}{\pi^{2}\left(1+\frac{ct}{\xi_{0}}\right)^{2}}\left( \gamma_{\varepsilon}-\text{Ci}\left[\frac{2\pi ct}{\xi_{0}}\right]+\ln\left[ \frac{2\pi ct}{\xi_{0}}\right]\right)\bigg{]}\]
where \(\gamma_{\varepsilon}\) gives the Euler constant and Ci denotes the cosine integral function [41]. For a gravitational wave in the squeezed coherent state, we obtain the form of the standard deviation to be (considering the static part only)
\[\sigma_{\text{Squeezed}}(t)=\sqrt{\cosh 2r}\sigma(t) \tag{22}\]
where \(\sigma(t)=\sigma_{0}(t)+\sigma_{\gamma}(t)\).
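The time dependence of eq.(21) is easy to inspect numerically. The short script below (an illustrative aid, not part of the paper) evaluates the dimensionless bracket of eq.(21), i.e. \(\sigma_{\gamma}(t)/(3\gamma m_{0}^{2}c^{2}\sqrt{2\pi}\,l_{p})\), as a function of \(x=ct/\xi_{0}\), which can be compared with the sampling-time behaviour discussed later around Fig.(1):

```python
# Dimensionless shape of eq.(21): the bracket multiplying 3*gamma*m0^2*c^2, versus x = c*t/xi_0.
import numpy as np
from scipy.special import sici   # sici(x) returns (Si(x), Ci(x))

def gup_bracket(x):
    """x = c*t/xi_0 > 0."""
    _, ci = sici(2.0 * np.pi * x)
    term1 = -2.0 / (np.pi * (1.0 + x)) * np.sin(np.pi * x) ** 2 / (np.pi * x)
    term2 = 4.0 / (np.pi ** 2 * (1.0 + x) ** 2) * (np.euler_gamma - ci + np.log(2.0 * np.pi * x))
    return 1.0 + term1 + term2

for x in (0.05, 0.2, 0.5, 1.0, 2.0, 5.0):
    print(f"c*t/xi_0 = {x:4.2f}  ->  bracket = {gup_bracket(x):.4f}")
```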
_Phenomenological aspects of the model:_ It is important to note that the gravitational wave observatories LIGO (/VIRGO) have an \(L\)-shaped structure with a rest arm length of \(\xi_{0}=4\ \text{km}\) (\(\xi_{0}=3\ \text{km}\) for VIRGO). For the mirrors suspended at both ends of the Fabry-Perot cavity, the mirror coating is made up of fused silica (mass of a single \(\text{SiO}_{2}\) molecule is \(m_{\text{SiO}_{2}}\sim 10^{-25}\ \text{kg}\)), which serves as the low-index layer, and tantalum pentoxide (mass of a single \(\text{Ta}_{2}\text{O}_{5}\) molecule is \(m_{\text{Ta}_{2}\text{O}_{5}}\sim 10^{-24}\ \text{kg}\)), which serves as the high-index layer [42]. We can indeed obtain a bound on the GUP parameter using these parameters from the existing gravitational wave observatories. Note that \(\gamma m_{0}^{2}c^{2}\sim\gamma_{0}\times 10^{-33}\) (for \(m_{0}\sim 10^{-24}\ \text{kg}\)) and from the requirement \(\zeta\gamma m_{0}^{2}c^{2}<1\) (where the dimensionless constant \(\zeta\) is a number of order 10), we can impose a bound on the dimensionless quadratic GUP parameter of \(\gamma_{0}<10^{31}\), which is weaker than the bound obtained earlier for a resonant bar detector interacting with a gravitational wave in [43] but tighter than the bound obtained in [44] using gravitational wave observation data. We shall now try to give a basic estimate of the detectability of the GUP effect from the standard deviation \(\sigma(t)\). For the gravitational wave in a coherent state, it is important to understand that \(\sigma\cong\sqrt{2\pi}l_{p}\sim 10^{-35}\ \text{m}\), while current detectability lies around \(10^{-18}\ \text{m}\). In case the initial graviton state is a squeezed state, we observe that \(\sigma^{\text{Squeezed}}(t)=\sqrt{\cosh 2r}\sigma(t)\), which indicates that for a sufficiently high squeezing parameter the standard deviation due to the induced noise and the GUP effect may be detectable. In general such states can only be generated in post-inflationary scenarios [45; 46; 47], leading to a very rare chance of detection of such primordial gravitational waves. For a "grand unified theory" inflation, the frequency is at \(\omega\sim 0.1\ \text{Hz}\), which is beyond the frequency
range of both LIGO and VIRGO. This frequency range can, however, be detected by the future space based gravitational wave observatories DECIGO or LISA1. For such a primordial gravitational wave \(e^{r}\sim 10^{18}\) [48], which results in an enhancement factor of the order of \(\sqrt{\cosh 2r}\cong\frac{1}{\sqrt{2}}e^{r}\sim 10^{18}\). We will primarily consider the LISA interferometer with \(\xi_{0}\sim 10^{6}\) km and maximum frequency \(\omega_{\rm max}\sim 1\) Hz [3]. LISA has a projected sensitivity at \(10^{-18}\) m [3]. With the inherent squeezing in the primordial gravitational wave and a maximum interaction time of \(t_{\rm max}\sim 3.33\) sec, one can estimate that the standard deviation (just after the detector has stopped interacting with the gravitational wave) is around \(\sigma_{0}^{\rm Squeezed}(t_{\rm max})\sim\sqrt{\cosh 2r}\times 10^{-35}\) m \(\sim 10^{-17}\) m, which is in the detectable range of the LISA observatory. A very important result can be observed in terms of the GUP part of the standard deviation. The contribution to the standard deviation due to the GUP effect can be observed (from eq.(21)) to have a value \(\sigma_{\gamma}^{\rm Squeezed}(t_{\rm max})\sim\sqrt{\cosh 2r}\ 10^{-37}\) m \(\sim 10^{-19}\) m. Hence, we find that for primordial gravitational waves, the standard deviation carrying the signature of the graviton has a value \(\sigma_{0}^{\rm Squeezed}\sim 10^{-17}\) m and the GUP effect lies in the range \(\sigma_{\gamma}^{\rm Squeezed}\sim(10^{-19}-10^{-20})\) m, which is just one order of magnitude beyond the projected sensitivity of the LISA observatory. The ratio of \(\sigma_{\gamma}^{\rm Squeezed}(t_{\rm max})\) to \(\sigma_{0}^{\rm Squeezed}(t_{\rm max})\) is \(\sigma_{\gamma}^{\rm Squeezed}/\sigma_{0}^{\rm Squeezed}\sim 10^{-2}\). The analysis has been done for a primordial gravitational wave with squeezing parameter \(r\sim 42\). If it is possible to detect a primordial gravitational wave with a squeezing parameter \(r\sim 44\), we find that the GUP contribution has a value \(\sigma_{\gamma}^{\rm Squeezed}(t)\sim 10^{-18}\) m, which indeed will be in the projected sensitivity range of the LISA observatory. It will then be possible to detect the stochastic noise effect due to the existence of gravitons as well as the existence of the GUP. Plotting the standard deviation due to the GUP contribution divided by \(\sqrt{2\pi}l_{p}\) with respect to time, we observe a very distinctive behaviour based on the time of sampling of the \(\xi(t)\) data in Fig.(1). We observe from Fig.(1) that the value of the dimensionless number \(\frac{\sigma(t)}{\sqrt{2\pi}l_{p}}\) decreases and then increases with time. If the standard deviation can be calculated at fixed time intervals while the interaction happens and such a dip in the standard deviation value is observed, we can claim the existence of the generalized uncertainty principle. Hence, for an advanced gravitational wave observatory, it may be possible to detect both the existence of gravitons and the existence of a fundamental minimal length scale correction in the Heisenberg uncertainty principle. This would indeed point towards a quantum nature of gravity.
Footnote 1: DECIGO: Decihertz Interferometer Gravitational Wave Observatory and LISA: Laser Interferometer Space Antenna.
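The order-of-magnitude estimates quoted above can be reproduced in a few lines (an illustrative back-of-the-envelope check; the input numbers are the ones stated in the text):

```python
# Reproducing the detectability estimates for a squeezed primordial gravitational wave at LISA.
import numpy as np

c = 2.998e8
xi0_lisa = 1.0e9                               # LISA-like arm length ~ 10^6 km, in metres
t_max = xi0_lisa / c                           # maximum interaction time ~ 3.3 s
sigma0 = np.sqrt(2.0 * np.pi) * 1.616e-35      # sigma_0 ~ sqrt(2*pi) * l_p, in metres

for r in (42, 44):
    boost = np.sqrt(np.cosh(2.0 * r))          # ~ e^r / sqrt(2) for large r
    print(f"r = {r}: enhancement ~ {boost:.1e}, "
          f"sigma_0^Sq ~ {boost * sigma0:.1e} m, "
          f"sigma_gamma^Sq ~ {boost * 1e-37:.1e} m")
print(f"t_max ~ {t_max:.2f} s")
```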
_Summary:_ In this paper, we have considered a linearly polarized gravitational wave interacting with a gravitational wave detector. In the current model a quantized gravitational wave interacts with a detector whose variables obey the modified Heisenberg uncertainty relation (also known as the generalized uncertainty principle). In our analysis, we have considered that initially the gravitational wave was not interacting with the detector, making it possible to write the initial state of the system as a tensor product of the state corresponding to the graviton and the initial state corresponding to the detector. Following the approach in [1; 2; 3], we have summed over all the final states of the graviton. We then integrate out the coordinates and momenta corresponding to the graviton to obtain the influence functional involved in this entire process using a path integral approach. Following this one-graviton analysis, we have extended our study to a gravitational field where the field modes are in coherent states and then in squeezed states, respectively. We finally obtain a stochastic Langevin-like equation including a noise term, which can be considered as a quantum gravitational correction to the classical geodesic deviation equation. Our work focuses on the inclusion of a minimal length scale correction in the Heisenberg uncertainty principle, corresponding to the detector variables, leading to a GUP modified stochastic Langevin equation. We then obtain an approximate solution of the time dependent detector arm length and calculate the corresponding standard deviation in it. Although the case of the field modes being in a coherent state leads to a minuscule correction to the classical geodesic deviation equation, for the squeezed state analysis we find that, along with the noise term, the GUP effect also gets an exponential boost due to the existence of a tunable squeezing parameter. It is surprising in a sense that the GUP effect comes from the detector variables spanning the modified phase space only, and therefore the squeezing embedded in the field states can amplify hidden signatures of the minimum length scale corrections considered in the detector. We then obtain a bound on the dimensionless GUP parameter which is tighter than the bounds obtained earlier using gravitational wave data. We finally plot the dimensionless standard deviation term due to the GUP effect with respect to time and observe that it may be possible to detect hidden GUP signatures while detecting primordial gravitational waves (with a squeezing parameter \(r\simeq 44\)) in the future generation of gravitational wave detectors (LISA), along with the detection of gravitons.
|
2308.14909 | Pruning Self-Attention for Zero-Shot Multi-Speaker Text-to-Speech | For personalized speech generation, a neural text-to-speech (TTS) model must
be successfully implemented with limited data from a target speaker. To this
end, the baseline TTS model needs to be amply generalized to out-of-domain data
(i.e., target speaker's speech). However, approaches to address this
out-of-domain generalization problem in TTS have yet to be thoroughly studied.
In this work, we propose an effective pruning method for a transformer known as
sparse attention, to improve the TTS model's generalization abilities. In
particular, we prune off redundant connections from self-attention layers whose
attention weights are below the threshold. To flexibly determine the pruning
strength for searching optimal degree of generalization, we also propose a new
differentiable pruning method that allows the model to automatically learn the
thresholds. Evaluations on zero-shot multi-speaker TTS verify the effectiveness
of our method in terms of voice quality and speaker similarity. | Hyungchan Yoon, Changhwan Kim, Eunwoo Song, Hyun-Wook Yoon, Hong-Goo Kang | 2023-08-28T21:25:05Z | http://arxiv.org/abs/2308.14909v1 | # Pruning Self-Attention for Zero-Shot Multi-Speaker Text-to-Speech
###### Abstract
For personalized speech generation, a neural text-to-speech (TTS) model must be successfully implemented with limited data from a target speaker. To this end, the baseline TTS model needs to be amply generalized to out-of-domain data (i.e., target speaker's speech). However, approaches to address this out-of-domain generalization problem in TTS have yet to be thoroughly studied. In this work, we propose an effective pruning method for a transformer known as _sparse attention_, to improve the TTS model's generalization abilities. In particular, we prune off redundant connections from self-attention layers whose attention weights are below the threshold. To flexibly determine the pruning strength for searching optimal degree of generalization, we also propose a new differentiable pruning method that allows the model to automatically learn the thresholds. Evaluations on zero-shot multi-speaker TTS verify the effectiveness of our method in terms of voice quality and speaker similarity.
Hyungchan Yoon\({}^{1}\), Changhwan Kim\({}^{1}\), Eunwoo Song\({}^{2}\), Hyun-Wook Yoon\({}^{2}\), Hong-Goo Kang\({}^{1}\)\({}^{1}\)Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea,
\({}^{2}\)NAVER Cloud, South Korea
[hcy71, chkim]@dsp.yonsei.ac.kr, {eunwoo.song, hyunwook.yoon}@navercorp.com, [email protected]
**Index Terms** Text-to-speech, zero-shot, generalization, sparse attention
## 1 Introduction
With the advancement of deep learning technologies, recent studies in text-to-speech (TTS) have shown a rapid progress. In terms of generation quality, single- and multi-speaker TTS models can synthesize human-like voices with sufficient training data from the target speaker(s) [1, 2, 3, 4, 5]. Further, several few- or zero-shot multi-speaker TTS models have recently been developed to synthesize out-of-domain (OOD) speech with limited data from the target speaker [6, 7, 8, 9, 10, 11]. These models are trained using a large multi-speaker dataset to learn a general TTS mapping relationship conditioned on speaker representations. Then, they are either additionally fine-tuned with a few samples of the target speaker (few-shot) or used directly (zero-shot) for synthesis.
Especially, zero-shot multi-speaker TTS models [8, 9, 10, 11] are widely being studied due to their unique advantage of not requiring any training data from the target speaker. A common approach of these models is to extract the speaker representations from reference speech using a reference encoder [7, 12, 13]. These representations contain various prosodic characteristics such as pronunciation style, speed [14, 15] of the reference speech, as well as speaker identity. As such, the speaker representation is learned to play a crucial role as a latent vector that determines the prosodic characteristics of the synthesized speech during training. During inference, the speaker representation is extracted from the voice of the unseen speaker, enabling the generation of the desired voice.
However, zero-shot multi-speaker TTS models face the problem of domain mismatch between training and inference, unlike conventional TTS models that aim to synthesize only in-domain speech (i.e., speech from seen speakers). Specifically, the latter must be generalized only to the unseen text, whereas the former must be generalized not only to the unseen text but also to the reference speech of unseen speakers. Therefore, the challenge of improving synthesis performance in zero-shot multi-speaker TTS lies in generalizing the TTS models to OOD data, which refers to speech from unseen speakers.
One additional challenge faced by zero-shot multi-speaker TTS models is that they require varying levels of generalization ability depending on the dataset they are trained on. When there is a high degree of domain mismatch between the training and test data, such as differences in recording environments, the models require more generalization to prevent overfitting. Conversely, when there is little domain mismatch, over-generalization can lead to degraded performance. Therefore, finding the optimal strength of generalization is crucial for improving the synthesis performance of these models. However, current zero-shot multi-speaker TTS models lack a systematic approach to this problem and have difficulty controlling the generalization strength once developed. While adjusting the number of parameters is a classical approach to controlling generalization [16], it can be a manual and time-consuming process.
To this end, we propose a new controllable generalization method for zero-shot multi-speaker TTS models. In particular, we focus on the transformer [17], which is the foundation for many TTS models. Our method draws on previous studies in various research fields (such as image generation and speech recognition) demonstrating the effectiveness of optimizing the self-attention module in a generalization objective [18, 19, 20, 21, 22]. In particular, they enhanced generalization abilities by adding sparsity to the self-attention connections. For instance, Child et al. [18] factorized the self-attention matrix into sparse subsets, and Kim et al. [21] proposed removing the low-weight connections during inference.
In this study, we design a sparse attention method for zero-shot multi-speaker TTS to successfully solve its OOD generalization problem. The method is implemented by pruning off the connections from self-attention layer; we also propose a _differentiable pruning_ technique that can easily control the degree of generalization. Our contributions are outlined below:
* **New Application.** We apply the sparse attention mechanism to the TTS model, which eliminates redundant connections from the self-attention layer. Because the TTS model is trained under a condition that only uses high-weight residual connections, the sparse attention mechanism significantly improves its generalization ability. In particular, adding sparsity to the self-attention module reduces the number of parameters engaged in the overall TTS training by preventing backpropagation of gradients through low-weight connections, which alleviates overfitting.
* **Novel Pruning Technique.** We explore optimal pruning techniques for the sparse attention. We first introduce a vanilla pruning approach that eliminates the connections whose attention weights are below a predetermined threshold. To flexibly adjust the pruning strength in case of various degrees of domain mismatch, we further propose a differentiable pruning method that adopts learnable thresholds.
* **Performance.** Experiments on zero-shot TTS show that our proposed method notably improves the performance of OOD speech synthesis1. Footnote 1: Audio samples are available at: [https://hcy7lo.github.io/SparseTTS-demo/](https://hcy7lo.github.io/SparseTTS-demo/)
## 2 Related Works
Owing to the increasing demand for customized voice synthesis, the OOD generalization problem has recently been studied in zero-shot multi-speaker TTS works. StyleSpeech [7] used meta-learning to make a TTS model effectively adapt to OOD voice and conditioned speaker representations to the model using few variables to minimize the domain mismatch. For the same purpose, nnSpeech [8] introduced a speaker-guided conditional variational autoencoder to define speaker representations as Gaussian latent variables rather than high-dimensional embeddings. Furthermore, GenerSpeech [11] leveraged wav2vec2.0 [23], a contrastive model learned with numerous speech data, to obtain more robust speaker representations. Unlike the abovementioned approaches, we use the self-attention pruning method to directly generalize the basic architecture (i.e., transformer) of the TTS model, implying that it is applicable to other models with minimal modifications.
## 3 Proposed Method
We selected StyleSpeech [7] as a baseline because it is a representative zero-shot multi-speaker TTS model built on a non-autoregressive transformer. As depicted in Fig 1, its architecture comprises a transformer-based phoneme encoder and mel-spectrogram decoder, a variance adaptor, and a reference encoder. The variance adaptor, located between the encoder and decoder, predicts the pitch, energy, and duration from phoneme-level embeddings; it then expands these embeddings to frame-level using the predicted duration values. The reference encoder extracts a speaker representation from the input reference speech and conditions it to the encoder and decoder via Style-Adaptive Layer Normalization [7] technique. More details, including loss terms and model configurations, are presented in [2, 7].
### Sparse Attention
We implement sparse attention by pruning redundant connections, and we only apply it to the decoder for the following two reasons: 1) The sequence length (\(N\)) of the decoder (frame-level) is much longer than that of the encoder (phoneme-level), indicating that the decoder has a significantly larger number of self-attention connections (\(N\times N\)) than the encoder; as a result, the decoder self-attention module requires more sparsity to be generalized. 2) According to our investigation, applying sparse attention to the encoder rather degrades the model performance because it reduces the modeling capacity of the original self-attention module. We define sparse masks and apply them to all the attention heads of the decoder self-attention modules. Depending on the mask generation methods, we propose two types of pruning techniques: **vanilla** and **differentiable**.
#### 3.1.1 Vanilla Pruning
Given queries \(Q\) and keys \(K\) obtained by two linear transformations \(W_{q}\) and \(W_{k}\), respectively, to the input sequence \(\mathbf{x}\),
\[Q=W_{q}\mathbf{x},\,K=W_{k}\mathbf{x}, \tag{1}\]
we first denote the attention probability of the \(h\)-th head of the multi-head self-attention layer [17] as \(\mathcal{A}_{h}\):
\[\mathcal{A}_{h}(i,j)=softmax\left(\frac{Q_{h}{K_{h}}^{T}}{\sqrt{d}}\right)_{(i, j)}, \tag{2}\]
where \(Q_{h}\) and \(K_{h}\) are the queries and keys of the \(h\)-th head, respectively, and \(d\) is their dimension. \(\mathcal{A}_{h}(i,j)\) indicates the weight score of the \(i\)-th query corresponding to the \(j\)-th key. We then define a sparse mask matrix \(SM^{h}\) of \(h\)-th head as follows:
\[SM^{h}_{(i,j)}=\begin{cases}1&\text{if }\mathcal{A}_{h}(i,j)\geq\mu_{i}\\ 0&\text{if }\mathcal{A}_{h}(i,j)<\mu_{i}\end{cases}, \tag{3}\]
\[\mu_{i}=\frac{1}{N}\sum_{j=1}^{N}\mathcal{A}_{h}(i,j), \tag{4}\]
where \(N\) is the length of the input sequence \(\mathbf{x}\). Applied to \(\mathcal{A}_{h}\), the \(SM^{h}\) mask prunes its weak connections, whose weights are below the average attention weights \(\mu_{i}\) along the key axis.
During the experiment, we observed that using a common sparse mask combined along the head-axis outperforms applying \(SM^{h}\) to each head individually. In detail, we consider each head's activated positions for all the other heads; we define an adjusted sparse mask \(SM_{OR}:=\bigcup_{h=1}^{H}SM^{h}\) where \(H\) is the number of heads, and identically apply it to all attention heads. The \(SM_{OR}\) mask is used during both training and inference.
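The construction of Eqs. (2)-(4) together with the head-wise union \(SM_{OR}\) can be summarized in a few lines. The sketch below is a minimal PyTorch-style illustration written for clarity; tensor shapes and the function name are assumptions, and it is not the authors' implementation:

```python
# Minimal sketch of vanilla pruning (VP): per-head masks of Eqs. (2)-(4), combined by a
# logical OR over heads (SM_OR) and applied to the attention probabilities before V.
import torch

def vanilla_sparse_attention(q, k, v):
    """q, k, v: (batch, heads, seq_len, d_head)."""
    d = q.size(-1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)  # Eq. (2)
    mu = attn.mean(dim=-1, keepdim=True)        # Eq. (4): row-wise mean along the key axis
    sm_h = (attn >= mu).float()                 # Eq. (3): per-head hard mask SM^h
    sm_or = sm_h.amax(dim=1, keepdim=True)      # SM_OR: union of activated positions over heads
    return (attn * sm_or) @ v                   # prune weak connections, then weight the values

# toy usage
q = k = v = torch.randn(1, 2, 5, 8)
print(vanilla_sparse_attention(q, k, v).shape)  # torch.Size([1, 2, 5, 8])
```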
#### 3.1.2 Differentiable Pruning
In the vanilla pruning (VP) method, the threshold of \(SM^{h}\) is passively determined as the mean value of attention weights \(\mu_{i}\). However, the optimal threshold values vary depending on the number of layers, type of generation tasks, and degree of domain mismatch; thus, flexibly setting the threshold is preferable. To this end, we propose a novel differentiable pruning (DP) method with learnable thresholds, inspired by [22], which shares the same motivation in a natural language processing task.
Figure 1: Overview of StyleSpeech. The speaker representation is extracted from the reference encoder and provided to the encoder and decoder via Style-Adaptive Layer Normalization technique.
Fig. 2 illustrates the overview of DP. In contrast to VP, which uses the predefined threshold, we first define a hard sparse mask of \(h\)-th head \(SM_{hard}^{h}\) that inherits the learnable threshold \(\theta/N\):
\[SM_{hard(i,j)}^{h}=\begin{cases}1&\text{if }\mathcal{A}_{h}(i,j)\geq\theta/N\\ 0&\text{if }\mathcal{A}_{h}(i,j)<\theta/N\end{cases}, \tag{5}\]
where \(\theta\) is a trainable threshold parameter, and \(N\) is the length of the sequence used to adjust the threshold value based on variations in input length. However, because the process of obtaining the binary mask \(SM_{hard}^{h}\) is not differentiable, we cannot directly update \(\theta\) by gradient descent. To solve this problem, we additionally adopt a differentiable soft sparse mask \(SM_{soft}^{h}\) defined by a sigmoid function as follows:
\[SM_{soft}^{h}=\sigma\left(\frac{\mathcal{A}_{h}-\theta/N}{T}\right), \tag{6}\]
where \(T\) is the temperature set to \(0.01\) to approximate \(SM_{soft}^{h}\) to \(SM_{hard}^{h}\). The value of \(SM_{soft}^{h}\) is close to \(1\) where the attention weight is higher than the threshold \(\theta/N\) and is close to \(0\) in the opposite case.
We then propose a two-phase training method, summarized in Algorithm 1. In phase 1, the entire model is trained with original TTS loss terms \(\mathcal{L}_{tts}\) (i.e., loss terms of StyleSpeech [7]) using the soft sparse masks \(SM_{soft}\) to update model parameters including thresholds \(\theta\). We also add sparsity loss \(\mathcal{L}_{sp}\) as a regularization term to ensure pruning behavior, as shown below:
\[\mathcal{L}_{sp}=\frac{1}{LH}\sum_{l=1}^{L}\sum_{h=1}^{H}\left(\overline{SM} _{soft}^{h}-R\right)^{2}, \tag{7}\]
where the sparsity ratio \(R\) is a hyperparameter that indirectly determines the pruning strength; \(L\) denotes the number of transformer layers, and \(H\) represents the number of heads in each layer. Sparsity loss \(\mathcal{L}_{sp}\) is defined as the average \(L2\)-distance between the soft sparse mask's mean value \(\overline{SM}_{soft}^{h}\) and \(R\) across all attention heads and decoder layers. This loss term forces the model to be generalized to OOD data; without it, thresholds \(\theta\) do not converge to meaningful values during training. This is because if only \(\mathcal{L}_{tts}\) is used, the model obtains the lowest training loss value when _no connections are pruned_ (i.e., \(\theta\) is stuck in \(0\)) for _in-domain_ data. The value of \(R\) is set between 0 and 1, and a lower R value prunes more connections. In summary, two loss terms are used when updating \(\theta\): original TTS loss terms \(\mathcal{L}_{tts}\) and the regularization term \(\mathcal{L}_{sp}\). As mentioned previously, \(\mathcal{L}_{tts}\) pulls \(\theta\) down towards \(0\), while \(\mathcal{L}_{sp}\) aims to prevent this for generalization to OOD data. The threshold is consequently balanced by two opposing losses, pruning off self-attention connections only to the extent that it does not significantly harm the original objective of minimizing \(\mathcal{L}_{tts}\). Thus, the degree of generalization can be controlled by varying the sparsity ratio \(R\), while minimizing overgeneralization.
In phase 2, model parameters except \(\theta\) are updated using the hard sparse masks \(SM_{hard}\), whose thresholds \(\theta\) are learned in phase 1. Here, \(\mathcal{L}_{sp}\) is not used and the model is trained under the _hard_ pruning condition (low-weight connections are completely masked) with fixed pruning strength. This final model is used during inference.
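A compact sketch of the learnable threshold and the sparsity regularizer of Eqs. (5)-(7) is given below. The module interface, the scalar threshold per block, and the loss reduction are simplifying assumptions on our part; the actual procedure follows the paper's Algorithm 1 and configuration:

```python
# Sketch of differentiable pruning (DP): soft mask (phase 1), hard mask (phase 2/inference),
# and the sparsity loss that pulls the mean mask value towards the sparsity ratio R.
import torch
import torch.nn as nn

class DPMask(nn.Module):
    def __init__(self, temperature: float = 0.01):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(1))   # learnable threshold, initialised to 0
        self.temperature = temperature

    def forward(self, attn, hard: bool = False):
        """attn: (batch, heads, N, N) attention probabilities."""
        thr = self.theta / attn.size(-1)            # threshold theta/N, scaled by sequence length
        if hard:                                    # phase 2 and inference: Eq. (5)
            return (attn >= thr).float()
        return torch.sigmoid((attn - thr) / self.temperature)  # phase 1: Eq. (6)

def sparsity_loss(soft_masks, ratio: float = 0.45):
    """Eq. (7): average squared distance between each head's mean mask value and R."""
    per_head_means = torch.stack([m.mean(dim=(-2, -1)) for m in soft_masks])  # (layers, batch, heads)
    return ((per_head_means - ratio) ** 2).mean()
```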
## 4 Experiments
### Experimental Setup
**Dataset and Preprocessing.** We used two subset datasets from LibriTTS [24] (_train-clean-100_ and _train-clean-360_) to train our model, which contain 245 hours of speech from 1151 speakers. For inference, VCTK [25] (108 unseen speakers) dataset is used for zero-shot TTS. For the method of preprocessing text and speech, we followed StyleSpeech [7].
**Model Details.** We experimentally evaluated the performance of VP and DP considering StyleSpeech [7] as a baseline. Consistent with [7], the encoder and decoder comprise 4 FFT blocks [2] with 2 self-attention heads each. For DP, 4 threshold parameters \(\theta\) were declared in each decoder FFT block and were identically initialized to 0. The training configurations for all implemented models were set to be the same as in [7], except that the models were trained for 300k steps. In the case of DP, we advanced to phase 2 after training for 40k steps in phase 1. For evaluation, we used the HiFi-GAN V1 vocoder [26] to convert mel-spectrograms to audios. In addition, two references were used for comparison: 1) ground truth audios and 2) audios generated by HiFi-GAN V1 (Voc.) conditioned on ground truth mel-spectrograms.
Figure 2: Overview of the differentiable pruning. The soft sparse mask is used during phase 1 of training, and the hard sparse mask is used during phase 2 of training and during inference. The pruned attention head is multiplied with \(V_{h}\) (values of \(h\)-th head) for succeeding process of self-attention operation.
**Evaluation Metrics.** Regarding subjective metrics, mean opinion score (MOS) evaluates the naturalness of speech, and similarity MOS (SMOS) evaluates speaker similarity. Both metrics were scored on a scale of 1-5 by 16 raters, and we present them with 95% confidence intervals (CI). We used character error rate (CER) and speaker embedding cosine similarity (SECS) to evaluate intelligibility and speaker similarity as objective metrics. For CER, we transcribed the synthesized speech using the pre-trained speech recognition model provided by the SpeechBrain toolkit [27]. SECS is defined as the cosine similarity between speaker embeddings derived from the pre-trained speaker verification model [28] from [27]. Thus, MOS and CER assess speech quality, whereas SMOS and SECS assess similarity to the target speaker.
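Concretely, the SECS score above reduces to a cosine similarity between two fixed-dimensional speaker embeddings. A tiny illustration follows (the embeddings here are random stand-ins and the dimensionality is an assumption; in the paper they come from the pre-trained verification model of [28]):

```python
# SECS = cosine similarity between speaker embeddings of the reference and the synthesized speech.
import torch
import torch.nn.functional as F

emb_reference = torch.randn(1, 192)      # placeholder embedding (assumed dimensionality)
emb_synthesized = torch.randn(1, 192)    # placeholder embedding
secs = F.cosine_similarity(emb_reference, emb_synthesized).item()
print(f"SECS = {secs:.3f}")              # closer to 1.0 means more similar speaker characteristics
```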
### Evaluation on Zero-Shot TTS
For zero-shot TTS, we used arbitrary text input and randomly sampled one reference speech from each VCTK speaker for the reference encoder's input. 15 synthesized samples were used for MOS and SMOS, and 100 samples were used for CER and SECS.
From Table 1, we make the following observations: 1) The model with VP outperforms the baseline in all metrics except CER, demonstrating the generalization ability of the pruning method. 2) All models with DP remarkably surpass the baseline and the model with VP, particularly in terms of voice quality. 3) The results among models with DP show the trade-off relationship between pruning strength and performance. On the one hand, the model is successfully generalized by pruning more connections (\(R:0.50\to 0.45\)), resulting in a sharp increase in naturalness (+0.23 MOS). In contrast, excessive pruning (\(R:0.45\to 0.40\)) rather reduces the model's original modeling capacity (i.e., overgeneralization); it causes a slight degradation in overall performance in our experiment. Intuitively, pruning all connections is the same as removing the entire self-attention module.
In summary, we conclude that DP significantly improves zero-shot TTS performance. Owing to its ability to adjust pruning strength, the model is also scalable to different degrees of domain mismatch (e.g., small \(R\) in large domain mismatch).
### Ablation Study
Table 2 shows the results of the ablation studies related to the two DP design techniques. We chose DP with \(R=0.45\) as the baseline because it performs best in terms of naturalness and similarity. In the first experiment, we skipped the training phase 2 that uses the hard masks \(SM_{hard}\); we only used the soft masks \(SM_{soft}\) for training and inference. Results show that the two-phase training method is effective. Concretely, in phase 2, _hard_ pruning with updated thresholds improves the model's generalization performance by completely excluding low-weight connections during the text-to-mel conversion process. In the second experiment, we removed the regularization term \(\mathcal{L}_{sp}\), originally used in the training phase 1. Without \(\mathcal{L}_{sp}\), the model shows poor performance because pruning does not occur at all. We also discovered that the thresholds \(\theta\) were not updated from their initial value of \(0\), as noted in section 3.1.2.
### Analysis of Differentiable Pruning
To further analyze DP, we present the updated final thresholds \(\theta\) of models with DP in Table 3. As expected, a smaller \(R\) value generally leads to higher threshold values, indicating that more connections are pruned. Fig. 3 represents the pruned attention heads of these models using a specific text utterance and random reference speech. The previously mentioned relationship between \(R\) and pruning strength is also confirmed in the figure. Remarkably, the pruned TTS models use only a few self-attention connections for high synthesis quality, implying that DP prevents the decoder from overfitting to in-domain data and improves the generalization performance. More visualization materials are available on our demo page.
## 5 Conclusion
In this work, we proposed a self-attention pruning method for improving the generalization abilities of zero-shot multi-speaker TTS models. Furthermore, we investigated the optimal pruning techniques and emphasized the importance of differentiable pruning (DP), that can control the pruning strength augmented with the proposed two-phase training method. We then used it to generalize the mel-spectrogram decoder; evaluation on zero-shot multi-speaker TTS confirmed its superiority in terms of voice quality and speaker similarity. Future works include the application of DP for more severe domain mismatch cases.
## 6 Acknowledgement
This work was supported by Voice&Avatar, NAVER Cloud, Seongnam, Korea.
| **Model** | **MOS(\(\uparrow\))** | **SMOS(\(\uparrow\))** | **CER(\(\downarrow\))** | **SECS(\(\uparrow\))** |
| --- | --- | --- | --- | --- |
| Ground Truth | 4.76\(\pm\)0.07 | - | - | - |
| GT mel + Voc. | 4.67\(\pm\)0.08 | - | - | - |
| Baseline | 3.43\(\pm\)0.12 | 2.99\(\pm\)0.16 | 4.56 | 0.268 |
| VP | 3.46\(\pm\)0.12 | 3.10\(\pm\)0.15 | 5.17 | 0.275 |
| DP(\(R=0.50\)) | 3.53\(\pm\)0.12 | 3.18\(\pm\)0.15 | 3.96 | **0.279** |
| DP(\(R=0.45\)) | **3.76\(\pm\)0.11** | **3.23\(\pm\)0.15** | 3.96 | 0.278 |
| DP(\(R=0.40\)) | 3.75\(\pm\)0.12 | 3.20\(\pm\)0.16 | **3.73** | 0.276 |
Table 1: Comparisons of MOS, SMOS with 95% CI, CER, and SECS results of zero-shot TTS; we used the StyleSpeech framework as the baseline system. Note that VP and DP denote the vanilla and differentiable pruning techniques, respectively. For DP, we conducted 3 experiments by varying the sparsity ratio \(R\). The best performances are in boldface.
| **Model** | **MOS(\(\uparrow\))** | **SMOS(\(\uparrow\))** | **CER(\(\downarrow\))** | **SECS(\(\uparrow\))** |
| --- | --- | --- | --- | --- |
| DP(\(R=0.45\)) | **3.76\(\pm\)0.11** | **3.23\(\pm\)0.15** | **3.96** | **0.278** |
| w/o \(SM_{hard}\) | 3.65\(\pm\)0.11 | 3.02\(\pm\)0.16 | 4.21 | 0.274 |
| w/o \(\mathcal{L}_{sp}\) | 3.46\(\pm\)0.12 | 2.87\(\pm\)0.15 | 5.77 | 0.263 |
Table 2: MOS, SMOS with 95% CI, CER, and SECS results of ablation studies. The best performances are in boldface.
Figure 3: _Pruned attention heads for the utterance “How many attention connections are pruned?”. Samples are first heads of the fourth decoder layers and are generated from DP with (a) \(R=0.40\), (b) \(R=0.45\), and (c) \(R=0.50\)._ |
2305.03251 | Meta-Maintanance for Dockerfiles: Are We There Yet? | Docker allows for the packaging of applications and dependencies, and its
instructions are described in Dockerfiles. Nowadays, version pinning is
recommended to avoid unexpected changes in the latest version of a package.
However, version pinning in Dockerfiles is not yet fully realized (only 17k of
the 141k Dockerfiles we analyzed), because of the difficulties caused by
version pinning. To maintain Dockerfiles with version-pinned packages, it is
important to update package versions, not only for improved functionality, but
also for software supply chain security, as packages are changed to address
vulnerabilities and bug fixes. However, when updating multiple version-pinned
packages, it is necessary to understand the dependencies between packages and
ensure version compatibility, which is not easy. To address this issue, we
explore the applicability of the meta-maintenance approach, which aims to
distribute the successful updates in a part of a group that independently
maintains a common artifact. We conduct an exploratory analysis of 7,914
repositories on GitHub that hold Dockerfiles, which retrieve packages on GitHub
by URLs. There were 385 repository groups with the same multiple package
combinations, and 208 groups had Dockerfiles with newer version combinations
compared to others, which are considered meta-maintenance applicable. Our
findings support the potential of meta-maintenance for updating multiple
version-pinned packages and also reveal future challenges. | Takeru Tanaka, Hideaki Hata, Bodin Chinthanet, Raula Gaikovina Kula, Kenichi Matsumoto | 2023-05-05T02:33:45Z | http://arxiv.org/abs/2305.03251v1 | # Meta-Maintanance for Dockerfiles: Are We There Yet?
###### Abstract.
Docker allows for the packaging of applications and dependencies, and its instructions are described in Dockerfiles. Nowadays, version pinning is recommended to avoid unexpected changes in the latest version of a package. However, version pinning in Dockerfiles is not yet fully realized (only 17k of the 141k Dockerfiles we analyzed), because of the difficulties caused by version pinning. To maintain Dockerfiles with version-pinned packages, it is important to update package versions, not only for improved functionality, but also for software supply chain security, as packages are changed to address vulnerabilities and bug fixes. However, when updating multiple version-pinned packages, it is necessary to understand the dependencies between packages and ensure version compatibility, which is not easy. To address this issue, we explore the applicability of the meta-maintenance approach, which aims to distribute the successful updates in a part of a group that independently maintains a common artifact. We conduct an exploratory analysis of 7,914 repositories on GitHub that hold Dockerfiles, which retrieve packages on GitHub by URLs. There were 385 repository groups with the same multiple package combinations, and 208 groups had Dockerfiles with newer version combinations compared to others, which are considered meta-maintenance applicable. Our findings support the potential of meta-maintenance for updating multiple version-pinned packages and also reveal future challenges.
Dockerfile, version pinning, software supply chain, meta-maintenance
## 1. Introduction
Docker is a widely used containerization tool that allows developers to avoid the problem of _dependency hell_ by packaging applications and dependencies for each environment and running them in an isolated execution environment (Takeru Tanaka, Hideaki Hata, Bodin Chinthanet, Raula Gaikovina Kula, and Kenichi Matsumoto, 2023). However, previous empirical studies on Dockerfiles that describe the steps to build containers have revealed that dependency issues occur frequently (Kelly, 2014; Kelly, 2014).
Pinning to a specific version of a package is called _version pinning_ and is recommended to reproduce the expected environment when building containers at different times (Takeru Tanaka, Hideaki Hata, Bodin Chinthanet, Raula Gaikovina Kula, and Kenichi Matsumoto, 2023). Version pinning in a Dockerfile is expected to avoid build failures due to unexpected changes in the latest version of packages. Not limited to Dockerfiles, version pinning is a common way to ensure fine-grained control over software packages. This practice reduces the risk of changes, unknown bugs, and vulnerabilities that may only be present in new releases (Kelly, 2014). However, fixing dependencies prevents the adoption of important security and bug fixes. Thus, unless constantly updated to new versions of the constraints, projects will become less secure over time (Kelly, 2014). This problem manifests itself when Dockerfiles are used for long periods of time. Previous studies on Dockerfiles have shown that package version updates are insufficient, despite the fact that containers contain many old and vulnerable packages (Takeru Tanaka, 2014; Tanaka, 2014). Therefore, periodic version updates are essential, and the problem of dependencies reappears in Dockerfiles.
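For illustration, the following minimal Python sketch flags which GitHub download URLs in a Dockerfile pin an explicit release version; the regular expression, function name, and example Dockerfile are assumptions for this sketch and are not part of the study's tooling.

```python
import re

# Hypothetical helper: report which GitHub URLs in a Dockerfile pin an explicit
# release tag (version pinning) and which point at a moving reference.
PINNED = re.compile(r"github\.com/[\w.-]+/[\w.-]+/(?:releases/download|archive)/v?\d[\w.\-]*")

def audit_dockerfile(text: str):
    """Return (pinned, unpinned) GitHub URLs referenced in the Dockerfile text."""
    urls = re.findall(r"https?://github\.com/\S+", text)
    pinned = [u for u in urls if PINNED.search(u)]
    unpinned = [u for u in urls if u not in pinned]
    return pinned, unpinned

example = """\
FROM ubuntu:20.04
RUN curl -L https://github.com/some-org/tool/releases/download/v1.2.3/tool.tar.gz | tar xz
ADD https://github.com/other-org/lib/archive/master.tar.gz /opt/lib.tar.gz
"""
pinned, unpinned = audit_dockerfile(example)
print("pinned:", pinned)       # the v1.2.3 release URL
print("unpinned:", unpinned)   # the master archive URL
```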
As seen in the recent Apache Log4j vulnerability issue,1 software supply chain security has become essential and several community initiatives have emerged. To improve and secure the open source software supply chain, the Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, has announced the launch of the Alpha-Omega Project.2 This project aims to improve the security of the global open source software (OSS) supply chain by working with project maintainers to systematically find and fix new vulnerabilities in open source code. The "Alpha" supports the most important OSS projects, while the "Omega" targets at least
2303.15965 | SFHarmony: Source Free Domain Adaptation for Distributed Neuroimaging
Analysis | To represent the biological variability of clinical neuroimaging populations,
it is vital to be able to combine data across scanners and studies. However,
different MRI scanners produce images with different characteristics, resulting
in a domain shift known as the `harmonisation problem'. Additionally,
neuroimaging data is inherently personal in nature, leading to data privacy
concerns when sharing the data. To overcome these barriers, we propose an
Unsupervised Source-Free Domain Adaptation (SFDA) method, SFHarmony. Through
modelling the imaging features as a Gaussian Mixture Model and minimising an
adapted Bhattacharyya distance between the source and target features, we can
create a model that performs well for the target data whilst having a shared
feature representation across the data domains, without needing access to the
source data for adaptation or target labels. We demonstrate the performance of
our method on simulated and real domain shifts, showing that the approach is
applicable to classification, segmentation and regression tasks, requiring no
changes to the algorithm. Our method outperforms existing SFDA approaches
across a range of realistic data scenarios, demonstrating the potential utility
of our approach for MRI harmonisation and general SFDA problems. Our code is
available at \url{https://github.com/nkdinsdale/SFHarmony}. | Nicola K Dinsdale, Mark Jenkinson, Ana IL Namburete | 2023-03-28T13:35:10Z | http://arxiv.org/abs/2303.15965v1 | # SFHarmony: Source Free Domain Adaptation for Distributed Neuroimaging Analysis
###### Abstract
To represent the biological variability of clinical neuroimaging populations, it is vital to be able to combine data across scanners and studies. However, different MRI scanners produce images with different characteristics, resulting in a domain shift known as the 'harmonisation problem'. Additionally, neuroimaging data is inherently personal in nature, leading to data privacy concerns when sharing the data. To overcome these barriers, we propose an Unsupervised Source-Free Domain Adaptation (SFDA) method, SFHarmony. Through modelling the imaging features as a Gaussian Mixture Model and minimising an adapted Bhattacharyya distance between the source and target features, we can create a model that performs well for the target data whilst having a shared feature representation across the data domains, without needing access to the source data for adaptation or target labels. We demonstrate the performance of our method on simulated and real domain shifts, showing that the approach is applicable to classification, segmentation and regression tasks, requiring no changes to the algorithm. Our method outperforms existing SFDA approaches across a range of realistic data scenarios, demonstrating the potential utility of our approach for MRI harmonisation and general SFDA problems. Our code is available at [https://github.com/nkdinsdale/SFHarmony](https://github.com/nkdinsdale/SFHarmony).
## 1 Introduction
Deep learning (DL) models have proved to be powerful tools for neuroimage analysis. However, the majority of neuroimaging datasets remain small, posing a challenge for the training of sophisticated architectures with many parameters. Thus, it is common practice to combine data from multiple sites and MRI scanners, both to increase the amount of data available for training, and to represent the breadth of biological variability that can be expected in diverse populations. However, the combination of data across MRI scanners with different acquisition protocols and hardware leads to an increase in non-biological variance [25, 26, 50], which can be large enough to mask the biological signals of interest [49], even after careful pre-processing with state-of-the-art neuroimaging pipelines [23]. The development of _harmonisation_ methods is therefore vital to enable the joint unbiased analysis of neuroimaging data from different scanners and studies.
The key goal for harmonisation methods is to be discriminative for the main task of interest whilst creating shared feature representations of the data across acquisition scanners, clearly mirroring the goal of domain adaptation (DA) [15]. The majority of deep learning based harmonisation methods are based on DA methods, either using adversarial approaches to create shared feature embeddings [15, 24], or using generative approaches to create harmonised images [11, 63].
However, the vast majority of existing methods fail to be applicable in many realistic data scenarios. For example, MR images are inherently personal information so their sharing is protected by legislation, such as GDPR [9] and HIPAA [39]. Thus, the assumption of centralised data stores for model training is infeasible, particularly when working with clinical imaging data, which will be essential in order to produce representative models [14, 52]. Distributed learning offers a promising solution, but the few proposed distributed harmonisation methods [8, 16] assume the simultaneous presence of the source and target data. The source data may not be available for the adaptation phase, for instance, due to confidentiality agreements, loss of the source
data, or computational constraints [3]. Further, federated DA methods such as [16] would require retraining of the model to incorporate any new sites, which is infeasible and computationally expensive.
Therefore, we explore an unsupervised DA setting where only the source model, instead of the source data, is provided to the unlabelled target domain for harmonisation, known as Source Free Domain Adaptation (SFDA). This setting inherently protects individual privacy, whilst allowing the efficient incorporation of new sites without requiring target labels. We propose a simple yet effective solution, termed SFHarmony, which aims to match feature embeddings from the source and target, through characterising the embeddings as a Gaussian Mixture model (GMM) and the use of a modified Bhattacharyya distance [4]. This requires no modifications to the training of the source model, and the only additional communication is of summary statistics of the source feature embedding, allowing it to be simply applied to existing architectures. The summary statistics contain no information about individuals.
Our contributions are as follows: 1) We propose a new method for SFDA, SFHarmony, based on aligning feature embeddings, utilising a modified Bhattacharyya distance, requiring no changes to source training; 2) We demonstrate the method's applicability to classification, segmentation and regression tasks, and show that the approach outperforms existing SFDA methods for domain shifts experienced when working with neuroimaging data; 3) We demonstrate the robustness of the method to additional challenges likely to be faced when working with real world imaging data: differential privacy and label imbalance.
## 2 Related Work
**Unsupervised Domain Adaptation (UDA):** UDA aims to exploit the knowledge learned from a source dataset to help to create a discriminative model for a related but unlabelled target dataset [33]. DL-based UDA approaches can broadly be split into three categories [33]: discrepancy-based, reconstruction-based, and adversarial. Discrepancy based approaches aim to minimise a divergence criterion, which measures the distance between the source and target data distributions encoded in a learned feature space [10, 27, 35, 48]. Reconstruction-based approaches, instead, use reconstruction as a proxy task to enable the learning of a shared representation for both image domains [5, 22, 38]. Finally, adversarial approaches deploy a discriminator that aims to identify the source of the data; the model is trained both to do the task and to trick the discriminator, creating domain invariant features [21, 51]. These methods all assume simultaneous access to the source and target data, which poses data privacy challenges.
**Federated Learning and Domain Adaptation:** Federated learning (FL) has been proposed as a method to train models on distributed data [36]. The data are kept on their local servers, and users train local models with private data and communicate the weights or gradients between sites for aggregation. Many FL approaches focus on minimising the impact of distribution shifts between clients [28, 45, 55]; however, most assume that the data at all sites are fully labelled. In contrast, federated DA enables the incorporation of an unlabelled site into the federation without sharing data. FADA [40] is a federated DA method, where features are shared between sites in a global knowledge store. The sharing of features, however, still poses privacy concerns as images may be recoverable from the features [19]. Thus, FedHarmony [16] instead encodes the features as Gaussian distributions, so that only the mean and standard deviation of each feature need to be shared. Both of these methods still assume access to the source data during training and rely on adversarial approaches that are often unstable and hard to train. Other federated DA methods produce domain-specific models or ensembles [19, 41, 59, 61], meaning that the final predictions depend on the domain of the data.

Figure 1: Schematic of the proposed SFHarmony method. The method fits a GMM to the source features, shares these via a global model store, and then completes SFDA by aligning the source and target feature distributions utilising a modified Bhattacharyya distance. \(\mathbf{Q}^{s}\) is the source feature representation and \(\mathbf{Q}^{t}\) is the target feature representation, and the figure shows the GMM setup, for a single feature \(i\), when we are working with \(K\), the number of components, being 2. This trivially generalises to more or less components.
**Source Free Domain Adaptation:** SFDA takes the federated approach a step further and assumes that there is no access to the source data available at all: only the source model is available for model adaptation. The majority of SFDA methods have been developed for classification [1, 13, 32, 33, 43, 60, 17], with a few being proposed for segmentation [30, 31, 34, 44, 57]. There are two main approaches taken for SFDA. The first set of approaches are generative, aiming to create source samples using the source model weights [32, 34]. These approaches, however, pose concerns about individual privacy, especially when working with medical images and low numbers of samples [19] and cannot be simply applied to complex target tasks, limiting their utility when working with MRI data [53]. The second set aim to minimise model entropy to improve predictions, guided by various pseudo labelling or uncertainty techniques to prevent mode collapse [1, 13, 17, 29, 33, 43, 60]. These methods are often effective, but largely limited to classification tasks, and may require changes to the source model training to be effective [33, 60]. AdaMI [3] was proposed directly for medical image segmentation, but requires an estimate of the proportion of each label to prevent mode collapse. This ratio is hard to estimate for labels with high variability across populations, such as tumours or lesions. We could not identify any methods proposed for regression, where the lack of softmax outputs limit the direct application of methods based on entropy minimisation.
**Harmonisation:** Many existing harmonisation approaches are based on COMBAT [20, 7, 42], which uses a linear model to represent the scanner effects on image-derived features. DL-based approaches for harmonisation generally utilise a DA approach, with many being generative, aiming to produce 'harmonised' images [6, 11, 37, 62, 37], while the other branch uses adversarial approaches to harmonise the learned model features for a given task [15, 24]. All of these methods assume simultaneous access to the source and target data, with some even requiring paired data [11]. The only existing methods for harmonisation which consider data privacy are Distributed COMBAT [8] and FedHarmony [16]; however, both assume constant communication with the source site.
## 3 Method
The aim of this work is to create a SFDA method applicable to neuroimaging tasks, and to demonstrate its suitability for MRI harmonisation. Thus, the goal is to create a model where two images with the same label would share a feature embedding, regardless of the acquisition scanner - the domain of the data. We thus follow the framework of [51] and consider the network to be formed of a feature extractor, with parameters \(\mathbf{\Theta}_{repr}\), and a label predictor, \(\mathbf{\Theta}_{p}\). This network architecture is the same across source and target sites. The general schematic for training is shown in Fig. 1.
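The following minimal PyTorch sketch illustrates this split of the network into a feature extractor and a label predictor; the layer sizes are illustrative placeholders (matching the 28×28, 11-class OrganAMNIST setting described later) rather than the exact backbone used in the paper.

```python
import torch.nn as nn

class SplitNet(nn.Module):
    """Illustrative backbone split into a feature extractor (Theta_repr) and a
    label predictor (Theta_p). Only Theta_repr is finetuned during adaptation,
    while Theta_p stays frozen at its source-trained values."""
    def __init__(self, n_features=32, n_classes=11):
        super().__init__()
        self.repr = nn.Sequential(                           # Theta_repr
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )
        self.predictor = nn.Linear(n_features, n_classes)    # Theta_p

    def forward(self, x):
        q = self.repr(x)      # features Q, taken before any activation
        return self.predictor(q), q
```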
### Creation of the Source Model
The first stage is the training of the source model. This assumes the availability of a labelled training dataset \(D^{s}=\{\mathbf{X}^{s},\mathbf{y}^{s}\}\), where the image and label pairs depend on the task of interest. Unlike some existing methods [33, 60], our proposed approach requires no changes to the training of the source model or to the architecture. The model can thus be flexibly trained following the standard training procedure for the source data, with the goal being to create a well-trained source model. In our experiments, we consider the simplest source training, minimising a loss function (\(L_{task}\)) dependent on the task of interest with full supervision:
\[L(\mathbf{X}^{s},\mathbf{y}^{s};\mathbf{\Theta}^{s}_{repr},\mathbf{\Theta}^{s}_{p})= \frac{1}{N_{s}}\sum_{i}^{N_{s}}L_{task}(\mathbf{X}^{s}_{i},\mathbf{y}^{s}_{i}) \tag{1}\]
where \(N_{s}\) is the total amount of labelled source data.
### Global Information Store
For successful SFDA, we need to align the learned feature embedding, \(\mathbf{Q}^{s}=f(\mathbf{X}^{s},\mathbf{\Theta}^{s}_{repr})\) for the source and target data. To achieve this without requiring the source data, we propose to follow the precedent of existing privacy-preserving medical imaging approaches [8, 16, 17] and, thus, create a global knowledge store to share summary statistics of the features. In [16], it is proposed that the features can be encoded as Gaussian distributions, and thus the statistics to be shared would be a mean and standard deviation per feature. We hypothesise that for many tasks, especially classification tasks with discrete categories, simple Gaussian distributions are unlikely to sufficiently characterise \(\mathbf{Q}^{s}\). We thus propose to describe the features using a Gaussian mixture model (GMM), with each feature being encoded as an independent 1D GMM, such that, for feature \(i\in N_{Q^{s}}\), where \(N_{Q^{s}}\) is the number of features in \(\mathbf{Q}^{s}\):
\[\mathbf{Q}^{s}_{i}\sim\sum_{k=1}^{K}\mathbf{\pi}^{s}_{k,i}\mathcal{N}(\mathbf{X}^{s}; \mathbf{\mu}^{s}_{k,i},\mathbf{\sigma}^{s^{2}}_{k,i}) \tag{2}\]
where \(K\) is the number of components in the GMM, \(\mathbf{\mu}^{s}_{k,i}\) and \(\mathbf{\sigma}^{s^{2}}_{k,i}\) are the mean and variance defining the \(k^{th}\) Gaussian component of the \(i^{th}\) feature for the source site, and \(\mathbf{\pi}^{s}_{k,i}\) is the weighting factor for this \(k^{th}\) Gaussian (the weights sum to one across components). Note that the features are considered before the activation function. The same number of components, \(K\), is fit for all features.
Thus, the GMM for feature \(i\) is defined by the parameters:
\[\mathbf{\Theta}^{s}_{i}=\{\mathbf{\pi}^{s}_{k,i},\mathbf{\mu}^{s}_{k,i},\mathbf{\sigma}^{s^{2}}_ {k,i}\},k=1..K \tag{3}\]
and these parameters can be determined using Expectation Maximisation (EM), by finding the maximum likelihood estimate (MLE) of the unknown parameters:
\[\mathcal{L}(\Theta_{i})=\sum_{n=1}^{N_{s,i}}\log(\sum_{k=1}^{K}\mathbf{\pi}^{s}_{k,i}\mathcal{N}(\mathbf{X}^{s}_{n};\mathbf{\mu}^{s}_{k,i},\mathbf{\sigma}^{s^{2}}_{k,i})) \tag{4}\]
for each feature \(i\) in \(\mathbf{Q}^{\mathbf{s}}\). This, therefore, produces three parameter arrays that fully define the GMMs of the source features, which are communicated alongside the source weights to target sites:
\[\mathbf{\Theta}_{Q^{s}}=\{\mathbf{\mu}^{s}\in\mathbb{R}^{K\times N_{Q^{s}}};\mathbf{\sigma }^{s^{2}}\in\mathbb{R}^{K\times N_{Q^{s}}};\mathbf{\pi}^{s}\in\mathbb{R}^{K\times N _{Q^{s}}}\}. \tag{5}\]
These parameters contain no individually identifying information, as they represent aggregate statistics across the whole population.
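As an illustrative sketch of how the shared statistics in Eq. (5) could be computed with an off-the-shelf EM implementation (scikit-learn here; the function and variable names are ours, not the released code):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_source_gmms(Q_s: np.ndarray, K: int = 2):
    """Fit an independent 1D GMM to each feature of the source embedding Q_s
    (shape [n_samples, n_features]) and return the statistics shared with
    target sites: pi, mu, sigma2, each of shape [K, n_features]."""
    n_features = Q_s.shape[1]
    pi = np.zeros((K, n_features))
    mu = np.zeros((K, n_features))
    sigma2 = np.zeros((K, n_features))
    for i in range(n_features):
        gmm = GaussianMixture(n_components=K).fit(Q_s[:, i:i + 1])
        order = np.argsort(gmm.means_.ravel())     # order components by mean
        pi[:, i] = gmm.weights_[order]
        mu[:, i] = gmm.means_.ravel()[order]
        sigma2[:, i] = gmm.covariances_.ravel()[order]
    return pi, mu, sigma2
```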
### Target Model Adaptation
Given that we now have a well trained source model, with parameters \(\mathbf{\Theta}^{s}_{repr}\) and \(\mathbf{\Theta}^{s}_{p}\), and the source GMM parameters, \(\mathbf{\Theta}_{Q^{s}}\), we can now adapt the model at any target site. We assume access to an unsupervised target, with only data samples \(\mathbf{X}^{t}\) and no labels available.
We initialise the target model using the source trained weights. Model adaptation only involves finetuning the feature extractor to match the learned feature distribution across the two sites. In adversarial approaches, a discriminator is added to the overall architecture that aims to distinguish between source and target samples. We could utilise this approach, following [16], by drawing feature samples, using the source GMM parameters \(\mathbf{\Theta}_{Q^{s}}\), but adversarial approaches are notoriously unstable and difficult to train. We therefore, instead, propose to minimise the difference between the source feature distribution and target feature distribution using the GMM parameters directly.
Therefore, the first step of model adaptation is to calculate the current target features, \(\textbf{Q}^{t}=f(\textbf{X}^{t},\mathbf{\Theta}^{t}_{repr})\), and then, using the same EM approach as above, we can create the parameters of the target GMM fit:
\[\mathbf{\Theta}_{Q^{t}}=\{\mathbf{\mu}^{t}\in\mathbb{R}^{K\times N_{Q^{t}}};\mathbf{\sigma }^{t^{2}}\in\mathbb{R}^{K\times N_{Q^{t}}};\mathbf{\pi}^{t}\in\mathbb{R}^{K\times N _{Q^{t}}}\}. \tag{6}\]
We propose to use a modified Bhattacharyya distance [4] as the loss function. The Bhattacharyya distance measures the similarity of two probability distributions, which for continuous probability distributions is defined as:
\[D_{B}(p,q)=-\ln(BC(p,q)) \tag{7}\]
where
\[BC(p,q)=\int_{x}\sqrt{p(x)q(x)}dx. \tag{8}\]
The Bhattacharyya distance has a simple closed form solution when the two probability distributions are both Gaussian. If \(p\sim\mathcal{N}(\mu_{p},\sigma_{p}^{2})\) and \(q\sim\mathcal{N}(\mu_{q},\sigma_{q}^{2})\) then:
\[D_{B}(p,q)=\frac{1}{4}\frac{(\mu_{p}-\mu_{q})^{2}}{\sigma_{p}^{2}+\sigma_{q}^{ 2}}+\frac{1}{2}\ln(\frac{\sigma_{p}^{2}+\sigma_{q}^{2}}{2\sigma_{p}\sigma_{q}}). \tag{9}\]
There is, however, no equivalent closed form solution for a GMM. In [46] they propose an approximation for the GMM as a sum of the Bhattacharyya distances for each pair of Gaussians in the mixture model, weighted by the associated \(\mathbf{\pi}\) values. We suggest that this is not the most appropriate reformulation: we are more interested in the corresponding pairs of Gaussians than in the cross-relationships, as we do not wish to minimise the difference between cross pairs. Rather, we wish specifically to make the target distribution match the source. Thus, if we consider our target and source GMM distributions, parameterised by \(\mathbf{\Theta}_{Q^{t}}\) and \(\mathbf{\Theta}_{Q^{s}}\), we propose to use the following approximation:

\[D_{GMM}(\mathbf{\Theta}_{Q^{s}},\mathbf{\Theta}_{Q^{t}})=\sum_{k=1}^{K}\mathbf{\pi}^{s}_{k}\mathbf{\pi}^{t}_{k}\Big{(}\frac{1}{4}\frac{(\mathbf{\mu}^{s}_{k}-\mathbf{\mu}^{t}_{k})^{2}}{\mathbf{\sigma}^{s^{2}}_{k}+\mathbf{\sigma}^{t^{2}}_{k}}+\frac{1}{2}\ln(\frac{\mathbf{\sigma}^{s^{2}}_{k}+\mathbf{\sigma}^{t^{2}}_{k}}{2\mathbf{\sigma}^{s}_{k}\mathbf{\sigma}^{t}_{k}})\Big{)} \tag{10}\]
such that we find the weighted sum of the Bhattacharyya distances between each corresponding pair of Gaussians, where \(k\) is the component in the GMM. It can further be seen that this approximation retains the desirable property that when \(\mathbf{\Theta}_{Q^{s}}=\mathbf{\Theta}_{Q^{t}}\), then \(D_{GMM}(\mathbf{\Theta}_{Q^{s}},\mathbf{\Theta}_{Q^{t}})=0\). The correspondence between Gaussians can be ensured by simply ordering the parameters by the mean estimates.
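A minimal PyTorch sketch of Eq. (10) follows, assuming both parameter sets have already been sorted by their component means; the small epsilon terms and the averaging over features are additions of this sketch for numerical stability, not prescribed by the text.

```python
import torch

def d_gmm(pi_s, mu_s, var_s, pi_t, mu_t, var_t, eps=1e-8):
    """Approximation of Eq. (10): weighted sum over corresponding GMM components
    of the closed-form Gaussian Bhattacharyya distance, averaged over features.
    All inputs have shape [K, n_features] and are sorted by component mean."""
    bc = (0.25 * (mu_s - mu_t) ** 2 / (var_s + var_t + eps)
          + 0.5 * torch.log((var_s + var_t + eps) / (2 * torch.sqrt(var_s * var_t) + eps)))
    per_feature = (pi_s * pi_t * bc).sum(dim=0)   # sum over the K components
    return per_feature.mean()                     # average over the features in Q
```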
Thus, the feature extractor is finetuned for the target site by minimising \(D_{GMM}\) averaged across all of the features in \(\mathbf{Q}^{s}\). However, for each training iteration, only a fixed size batch is available to estimate the parameters, and for neuroimaging applications the maximum batchsize achievable is often small due to the relatively large image size [14], which affects the estimate of the GMM parameters. As EM is sensitive to initialisation, to mitigate the small batch effect, we initialise the EM algorithm only once per training epoch, using the previous batch estimate as the initialisation for the next, providing memory between batches. The EM algorithm is reinitialised for validation (needed to calculate validation loss), preventing data leakage.
### Inference Time
Finally, inference for the test data simply involves combining the finetuned feature encoder, \(\mathbf{\Theta}_{repr}^{t}\), and the frozen source label predictor, \(\mathbf{\Theta}_{p}^{s}\), such that \(\mathbf{\hat{y}}^{t}=f(\mathbf{X}^{t},\mathbf{\Theta}_{repr}^{t},\mathbf{\Theta}_{p}^{s})\). This therefore ensures that, given data from the source or target domain with the same feature embedding, the same label prediction is achieved across sites.
## 4 Experimental Results
To validate the effectiveness of our SFDA framework, we conduct a range of experiments with both simulated data with known domain shifts and real multisite MRI datasets, and we demonstrate the applicability of the method to classification, segmentation and regression tasks.
### Datasets:
Further details for each dataset and model architectures are available in the Supplementary Materials.
**OrganAMNIST**[56] (Classification): curated as part of MedMNIST [58], we use OrganAMNIST as a test dataset. All images were pre-processed to 28 \(\times\) 28 (2D) with the corresponding classification labels for 11 classes. We created simulated known domain shifts, to enable exploration of the method, with the strength of each shift designed to be such that a degradation in performance was seen across the sites. The dataset was split into 5 sets, each with 5000 samples for training and 2000 for testing and the following domain shifts applied: 1) no shift (source site), 2) decreased intensity range, 3) increased intensity range, 4) Gaussian blurring, 5) salt and pepper noise, to model shifts likely across imaging sites. The backbone architecture took the form of a small VGG-like classifier, with categorical crossentropy as the task loss. Code to reproduce the data is provided.
**CC359**[47] (Segmentation): The dataset consists of brain images of healthy adults (29-80 years) acquired on MRI scanners from three vendors: Siemens, Philips and GE, at both 1.5 and 3T, with approximately 60 subjects per vendor and magnetic field strength. A 2D UNet was trained on slices from each site, then the performance when applied to the remaining sites was compared. As a result, the Philips 1.5T was chosen as the source site as it had the largest performance drop. No additional preprocessing was applied to the images apart from image resizing so that each subject volume was \(128\times 240\times 160\). The data were split at the subject level per site, such that 40 subjects were available for training and 20 for testing. The segmentation task was skull stripping, using masks from the original study, and Dice loss was used as the task loss function. Further details and example segmentation masks can be found in the Supplementary Material.
**ABIDE**[12] (Segmentation and Regression): Four sites (Trinity, NYU, UCLA, Yale) were used, so as to span age distributions and subject numbers. The data were split into training/test sets as 80%/20%, yielding a maximum of 127 subjects for training (NYU) and a minimum of 35 (Trinity). NYU was the largest site, spanning the age distribution of all of the other sites, and so was chosen as the source site. For segmentation, we considered tissue segmentation (grey matter (GM), white matter (WM), CSF), using labels automatically generated using FSL ANAT. We used a 2D UNet trained on slices with Dice as the main task loss function.
\begin{table}
\begin{tabular}{|l|c c c|c c|c c c|} \hline \multicolumn{2}{|c|}{Method} & S & T & C & \multicolumn{2}{c|}{Information} & \multicolumn{4}{c|}{Average Accuracy} \\ \cline{6-9} & & & & \multicolumn{2}{c|}{Communicated} & Batchsize 5 & Batchsize 50 & Batchsize 500 \\ \hline Source Model & ✓ & x & x & - & & \multicolumn{2}{c|}{80.71} \\ \hline Centralised Data & ✓ & ✓ & ✓ & All Data & 88.34 & 91.65 & 91.27 \\ Target Finetune & x & ✓ & ✓ & Model Weights & 88.15 & 88.96 & 83.26 \\ \hline DeepCORAL [48] & ✓ & x & ✓ & All Data & 82.43 & 83.85 & 83.65 \\ FADA [40] & ✓ & x & x & Model Weights + Features & 81.69 & 76.77 & 76.53 \\ FedHarmony [16] & ✓ & x & x & Model Weights + Statistics & 81.48 & 76.12 & 76.20 \\ \hline Minimise Entropy & x & x & x & Model Weights & 42.59 & 83.54 & 83.96 \\ SHOT [33] (no smoothing) & x & x & x & Model Weights & 66.86 & 83.60 & 85.40 \\ SHOT [33] (Source batchsize 5) & x & x & x & Model Weights & 72.06 & 74.44 & 74.61 \\ SHOT [33] (Source batchsize 500) & x & x & x & Model Weights & 83.10 & 84.68 & 85.27 \\ gSFDA [60] & x & x & x & Model Weights & 60.57 & 85.87 & 84.67 \\ USFAN [43] & x & x & x & Model Weights & 26.94 & 79.83 & 83.79 \\ \hline SFHarmony 1 GMM Component & x & x & x & Model Weights + Statistics & 85.47 & 85.71 & 86.16 \\ w/o EM (Direct Fit) & x & x & x & Model Weights + Statistics & 77.26 & 76.99 & 86.03 \\ w/o Batch Memory & x & x & x & Model Weights + Statistics & 80.25 & 82.13 & 84.60 \\ SFHarmony 2 GMM Components & x & x & x & Model Weights + Statistics & **86.22** & **86.25** & **86.21** \\ SFHarmony 3 GMM Components & x & x & x & Model Weights + Statistics & 86.21 & 85.70 & 85.96 \\ \hline \end{tabular}
\end{table}
Table 1: Results on the OrganAMNIST classification task. S = Source data required, T = Target labels required, C = Centralised data. The average accuracy is across all 5 sites, weighted equally, and is reported for training batchsizes of 5, 50 and 500. Best SFDA method for each batchsize is in bold, other methods are included for reference. The w/o (without) components form an ablation study.
Dice score was averaged across the three tissues. For age prediction, a separate network was trained, following the setup and architecture in [16], with MSE as the main task loss. Further details and example labels can be found in the Supplementary Material.
**Implementation Details:** All comparison methods used the same task-specific backbone architecture as the proposed method. Features were extracted in the second-to-last layer, before the activation function. Model architectures were chosen to give good source performance while allowing the use of large batchsizes, but most standard architectures could be used. Training was completed on an A10 GPU, using PyTorch 1.12.0. All models were trained with five-fold cross validation and results are presented on the hold out test set. A learning rate of \(1\times 10^{-6}\) was used for all datasets for adaptation with an AdamW optimiser.
### Classification: OrganAMNIST
**Baselines:** For the classification task, we first compare our approach to supervised oracles: source model only, centralised data, and target finetuning with frozen label predictor. We then compare to DeepCORAL [48], and two federated DA approaches: FADA [40] and FedHarmony [16], both of which require the presence of the source data. Finally, we compare to SFDA methods: entropy minimisation; SHOT [33], USFAN [43], and gSFDA [60]. We do not compare to any generative SFDA methods, as the ability to create source data would not meet privacy requirements for many applications [53], especially given that GANs often replicate training images when trained with small datasets [19]. Details are provided in the Supplementary Material.
**Methods Comparison:** We first demonstrate the method for a range of batchsizes (5, 50 and 500) because methods that minimise entropy are expected to be more stable when using large batchsizes, which are rarely achievable when working with MR images due to the memory constraints posed by large image sizes [14]. Thus, robustness to the batchsize is vital if a SFDA method is to be used for harmonisation. We use a single source model to allow fair comparison, trained with a batchsize of 50. We wish to maximise performance across all sites: as harmonisation is normally framed as a joint domain adaptation problem [15], the average performance across all sites is reported.
The results can be seen in Table 1, alongside the baseline methods. It can be seen that SFHarmony outperforms the existing SFDA methods, especially when a small batchsize was used for training (86.22% for batchsize 5). SHOT [33] showed comparable performance to SFHarmony when trained with a batchsize of 500 (85.27%), but was highly dependent on the modified source training. Interestingly, several of the SFDA approaches outperformed the adversarial approaches despite them having access to the source data, possibly due to the instability of such approaches.
The proposed \(D_{GMM}\) loss is clearly able to align the features across sites using only the GMM summary statistics. This is demonstrated by Fig. 2, which shows the source and target features for each site before and after DA. Clearly the features overlap much more after DA, which both leads to the clear improvement in performance, and shows that the approach is achieving the harmonisation goals of the model having a shared feature embedding across sites.
We tried modelling the features with \(K\in\{1,2,3\}\) GMM components: visual inspection of the features suggested that at least 2 components would be beneficial. This was confirmed by the results, with the best performance being achieved when modelling the features with 2 components, as shown in Table 1. However, the approach still performed well for 1 and 3 components, showing limited sensitivity to the number of components chosen. The number of components chosen is the only additional hyperparameter to be tuned with our approach, with only a single loss function to minimise. The results clearly show the robustness to the choice of batchsize, and the results were also robust to the choice of learning rate, with the accuracy staying within \(1\%\) of the best result across learning rates from \(10^{-7}\) to \(10^{-4}\). Therefore, deployment of the proposed approach requires no changes for a new site, only the choice of the number of components for a new source model. This is in contrast to many existing SFDA approaches that require balancing several loss functions (e.g. [3, 33]).

Figure 2: PCA of \(\mathbf{Q}_{s}\) and \(\mathbf{Q}_{t}\) for each target site, before and after domain adaptation for the OrganAMNIST data, with simulated domain shifts. Black dots are the source features which are fixed and the colour represents the features for the relevant site. (Best viewed in colour.)
**Ablation Study:** We considered the GMM with \(K=1\), allowing us to explore the w/o EM case, where \(\mathbf{\mu}\) and \(\mathbf{\sigma}^{2}\) are calculated directly. We also considered removing the batch memory across the training loop, reinitialising the EM algorithm before each batch. From Table 1 it is clear that both aspects are contributing to the performance, especially for small batchsizes.
**Class Imbalance:** In the above experiments, the distribution of class labels was approximately equal across sites. We now consider the extreme scenario where the source site contains samples from across all classes but target sites are missing classes. This is a conceivable scenario when considering MR images, where a given clinical site specialises in a certain condition and we are trying to harmonise the data to a carefully curated research dataset. Figure 3 shows the average accuracy across sites, when the target sites had samples from an increasing number of classes removed. Each comparison method was trained using the best setting from Table 1. The proposed SFHarmony approach was more robust to the increased class imbalance than existing SFDA methods.
**Differential Privacy (DP):** Finally, we considered simulating the approach when DP is being used to further protect privacy. We simply simulated a Laplace mechanism of DP [18], by injecting noise onto the weights before communication, modelled as: \(\mathbf{w}=\mathbf{w}+Lap(|\mathbf{w}|f)\) where \(f\) was varied to create increasing levels of noise. As the GMM is fit at the local site, \(\mathbf{\Theta}_{Q^{s}}\) can be calculated before the noise is applied. The comparison methods were again all trained using the best setting from Table 1. Although this is a very simple model of DP, with many more sophisticated approaches existing, Fig. 4 demonstrates that many existing methods for SFDA are very sensitive to the applied noise. SHOT [33] is the most dramatically affected, with the pseudo-labelling approach suffering a significant degradation in performance. Our proposed approach maintained performance well across the applied noise levels, despite the frozen label predictor imposing a ceiling on performance.
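A minimal sketch of this noise model is given below; the skipping of integer buffers and the small constant added to the scale are assumptions of the sketch, not details stated in the text.

```python
import torch

def laplace_perturb(state_dict, f):
    """Simple Laplace mechanism as used in the robustness experiment:
    each weight w is replaced by w + Lap(scale = |w| * f) before communication."""
    noisy = {}
    for name, w in state_dict.items():
        if not torch.is_floating_point(w):
            noisy[name] = w                    # leave integer buffers untouched
            continue
        scale = w.abs() * f + 1e-12            # strictly positive scale
        noise = torch.distributions.Laplace(torch.zeros_like(w), scale).sample()
        noisy[name] = w + noise
    return noisy
```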
### Segmentation: CC359 and ABIDE datasets
We now demonstrate our approach on two multisite MRI datasets for segmentation tasks: brain extraction (CC359) with two labels (brain/background) and tissue segmentation (ABIDE) with four labels (WM, GM, CSF, background).
**Baselines:** There are far fewer existing methods for SFDA, and we again did not compare to generative approaches. Thus, we compared to supervised oracles: source model only, centralised data and target finetuning with frozen label predictor; semisupervised approaches: DeepCORAL [48], FADA [40] and FedHarmony [16]; then for SFDA approaches we compared to minimising entropy, AdaEnt [2] and AdaMI [3], and Direct Fit (w/o EM in ablation study). We were unable to train DeepCORAL with a batchsize of more than 5 due to memory constraints.
**Methods Comparison:** Table 2 shows the results for both tasks. In the classification task there were only \(32\) features in the fully connected layer; however, now there are many more, for instance for the CC359 data there are \(65536\) features across all of the convolutional filters. Despite this increase in features, SFHarmony was able to complete the DA for both segmentation tasks, leading to an improved Dice score over the existing methods, across the batchsizes considered. Again, the existing SFDA methods were very sensitive to batchsize, and AdaMI [3] was also sensitive to the choice of tissue ratio prior: as we were completing the segmentation tasks on 2D slices, different slices had varying amounts of the target label present and we had to create a prior that was dependent on slice depth to achieve reasonable performance. The ABIDE tissue segmentation task was more challenging, as can be seen by the comparatively lower Dice Scores, especially due to the large imbalance in tissues, which affected the performance of AdaMI.

Figure 3: Average accuracy across the sites with increasing numbers of classes removed from the target site training, creating increasingly imbalanced data distributions. The x axis shows the classes that were removed.

Figure 4: Average accuracy across the sites with increasing magnitudes of noise injected into the source weights before communication. Amplitude is as a proportion of the source weights magnitude.
No changes needed to be made to the approach compared to the classification task, including the learning rate, showing the generalisability of the method across tasks.
### Regression: ABIDE dataset
**Baselines:** We could not identify any appropriate SFDA baselines. Therefore, the only comparison methods were: source model only, centralised data and target finetuning with frozen label predictor, DeepCORAL [48], FADA [40], FedHarmony [16] and Direct Fit. The maximum batchsize possible was 16, and so we tried batchsizes of 4, 8 and 16.
**Methods Comparison:** It can be seen from Table 3 that for the age prediction task FADA [40] outperformed our proposed approach for two of the three reported batchsizes, unlike in the other tasks. This may well be because the task was completed in 3D, and thus a small number of samples were available, meaning that the presence of the source data supported the model training. SFHarmony did, however, show comparable performance, especially when modelling the features with more components. We could not identify any SFDA methods in the literature that could be directly applied to regression tasks. Our method is flexible and can be directly applied to the regression task without any change to the model architecture or DA procedure.
\begin{table}
\begin{tabular}{|l|c c|c c|c c c|c c c|} \hline \multicolumn{2}{|c|}{Method} & S & T & C & \multicolumn{3}{c|}{Information} & \multicolumn{3}{c|}{CC359 Average Dice} & \multicolumn{3}{c|}{ABIDE Average Dice} \\ \cline{4-11} \multicolumn{2}{|c|}{} & & & \multicolumn{2}{c|}{Communicated} & Bs 5 & Bs 50 & Bs 500 & Bs 5 & Bs 50 & Bs 50 & Bs 500 \\ \hline Source Model & ✓ & x & x & - & & 0.832 & & & 0.775 & \\ \hline Centralised Training & ✓ & ✓ & ✓ & All Data & 0.983 & 0.985 & 0.983 & 0.884 & 0.885 & 0.875 \\ Target Finetune & x & ✓ & x & Model Weights & 0.981 & 0.982 & 0.982 & 0.883 & 0.884 & 0.885 \\ \hline DeepCORAL [48] & ✓ & x & ✓ & All Data & 0.768 & - & - & 0.523 & - & - \\ FADA [40] & ✓ & x & x & Model Weights + Features & 0.967 & 0.964 & 0.959 & 0.830 & 0.827 & 0.825 \\ FedHarmony [16] & ✓ & x & x & Model Weights + Statistics & 0.965 & 0.962 & 0.950 & 0.825 & 0.810 & 0.822 \\ \hline Minimise Entropy & x & x & x & Model Weights & 0.767 & 0.849 & 0.951 & 0.570 & 0.542 & 0.659 \\ AdaEnt [2] & x & x & x & Model Weights & 0.827 & 0.817 & 0.962 & 0.625 & 0.656 & 0.682 \\ AdaMI [3] & x & x & x & Model Weights & 0.820 & 0.835 & 0.965 & 0.606 & 0.657 & 0.660 \\ Direct Fit & x & x & x & Model Weights + Statistics & 0.648 & 0.696 & 0.873 & 0.615 & 0.803 & 0.830 \\ \hline SFHarmony 1 GMM Component & x & x & x & Model Weights + Statistics & 0.950 & 0.949 & 0.959 & 0.831 & **0.832** & 0.831 \\ SFHarmony 2 GMM Components & x & x & x & Model Weights + Statistics & 0.970 & **0.970** & **0.970** & 0.832 & **0.832** & **0.832** \\ SFHarmony 3 GMM Components & x & x & x & Model Weights + Statistics & **0.972** & 0.968 & **0.970** & **0.833** & **0.832** & **0.832** \\ \hline \end{tabular}
\end{table}
Table 2: Results on the CC359 dataset for brain extraction, and the ABIDE dataset for the tissue segmentation. S = Source data required, T = Target labels required, C = Centralised data, Bs = batchsize. The average Dice score is the performance across all 5 (CC359) / 4 (ABIDE) sites, weighted equally, and is reported for training batchsizes of 5, 50 and 500. The best performing SFDA method for each batchsize for each segmentation task is in bold.
\begin{table}
\begin{tabular}{|l|c c|c c|c c|} \hline \multicolumn{2}{|c|}{Method} & S & T & C & \multicolumn{3}{c|}{Information} & \multicolumn{3}{c|}{Average MAE} \\ \cline{4-7} \multicolumn{2}{|c|}{} & & & \multicolumn{2}{c|}{Communicated} & Bs 4 & Bs 8 & Bs 16 \\ \hline Source Model & ✓ & x & x & - & - & 4.38 & \\ \hline Centralised Training & ✓ & ✓ & ✓ & All Data & 3.52 & 3.38 & 3.36 \\ Target Finetune & x & ✓ & x & Model Weights & 3.57 & 3.60 & 3.58 \\ \hline DeepCORAL [48] & ✓ & x & ✓ & All Data & 4.58 & 4.41 & 4.12 \\ FADA [40] & ✓ & x & x & Model Weights + Features & 3.55 & 3.42 & 3.78 \\ FedHarmony [16] & ✓ & x & x & Model Weights + Statistics & 3.61 & 3.50 & 3.79 \\ Direct Fit & x & x & x & Model Weights + Statistics & 4.70 & 4.31 & 4.05 \\ \hline SFHarmony 1 GMM Component & x & x & x & Model Weights + Statistics & 4.21 & 4.13 & 3.71 \\ SFHarmony 2 GMM Components & x & x & x & Model Weights + Statistics & 3.87 & **3.72** & **3.69** \\ SFHarmony 3 GMM Components & x & x & x & Model Weights + Statistics & **3.64** & **3.72** & 3.73 \\ \hline \end{tabular}
\end{table}
Table 3: Results on the ABIDE dataset for the age prediction task. S = Source data required, T = Target labels required, C = Centralised data, Bs = Batchsize. The average MAE is the performance across all 4 sites, weighted equally, and is reported for training batchsizes of 4, 8 and 16: 16 was the largest batch achievable. The best SFDA method for each batchsize is in bold.
## 5 Conclusion

We have presented SFHarmony, a method for SFDA, motivated by the need to harmonise MRI data across imaging sites while relaxing assumptions about the availability of source data. We have demonstrated the applicability of the method to classification, regression, and segmentation tasks, and have shown that it outperforms existing SFDA approaches when applied to MR imaging data. The approach is general, allowing it to be applied across architectures and tasks. Issues may arise due to the increase in features when applying the approach to 3D volumes. Currently, the approach models each feature as an independent GMM, but features will be highly related within a filter and approaches to utilise these relations should be explored.
## 6 Acknowledgements
ND is supported by an Academy of Medical Sciences Springboard Award. MJ is supported by the National Institute for Health Research, Oxford Biomedical Research Centre, and this research was funded by the Wellcome Trust [215573/Z/19/Z]. WIN is supported by core funding from the Wellcome Trust [203139/Z/16/Z]. AN is grateful for support from the Academy of Medical Sciences under the Springboard Awards scheme (SBF005/1136), and the Bill and Melinda Gates Foundation.
|
2302.04599 | Principled and Efficient Motif Finding for Structure Learning of Lifted
Graphical Models | Structure learning is a core problem in AI central to the fields of
neuro-symbolic AI and statistical relational learning. It consists in
automatically learning a logical theory from data. The basis for structure
learning is mining repeating patterns in the data, known as structural motifs.
Finding these patterns reduces the exponential search space and therefore
guides the learning of formulas. Despite the importance of motif learning, it
is still not well understood. We present the first principled approach for
mining structural motifs in lifted graphical models, languages that blend
first-order logic with probabilistic models, which uses a stochastic process to
measure the similarity of entities in the data. Our first contribution is an
algorithm, which depends on two intuitive hyperparameters: one controlling the
uncertainty in the entity similarity measure, and one controlling the softness
of the resulting rules. Our second contribution is a preprocessing step where
we perform hierarchical clustering on the data to reduce the search space to
the most relevant data. Our third contribution is to introduce an O(n ln n) (in
the size of the entities in the data) algorithm for clustering
structurally-related data. We evaluate our approach using standard benchmarks
and show that we outperform state-of-the-art structure learning approaches by
up to 6% in terms of accuracy and up to 80% in terms of runtime. | Jonathan Feldstein, Dominic Phillips, Efthymia Tsamoura | 2023-02-09T12:21:55Z | http://arxiv.org/abs/2302.04599v3 | # Principled and Efficient Motif Finding for
###### Abstract
_Structure learning_ is a core problem in AI central to the fields of _neuro-symbolic AI_ and _statistical relational learning_. It consists in automatically learning a logical theory from data. The basis for structure learning is mining repeating patterns in the data, known as _structural motifs_. Finding these patterns reduces the exponential search space and therefore guides the learning of formulas. Despite the importance of motif learning, it is still not well understood. We present the first principled approach for mining structural motifs in _lifted graphical models_, languages that blend first-order logic with probabilistic models, which uses a stochastic process to measure the similarity of entities in the data.
Our first contribution is an algorithm, which depends on two intuitive hyperparameters: one controlling the uncertainty in the entity similarity measure, and one controlling the softness of the resulting rules. Our second contribution is a preprocessing step where we perform hierarchical clustering on the data to reduce the search space to the most relevant data. Our third contribution is to introduce an \(\mathcal{O}(n\ln n)\) (in the size of the entities in the data) algorithm for clustering structurally-related data. We evaluate our approach using standard benchmarks and show that we outperform state-of-the-art structure learning approaches by up to 6% in terms of accuracy and up to 80% in terms of runtime.
## 1 Introduction
**Motivation** In artificial intelligence, combining statistical and logical representations is a long-standing and challenging aim. The motivation behind combining the two is that logical models can represent heterogenous data and capture causality, while statistical models handle uncertainty [12, 13]. General approaches to represent structural information are _lifted graphical models_ (LGMs), such as _Markov logic networks_ (MLNs) [1] and _probabilistic soft logic_ (PSL) [1]. These are languages that define Markov random fields in a declarative fashion and are represented as theories of weighted formulas in first-order logic. The versatility of LGMs is reflected in their variety of applications, including bioinformatics [10], natural language understanding[2], entity linking [20] and others [21, 14, 15]. Recently, they have also been adopted in neurosymbolic frameworks [20, 21].
Unsurprisingly, the quality of a logical theory, that is, the extent to which it models the task it is supposed to solve, has a strong impact on the performance of the downstream applications. Manually optimising formulae to boost performance is a costly, time-consuming and error-prone process that restricts the scope of application. This can raise fundamental criticism against frameworks that require such theories as part of their input [1, 20]. An alternative is the automated learning of LGMs from data, a problem known as _structure learning_. The ultimate goal is to design a general framework that can efficiently learn high-quality models on large datasets in a principled fashion. Several pioneering structure learning algorithms have been developed for MLNs [16, 17, 18].
**Problem** Generally, structure learning consists in searching for formulae in an exponential search space. The naive approach would consist in trying every possible combination of predicates and logical connectives, which is computationally expensive [16]. Therefore, to reduce computational complexity, every sophisticated structure learner proceeds by searching for formulae within templates. These templates can be user-defined or learnt automatically [15, 16]. Every sophisticated learner can thus be summarized in three main steps: S1 - Apply heuristics to abstract-out common, recurrent patterns within the data to be used as templates. S2 - Iteratively generate formulae based on the previously found patterns and evaluate candidate formulae based on how well they generalize to the training data. S3 - Learn the collective weights of the optimal formulae. Remark that finding good templates is the basis for successful structural learning, as it not only reduces the search space but also forms the starting point of the structure learning algorithm and constrains the shape of logical formulae generated in later stages.
For example, the state-of-the-art structure learning algorithm, _Learning using Structural Motifs_ (LSM), reduces
the search space for formulae by focusing within recurring patterns of commonly connected entities in the relational database [10]. The task of mining these patterns involves repeatedly partitioning the entities of a database into symmetrically-equivalent sets relative to a reference entity. These sets are called _(structural) motifs_. Since the entities in a structural motif are symmetric, formula learning only needs to be performed on one entity instead of each separately. Therefore, structural motifs guide the groundings of potential logical formulas of the LGM (S1).
The key difference between structure learners that do not require user input is how the templates are found. Still, the state-of-the-art suffers from several shortcomings that have a negative impact on the scalability and effectiveness of the full pipeline (S1-S3). Firstly, the symmetry-partitioning algorithm has six unintuitive hyperparameters that need to be calibrated to each dataset. The difficulty of finding these parameters can lead to inaccurate partitioning. Secondly, the main clustering algorithm, a core step to obtain symmetric partitions, has complexity in the number of entities to partition. This can result in significant slowdowns on databases that are densely connected.
**Contributions** In this work, we design a more principled and scalable algorithm for extracting motifs through symmetry-partitioning (stage S1 of structure learning). In our algorithm, we make three key contributions that overcome the limitations of prior art. In Section 3, we address the first limitation and propose a _principled_ algorithm using the theoretic properties of hypergraphs to design an approach that uses just two, intuitive hyperparameters: one that controls the uncertainty of the similarity measure of entities in the data and one that controls the softness of the resulting formulae. In Section 4, we tackle the issue of _efficiency_. Firstly, we design an alternative \(\mathcal{O}(n\ln n)\) symmetry-partitioning algorithm. Secondly, we propose a pre-processing step where we hierarchically cluster the relational database to reduce the required computation and further improve the guiding of the formulae finding. Beyond the above contributions, we present PRISM (PRincipled Identification of Structural Motifs) a parallelized, flexible, and optimized C++ implementation of the entire algorithm1. In Section 6, we assess the performance of the developed techniques against LSM and BOOSTR on datasets used as standard benchmarks in the literature.
Footnote 1: [https://github.com/jonathanfeldstein/PRISM](https://github.com/jonathanfeldstein/PRISM)
## 2 Preliminaries
A _hypergraph_\(\mathcal{H}=(V,E)\) is a pair of sets of nodes \(V=\{v_{i}\}_{i=0}^{|V|}\) and hyperedges \(E=\{e_{i}\}_{i=0}^{|E|}\). A hyperedge \(e_{i}\in E\) is a non-empty subset of the nodes in \(\mathcal{H}\). A hypergraph \(\mathcal{H}\) is labelled if each hyperedge in \(\mathcal{H}\) is labelled with a categorical value. We use \(\mathsf{label}(e_{i})\) to denote the label of the hyperedge \(e_{i}\). A _path_\(\pi\) of length \(L\) in \(\mathcal{H}\) is an alternating sequence of nodes \(v_{i}\) and hyperedges \(e_{i}\), such that \(v_{i},v_{i+1}\in e_{i}\), of the form \((v_{0},e_{0},v_{1},\ldots,v_{L-1},e_{L-1},v_{L})\) for \(0\leq i\leq L-1\). The _diameter_ of \(\mathcal{H}\) is the maximum length of the shortest path between any two nodes \(v_{i},v_{j}\) in \(\mathcal{H}\). The _signature_ of a path \(\pi\) is the sequence of the labels of the edges occurring in \(\pi\), i.e., \((\mathsf{label}(e_{0}),\ldots,\mathsf{label}(e_{L-1}))\).
A _relational database_\(\mathcal{D}\) can be represented by a hypergraph \(\mathcal{H}=(V,E)\) by defining \(V\) to be the union of the constants in \(\mathcal{D}\), and defining \(E\) such that every \(k\)-ary ground atom \(R(c_{1},\ldots,c_{k})\) in \(\mathcal{D}\) becomes a hyperedge \(e\in E\), with label \(R\), whose elements are the nodes corresponding to the constants \(c_{1},\ldots,c_{k}\).
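A minimal Python sketch of this construction is given below; the function name is ours and the relation names mirror the teaching/reading example used later in the paper.

```python
def database_to_hypergraph(ground_atoms):
    """Build a labelled hypergraph from a relational database given as a list of
    ground atoms, e.g. ("Teaches", ("P1", "P3")). Each atom R(c1, ..., ck) becomes
    one hyperedge labelled R over the nodes c1, ..., ck."""
    nodes, edges = set(), []
    for relation, constants in ground_atoms:
        nodes.update(constants)
        edges.append((relation, tuple(constants)))
    return nodes, edges

db = [("Teaches", ("P1", "P3")), ("Teaches", ("P2", "P3")), ("Reads", ("P3", "B1"))]
print(database_to_hypergraph(db))
```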
A _random walk_ on \(\mathcal{H}\) is a stochastic process that generates paths by traversing edges in \(\mathcal{H}\). The _length_ of a random walk is defined as the number of edges traversed in the path. Let \(v_{i}\) and \(v_{j}\) be two nodes in \(\mathcal{H}\). The _hitting time_\(h_{i,j}\) from \(v_{i}\) to \(v_{j}\) is the average number of steps required to reach \(v_{j}\) for the first time with random walks starting from \(v_{i}\).
The _L-truncated hitting time_\(h_{ij}^{L}\) (THT) is the hitting time where the length of the random walk is limited to at most \(L\) steps. It is defined recursively as \(h_{ij}^{L}=1+\sum_{k}p_{ik}h_{kj}^{L-1}\), where \(p_{ik}\) is the \((i,k)\) entry of the transition matrix of the random walk, with \(h_{ij}^{L}=0\) if \(i=j\), and \(h_{ij}^{L}=L\) if \(j\) is not reached in \(L\) steps. The more short paths that exist between \(v_{i}\) and \(v_{j}\), the shorter the THT. The THT is therefore a measure of the connectedness of nodes.
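The recursion can be evaluated directly; the following sketch (the transition matrix and the small example graph are ours) iterates the recursion from the base case \(h^{0}=0\) while pinning \(h_{jj}\) to zero at every step.

```python
import numpy as np

def truncated_hitting_times(P, j, L):
    """Exact L-truncated hitting times h_{ij}^{L} to node j, computed by iterating
    h^{l} = 1 + P @ h^{l-1} with the j-th entry pinned to 0 (base case h^{0} = 0).
    P is the row-stochastic transition matrix of the random walk."""
    h = np.zeros(P.shape[0])
    for _ in range(L):
        h = 1.0 + P @ h
        h[j] = 0.0
    return h

# Small worked example on a 3-node chain 0 -- 1 -- 2:
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(truncated_hitting_times(P, j=2, L=4))   # [3.0, 2.25, 0.0]: node 0 is further from node 2
```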
We denote by \(\mathcal{S}_{i,j}^{L}\) the set of path signatures of lengths up to \(L\) that start at \(v_{i}\) and end at \(v_{j}\). The _L-path signature distribution_\(P_{i,j}^{L}\) is then the probability distribution over the elements of \(\mathcal{S}_{i,j}^{L}\) under a given random walk process. The _marginal \(L\)-path signature distribution_\(P_{i,j}^{L}|_{l}\) is the marginal probability distribution when only paths of length _exactly_\(l\in\{1,2,\ldots,L\}\) are considered. The quantities \(P_{i,j}^{L}(\sigma)\) and \(P_{i,j}^{L}|_{l}(\sigma)\) respectively denote the probability and marginal probability of path signature \(\sigma\). With this, we now introduce the important notion of _path-symmetry_.
**Definition 1** (Path-Symmetry).: _Nodes \(v_{j}\) and \(v_{k}\) are order-\(L\) path symmetric with respect to \(v_{i}\) if \(P_{i,j}^{L}=P_{i,k}^{L}\) and are exact order-\(L\) path symmetric w.r.t. \(v_{i}\) if \(P_{i,j}^{L}|_{l}=P_{i,k}^{L}|_{l}\). A set of nodes is (exact) path-symmetric w.r.t. \(v_{i}\) if each node in the set is (exact) path-symmetric w.r.t. \(v_{i}\)._
Within the context of structure learning, path-symmetric sets of nodes correspond to what we denote as _abstract concepts_ and correspond to collections of entities that have similar neighbourhoods in the hypergraph.
**Remark 1**.: _A necessary condition for nodes \(v_{j}\) and \(v_{k}\) to be order-\(L\) path-symmetric w.r.t. \(v_{i}\) is that they are_ **order-\(L\) distance symmetric** _w.r.t. \(v_{i}\), i.e. \(h_{i,j}^{L}=h_{i,k}^{L}\)._
It is computationally infeasible to compute \(h_{i,j}^{L}\) and \(P_{i,j}^{L}\) exactly for large hypergraphs. However, both can be well approximated by sampling, i.e., by running \(N\) random walks of length \(L\) from node \(v_{i}\) and recording the number of times \(v_{j}\) is hit [11]. We denote by \(\hat{h}_{i,j}^{L,N}\), and by \(\hat{P}_{i,j}^{L,N}\), the obtained estimates and refer to them as \((L,N)\) estimates. Finally, we denote by \(\hat{C}_{i,j}^{L,N}(\sigma)\) the function from a signature \(\sigma\) in \(\mathcal{S}_{i,j}^{L}\) to the number of occurrences of \(\sigma\) in the paths from \(v_{i}\) to \(v_{j}\) that are encountered while running \(N\) random walks of length \(L\). We refer to \(\hat{C}_{i,j}^{L,N}(\sigma)\) as the _\(L\)-path signature counts_.
We denote by \(\hat{C}_{i,j}^{L,N}|_{l}(\sigma)\) the marginal count when only contributions from paths of length exactly \(l\in\{1,2,\ldots,L\}\) are considered. For readability purposes, we will drop the term signature and refer simply to \(L\)_-path distributions_ and to \(L\)_-path counts_. By path, we will refer to a path signature, unless stated otherwise.
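To make the sampling concrete, the following is a minimal Python sketch of how the \((L,N)\) estimates could be collected; the adjacency representation, the treatment of unary hyperedges, and the convention of counting only first-hit path signatures are our assumptions rather than details fixed above.

```python
import random
from collections import defaultdict

def sample_walk_statistics(incidence, source, L, N, seed=0):
    """Estimate truncated hitting times and L-path signature counts from `source`.

    `incidence[v]` is assumed to be a list of (label, member_nodes) pairs, one per
    hyperedge containing v.  Returns (tht, counts) with tht[v] approximating
    h^L_{source,v} and counts[v][sigma] approximating C^{L,N}_{source,v}(sigma).
    """
    rng = random.Random(seed)
    first_hit = defaultdict(list)                      # v -> steps at which v was first hit
    counts = defaultdict(lambda: defaultdict(int))     # v -> path signature -> count
    for _ in range(N):
        node, signature, seen = source, (), {source}
        for step in range(1, L + 1):
            label, members = rng.choice(incidence[node])      # random incident hyperedge
            others = [u for u in members if u != node]
            node = rng.choice(others) if others else node     # stay put on unary edges
            signature += (label,)
            if node not in seen:
                seen.add(node)
                first_hit[node].append(step)
                counts[node][signature] += 1                  # signature of the first-hit path
    # Walks that never reach v contribute the maximum value L (Sec. 2).
    tht = {v: (sum(steps) + (N - len(steps)) * L) / N for v, steps in first_hit.items()}
    return tht, counts
```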
### Example of a Structure Learner: LSM
To illustrate structure learning, we present an overview of the LSM algorithm [10]. The algorithm proceeds in three main steps (denoted S1, S2 and S3 below). The resulting pipeline is summarised in Fig 1.
**S1: Finding Structural Motifs** Nodes with similar environments in the database hypergraph are first clustered together into _abstract concepts_. Clustering is achieved by running many random walks from each node in the hypergraph. Nodes are then partitioned into sets of path-symmetric nodes based on the similarity of their \(L\)-path counts. Each path-symmetric set then corresponds to an abstract concept.
**Example 1** (Abstract Concepts).: _In Fig 1, we see that \(\texttt{P}_{1}\) and \(\texttt{P}_{2}\) are both teaching \(\texttt{P}_{3}\), \(\texttt{P}_{4}\), \(\texttt{P}_{5}\) and \(\texttt{P}_{6}\). Furthermore, \(\texttt{P}_{3}\), \(\texttt{P}_{4}\), \(\texttt{P}_{5}\) and \(\texttt{P}_{6}\) are all reading \(\texttt{B}_{1}\), \(\texttt{B}_{2}\) and \(\texttt{B}_{3}\). Even though we have not explicitly defined the notion of student, teacher, and book we have that \(\texttt{P}_{3}\), \(\texttt{P}_{4}\), \(\texttt{P}_{5}\) and \(\texttt{P}_{6}\) are all path-symmetric w.r.t to \(\texttt{P}_{1}\) and w.r.t \(\texttt{P}_{2}\), as are \(\texttt{B}_{1}\), \(\texttt{B}_{2}\) and \(\texttt{B}_{3}\). The abstract concepts that we obtain are thus \(\{\texttt{P}_{3},\texttt{P}_{4},\texttt{P}_{5},\texttt{P}_{6}\}\), \(\{\texttt{P}_{1},\texttt{P}_{2}\}\), and \(\{\texttt{B}_{1},\texttt{B}_{2},\texttt{B}_{3}\}\), which intuitively represent the idea of students, teachers and books, respectively._
Once abstract concepts are found, they are then joined by the edges that connect them to form _structural motifs_, see Fig 1 (ii). It is the identification of these structural motifs that effectively speeds up the subsequent rule-finding by reducing the search for candidate clauses (cf. S2). In LSM, computing motifs requires setting six independent hyper-parameters: \(N\), the number of random walks run; \(L\), the length of each random walk; \(\theta_{hit}\), a threshold to select only 'nearby' nodes to the source node of the random walk (those with \(\hat{h}_{i,j}^{L,N}\leq\theta_{hit}\)); \(\theta_{sym}\), a threshold for merging nodes based on the similarity of their THTs (all nodes \(v_{j}\) and \(v_{k}\) with \(|\hat{h}_{i,j}^{L,N}-\hat{h}_{i,k}^{L,N}|<\theta_{sym}\) are merged); \(\theta_{JS}\), a threshold for merging nodes by path similarity based on the Jensen-Shannon divergence of their path distributions; and \(n_{top}\), the number of paths to consider (in order of descending frequency) when computing the Jensen-Shannon divergence.
**S2a: Finding Paths in Motifs** Based on the found motifs, sequences (paths) of ground literals that often appear together in the data are generated, see Fig 1 (iii). Literals that frequently appear together are likely to be logically dependent on one another.
**S2b: Evaluating Candidate Clauses** The sequences of ground literals are used to generate candidate clauses. Each clause is evaluated using a likelihood function. The best clauses are then added to the structure-learnt MLN.
**S3: Learning the Weights of Candidate Clauses** Finally, the algorithm finds the weights of the chosen clauses by maximum-likelihood estimation. This yields a set of formula-weight pairs which define the final MLN.
## 3 Principled Motif-Finding
Hyperparameter tuning can be one of the most time-costly stages when applying algorithms to real problems. This holds particularly in the case of LSM, where we have six heuristic hyperparameters, as detailed in Section 2.1. In our work, we devise an alternative motif-finding algorithm (PRISM) that depends on only two _intuitive_ hyperparameters, thus greatly speeding up the workflow.
### Introducing PRISM
In overview, the steps taken by PRISM are:
For each node \(v_{i}\) in \(\mathcal{H}\): (i) run an _optimal_ number of random walks originating from \(v_{i}\) and compute, for each \(v_{j}\neq v_{i}\), the THT estimate \(\hat{h}_{i,j}^{L,N}\) and the path distribution estimate \(\hat{P}_{i,j}^{L,N}\); (ii) partition the nodes of \(\mathcal{H}\) into sets \(A_{1},A_{2},\ldots,A_{M}\) that are _statistically significant_ order-\(L\) distance-symmetric w.r.t. \(v_{i}\), by merging nodes if the difference in their THTs is below a statistical threshold \(\theta_{sym}\).
Figure 1: **Example Structure-Learning Pipeline:** The above shows a dataset about a university class. Nodes \(\texttt{P}_{i}\) are entities of type person, while \(\texttt{B}_{i}\) are entities of type book. Black edges represent \(\texttt{teaches}(\texttt{person},\texttt{person})\), and red edges represent \(\texttt{reads}(\texttt{person},\texttt{book})\): (i) The resulting abstract concepts when random walks are run from source node \(\texttt{P}_{1}\). Dashed boxes represent concepts, which intuitively are teachers, \(\{\texttt{P}_{1}\}\), colleagues \(\{\texttt{P}_{2}\}\), students \(\{\texttt{P}_{3},\texttt{P}_{4},\texttt{P}_{5},\texttt{P}_{6}\}\) and books \(\{\texttt{B}_{1},\texttt{B}_{2},\texttt{B}_{3}\}\) (ii) the resulting structural motif, (iii) paths found in the motif (iv) mined candidate clauses.
We describe how to set \(\theta_{sym}\) in Section 3.3; (iii) further partition the nodes within each \(A_{m}\) into _statistically significant_ order-\(L\) path-symmetric sets. An algorithm achieving this in \(\mathcal{O}(n\ln n)\) (vs \(\mathcal{O}(n^{3})\) in SOTA) is presented later. Notice that step (ii) serves to reduce the computational cost of step (iii) by applying heuristics that classify the nodes into sets that are most likely to be path-symmetric.
The question remains how to define 'optimal' and 'statistically significant' in the above pipeline. To this end, we introduce two independent parameters, \(\varepsilon\) to optimise the number of random walks, and \(\alpha\) to control the statistical significance threshold of the similarity measure.
### \(\varepsilon\)-uncertainty: Controlled Path Sampling
**Motivation** To find good motifs we need to identify abstract concepts. To do this, we compare the path distributions of nodes in the hypergraph representation of the database. However, in real-world applications, computing these distributions exactly is infeasible, so we resort to approximating them through sampling by running random walks. The uncertainty in these approximations depends on the length \(L\) and the number \(N\) of random walks. Here we formally define a measure of uncertainty and show how it can be used to set an optimal number of random walks.
**Definition 2** (\(\varepsilon\)-uncertainty).: _The uncertainty of the \((L,N)\)-estimate of \(h_{i,j}\) is defined by \(|h_{i,j}^{L}-\hat{h}_{i,j}^{L,N}|/h_{i,j}^{L}\). The uncertainty of the \((L,N)\)-estimate of \(P_{i,j}^{L}\) is defined as the maximum of \(|P_{i,j}^{L}(\sigma)-\hat{P}_{i,j}^{L,N}(\sigma)|/P_{i,j}^{L}(\sigma)\) among all paths \(\sigma\) in the domain of \(P_{i,j}^{L}\)._
\(\varepsilon\)-uncertainty is of major importance to the overall theory-induction pipeline as it determines the confidence in the similarity measure between nodes and, ultimately, in the induced theories; the lower the \(\varepsilon\), the higher the confidence in the relatedness of nodes. There is, however, a natural trade-off: as we show below (Thm. 1), lower uncertainty implies a polynomially higher computational cost. For a given \(\varepsilon\) we thus seek the least number of random walks \(N\) that guarantees this uncertainty level. We say that such an \(N\) is \(\varepsilon\)_-optimal_:
**Definition 3** (\(\varepsilon\)-optimality).: _N is \(\varepsilon\)-optimal on \(\mathcal{H}\) under \(L\) if it is the smallest integer so that for any pair of nodes \(v_{i},v_{j}\) in \(\mathcal{H}\), the expectation of the uncertainties of \((L,N)\)-estimates of \(h_{i,j}\) and \(P_{i,j}\) are upper bounded by \(\varepsilon\)._
Minimising \(N\) is crucial as running random walks is computationally intensive, especially in large hypergraphs.
**Usage** In Theorem 1 below, we state how to set \(N\) to guarantee \(\varepsilon\)-optimality (for all theorem proofs, see the Appendix on arXiv2).
Footnote 2: [https://arxiv.org/pdf/2302.04599](https://arxiv.org/pdf/2302.04599)
**Theorem 1**.: _An upper bound on the \(\varepsilon\)-optimal number of random walks \(N\) on \(\mathcal{H}\) under \(L\) is given by_
\[\max\{(L-1)^{2}/4\varepsilon^{2},P^{*}\,(\gamma+\ln P^{*})/\varepsilon^{2}\} \tag{1}\]
_where \(P^{*}=1+e\left(e^{L}-1\right)/(e-1)\gg 1\), \(e\) is the number of unique edge labels in \(\mathcal{H}\), and \(\gamma\approx 0.577\) is the Euler-Mascheroni constant._
In PRISM, \(N\) is automatically computed according to Theorem 1 based on a user-specified \(\varepsilon\). In the above, we assumed a fixed \(L\). A good value for \(L\) is the diameter of the hypergraph to guarantee that every node can be hit during random walks. In Section 4.2 we revise this assumption and show how \(L\) can be reduced based on the properties of \(\mathcal{H}\).
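For illustration, the bound of Theorem 1 can be evaluated directly; the helper below is a sketch (the function name and the handling of a single-label hypergraph are ours).

```python
import math

def epsilon_optimal_walks(num_labels: int, L: int, eps: float) -> int:
    """Upper bound of Theorem 1 on the epsilon-optimal number of random walks N."""
    gamma = 0.5772156649015329                      # Euler-Mascheroni constant
    if num_labels == 1:
        p_star = 1 + L                              # limit of the geometric sum when e = 1
    else:
        p_star = 1 + num_labels * (num_labels ** L - 1) / (num_labels - 1)
    bound_tht = (L - 1) ** 2 / (4 * eps ** 2)       # controls hitting-time uncertainty
    bound_paths = p_star * (gamma + math.log(p_star)) / eps ** 2   # controls path distributions
    return math.ceil(max(bound_tht, bound_paths))
```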
### \(\alpha\)-significance: Controlled Softness of Formulae
**Motivation** Theorem 1 allows us to run the minimum number of walks needed to compute good enough estimates of the truncated hitting times and the path distributions. Based on these estimates, the next step is to partition our data into path-symmetric sets. If we choose very strict standards for clustering nodes together, i.e. only clustering if they have identical truncated hitting times and path distributions, the formulae we find will be very stringent and will not generalize well on unseen data (overfitting). However, if we are too loose on the criteria to merge nodes, the rules we obtain will be too soft and not informative enough (underfitting).
Our approach to controlling rule softness is to introduce statistical tests, governed by a user-specified parameter \(0<\alpha<1\), to decide when two nodes are distance- and path-symmetric. \(\alpha\) is the statistical significance level at which two nodes are considered distance- and path-symmetric. \(\alpha\), therefore, measures how lenient we are in merging entities into abstract concepts and is, by extension, an indirect measure of the softness of the rules. The effect of changing \(\alpha\) for path-symmetry clustering on an example hypergraph is shown in Fig 2.
This approach results in three major benefits:
(i) \(\alpha\) is the only clustering parameter compared to the four parameters in SOTA. (ii) \(\alpha\) is by construction dataset independent, thus simplifying hyperparameter tuning compared to SOTA. (iii) the \(\alpha\) parameter has a direct and intuitive effect on the size of the path-symmetric clusters, with smaller \(\alpha\) leading to less-strict statistical tests that ultimately favour finding fewer, but larger path-symmetric sets and thus fewer, more-approximate abstract concepts.
Figure 2: **Influence of \(\alpha\):** As \(\alpha\) increases, the criterion for clustering nodes by path similarity becomes stricter. The source node, \(\mathbb{B}_{1}\), is shaded in grey. Nodes that were previously clustered become partitioned. For example: \(\{P_{1},P_{2},P_{3}\}\rightarrow\{P_{1},P_{2}\}\{P_{3}\}\rightarrow\{P_{1}\}\{P_{2}\}\{P_{3}\}\).
**Usage** Given truncated hitting times \(\hat{h}_{i,j}^{L,N}\), we merge nodes if the difference between their THTs is below a threshold \(\theta_{sym}(\alpha)\). Next, given path distributions \(\hat{P}_{i,j}^{L,N}\), we propose a hypothesis test to validate whether a set of sampled distributions are statistically similar. We show that both tests can be performed to a specified level of statistical significance given by just one parameter: \(\alpha\).
First, we consider the null hypothesis that nodes \(v_{j}\) and \(v_{k}\) are order-\(L\) distance-symmetric w.r.t. \(v_{i}\), using \(|\hat{h}_{i,j}^{L,N}-\hat{h}_{i,k}^{L,N}|\) as a test statistic:
**Theorem 2** (Distance-Symmetric Hypothesis Test).: _The null hypothesis is rejected at significance level \(\alpha\), i.e. nodes \(v_{j}\) and \(v_{k}\) are not order-\(L\) distance-symmetric, if \(|\hat{h}_{i,j}^{L,N}-\hat{h}_{i,k}^{L,N}|>((L-1)/\sqrt{2N})t_{\alpha/2,N-1}\), where \(t_{\alpha/2,N-1}\) is the inverse-survival function of an \(N-1\) degrees of freedom student-t distribution evaluated at \(\alpha/2\)._
Theorem 2 allows us to set parameter \(\theta_{sym}\) dynamically for each pair of nodes whose hitting times are being compared, such that nodes are merged only if they are distance-symmetric at significance level \(\alpha\):
\[\theta_{sym}=\frac{L-1}{\sqrt{2N}}t_{\alpha/2,N-1}. \tag{2}\]
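In code, Eq. (2) reduces to a single call to the Student-t inverse survival function; the sketch below (function names ours) also shows the resulting pairwise acceptance test of Theorem 2.

```python
from scipy.stats import t as student_t

def theta_sym(L: int, N: int, alpha: float) -> float:
    """Dynamic merging threshold of Eq. (2) at significance level alpha."""
    return (L - 1) / (2 * N) ** 0.5 * student_t.isf(alpha / 2, df=N - 1)

def distance_symmetric(h_ij: float, h_ik: float, L: int, N: int, alpha: float) -> bool:
    """True if the Theorem-2 test cannot reject that v_j and v_k are distance-symmetric w.r.t. v_i."""
    return abs(h_ij - h_ik) <= theta_sym(L, N, alpha)
```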
To measure the degree of _path_ symmetry (as opposed to _distance_ symmetry), a threshold can be set using a different hypothesis test but based on the same \(\alpha\) used to set \(\theta_{sym}\) above. In the next section, we detail this hypothesis test.
## 4 Efficient Structural Motif Finding
Above we have discussed how to set our parameters in a principled fashion. In this section, we discuss how to use these parameters in an efficient algorithm (Sec. 4.1), and then further improve speed by reducing the required length (Sec. 4.2) and number of random walks (Sec. 4.3).
### An Improved Path-Symmetry Clustering Algorithm
In this section, we outline an efficient algorithm, which we refer to as PathSymmetricClustering, for partitioning nodes into sets that are path-symmetric at significance level \(\alpha\). This algorithm has \(\mathcal{O}(n\ln n)\) complexity in the number of nodes to cluster, which offers a significant improvement over the \(\mathcal{O}(n^{3})\) complexity of SOTA.
Using the notation introduced in Section 3, we partition each distance-symmetric node set \(A_{m}\in\{A_{1},\ldots,A_{M}\}\) into path-symmetric sets w.r.t. a node \(v_{i}\). PathSymmetricClustering treats the path counts of the nodes within each \(A_{m}\) as points in a multi-dimensional space of the path signatures. For each \(A_{m}\), PathSymmetricClustering then clusters nodes into path-symmetric sets as follows: First, we run a hypothesis test (Thm. 3, discussed below) on \(A_{m}\) to check whether the entire set of nodes is path-symmetric at significance level \(\alpha\). If the test passes, all nodes are clustered together. If the test fails, we proceed with recursive clustering (see Alg. 1):
1. Standardize (zero mean, unit variance) the path counts and use PCA to map these standardized counts to a two-dimensional space.
2. Cluster nodes in the reduced space into two sets using unsupervised BIRCH clustering [22].
3. Perform a path-symmetry hypothesis test (Thm. 3) separately on the two identified sets.
4. Clusters failing the test have their nodes repeatedly repartitioned into two new clusters using steps 2 and 3 until all sets of clusters pass the hypothesis test. The output is a set of clusters \(\{B_{1},B_{2},...,B_{k}\}\) partitioning the set \(A_{m}\).
```
1   Input: \(A\), nodes to partition into order-\(L\) path-symmetric sets w.r.t. \(v_{i}\), where \(v_{i}\not\in A\)
2   Output: \(B_{1},\ldots,B_{K}\), path-symmetric sets
3   Parameters: \(\alpha\), \(\delta=2\) (parameters \(N,L\) are implicit)
4   if \(A\) is path symmetric at significance level \(\alpha\) for each \(l\in\{L,L-1,\ldots,1\}\) then
5       return \(\{A\}\)                                            // Thm. 3
6   else
7       for each \(v_{\ell^{\prime}}\in A\) compute & standardise \(\hat{C}_{i,\ell^{\prime}}^{L,N}\)
8       reduce all the \(\hat{C}_{i,\ell^{\prime}}^{l,N}\)'s into \(\delta\)-dimensional feature vectors using PCA
9       \(Partition\leftarrow\emptyset\)
10      \(RemainingSets\leftarrow\{A\}\)
11      while \(RemainingSets\) not empty do
12          \(S\leftarrow RemainingSets.\)pop()
13          partition \(S\) into \(\{B_{1},B_{2}\}\) via unsupervised clustering of the \(\hat{C}_{i,\ell^{\prime}}^{l,N}\)'s
14          for \(B_{i}\in\{B_{1},B_{2}\}\) do
15              if \(B_{i}\) is path symmetric at significance level \(\alpha\) for each \(l\in\{L,L-1,\ldots,1\}\) then
16                  \(Partition.\)append(\(B_{i}\))
17              else
18                  \(RemainingSets.\)append(\(B_{i}\))
19      return \(Partition\)
```
**Algorithm 1** PathSymmetricClustering
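Since PRISM itself is implemented in C++, the following Python sketch of Algorithm 1 is purely illustrative; it assumes a callable `is_path_symmetric` wrapping the Theorem-3 test, and the fallback for a degenerate BIRCH split is our own addition to guarantee termination.

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.decomposition import PCA

def path_symmetric_clustering(nodes, count_matrix, is_path_symmetric, delta=2):
    """Recursive bi-partitioning of Algorithm 1 (illustrative sketch).

    `count_matrix[r]` holds the L-path signature counts of nodes[r]; `is_path_symmetric(rows)`
    is assumed to implement the Theorem-3 test on the sub-cluster given by row indices `rows`.
    """
    all_rows = np.arange(len(nodes))
    if is_path_symmetric(all_rows):
        return [list(nodes)]
    X = (count_matrix - count_matrix.mean(axis=0)) / (count_matrix.std(axis=0) + 1e-12)
    k = min(delta, X.shape[0], X.shape[1])
    X = PCA(n_components=k).fit_transform(X)            # project counts to delta dimensions
    partition, remaining = [], [all_rows]
    while remaining:
        S = remaining.pop()
        labels = Birch(n_clusters=2).fit_predict(X[S])   # unsupervised split into two sets
        parts = [S[labels == side] for side in (0, 1)]
        if any(len(p) == 0 for p in parts):              # degenerate split: accept S to terminate
            partition.append([nodes[r] for r in S])
            continue
        for B in parts:
            if len(B) == 1 or is_path_symmetric(B):
                partition.append([nodes[r] for r in B])
            else:
                remaining.append(B)
    return partition
```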
Similar to Section 3.3, the partitioning process in PathSymmetricClustering is driven by a statistical hypothesis test. This time, the desired null hypothesis is that _all the nodes_ in a cluster \(B_{k}\) are order-\(L\) path-symmetric. Specifically, we test for each cluster the following: that for each \(l\in\{L,L-1,\ldots,1\}\) the cluster \(B_{k}\) is _exact_ order-\(l\) path-symmetric w.r.t. \(v_{i}\) at significance level \(\alpha\). We denote this null hypothesis \(H_{0}\).
If \(H_{0}\) is true, then at significance level \(\alpha\) there exists a multinomial distribution, common to all nodes in \(B_{k}\), from which the empirical exact path counts \(\hat{C}_{i,j}^{L,N}|_{l}\) are drawn. Extending a version of the \(\chi^{2}\) test, we show the following:
**Theorem 3** (Path-Symmetric Hypothesis Test).: _Let \(\Lambda_{l}\) be the total number of different paths of length \(l\) over all \(\mathcal{S}_{i,j}^{L}\)'s.
_The null hypothesis that the nodes \(B_{k}\) are order-\(l\) exact path symmetric is rejected at significance level \(\alpha\) if the statistic_
\[Q(B_{k}):=\sum_{\lambda=0}^{\Lambda_{l}}\sum_{v_{j}\in B_{k}}\left(c_{\lambda}-c _{\lambda}^{(j)}\right)^{2} \tag{3}\]
_exceeds \(\chi^{2}_{\mathbf{w},\mathbf{\nu}}(\alpha)\), where_
\[c_{\lambda}^{(j)}:=\hat{C}_{i,j}^{L,N}(\lambda),\quad c_{0}^{(j)} :=N-\sum_{\lambda=1}^{\Lambda_{l}}c_{\lambda}^{(j)}, \tag{4}\] \[c_{\lambda}:=\frac{1}{|B_{k}|}\sum_{v_{j}\in B_{k}}c_{\lambda}^{( j)},\]
_and \(\chi^{2}_{\mathbf{w},\mathbf{\nu}}(\alpha)\) is a generalised chi-squared distribution with weight parameters \(\mathbf{w}\) and degree-of-freedom parameters \(\mathbf{\nu}=(1,1,...,1)\), evaluated at significance level \(\alpha\). Above, \(\mathbf{w}\) are the eigenvalues of the block matrix \(\mathbf{\tilde{\Sigma}}\) with components_
\[\tilde{\Sigma}_{\lambda,\lambda^{\prime}}^{(b,b^{\prime})}=N\left(\delta_{b,b^{\prime}}-\frac{1}{|B_{k}|}\right)\left(\delta_{\lambda,\lambda^{\prime}}\frac{c_{\lambda}}{N}\left(1-\frac{c_{\lambda}}{N}\right)-\left(1-\delta_{\lambda,\lambda^{\prime}}\right)\frac{c_{\lambda}}{N}\frac{c_{\lambda^{\prime}}}{N}\right)\,,\]
_where \(b,b^{\prime}\in(1,\ldots,|B_{k}|)\) index blocks, \(\lambda,\lambda^{\prime}\in(0,\ldots,\Lambda_{l})\) index within blocks and \(\delta_{\lambda,\lambda^{\prime}}\) is the Kronecker delta._
**Remark 2**.: _By requiring knowledge of the eigenvalues \(\mathbf{w}\), Theorem 3 suggests that an eigendecomposition of \(\mathbf{\tilde{\Sigma}}\) is necessary. However, we show in the Appendix how this calculation can be avoided by approximating the generalised chi-squared distribution by a gamma distribution._
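As an illustration of Remark 2, the sketch below evaluates the statistic of Eq. (3) and replaces the generalised chi-squared null distribution by a moment-matched gamma distribution (mean \(\operatorname{tr}\tilde{\Sigma}\), variance \(2\operatorname{tr}\tilde{\Sigma}^{2}\)); this is one standard way to avoid the eigendecomposition and may differ from the approximation derived in the Appendix. The input layout is our assumption.

```python
import numpy as np
from scipy.stats import gamma

def path_symmetry_test(counts, alpha):
    """Theorem-3 style test that the rows of `counts` belong to path-symmetric nodes.

    `counts` is a |B_k| x (Lambda_l + 1) array of exact l-path counts, one row per node,
    whose column 0 is the catch-all count c_0 = N - sum of the remaining columns.
    Returns True if the null hypothesis of path symmetry cannot be rejected.
    """
    B = counts.shape[0]
    N = counts[0].sum()                          # same number of walks for every node
    c_bar = counts.mean(axis=0)                  # pooled counts c_lambda
    Q = ((counts - c_bar) ** 2).sum()            # statistic of Eq. (3)
    p = c_bar / N
    M = np.diag(p) - np.outer(p, p)              # per-node multinomial covariance / N
    Sigma = N * np.kron(np.eye(B) - np.full((B, B), 1.0 / B), M)   # block matrix of Thm. 3
    mean, var = np.trace(Sigma), 2.0 * np.sum(Sigma ** 2)          # moments of sum_i w_i chi^2_1
    if mean <= 0 or var <= 0:
        return True                              # degenerate case: nothing to distinguish
    shape, scale = mean ** 2 / var, var / mean   # moment-matched gamma approximation
    return Q <= gamma.isf(alpha, a=shape, scale=scale)
```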
### Running Shorter Random Walks
We now show how to set a minimal \(L\) needed for the random walks to span areas of interest in the hypergraph. We do this by hierarchical clustering.
**Motivation** We implement hierarchical clustering by iteratively cutting the hypergraph along sparse cuts until no sparse cuts remain. This algorithm results in three benefits: (i) Splitting the hypergraph into smaller sub-graphs leads to smaller diameters and therefore smaller \(L\). This, by extension, also reduces \(N\) (Thm. 1). (ii) Having fewer nearby nodes means that the subsequent partitioning in PathSymmetricClustering is faster. (iii) Hierarchical clustering identifies groups of densely-connected nodes, which helps us to ignore spurious links. Spuriously-connected nodes appear rarely in the path signatures and therefore only add noise to the path signature counts. By focusing random walks on areas of interest, we are hitting nodes that are densely connected more often and gaining more accurate statistics of truncated hitting times and empirical path distributions.
**Hierarchical Clustering Algorithm** The algorithm \(\mathsf{HClustering}\) is based on spectral clustering, a standard approach for cutting graphs along sparse cuts. A discussion of spectral clustering is beyond the scope of this paper. Note that there is no equivalent approach for hypergraphs, so we propose to translate a hypergraph into a graph and then perform spectral clustering as follows:
```
1   Input: \(\mathcal{H}\), the hypergraph representation of the input relational database
2   Output: path-symmetric sets (abstract concepts \(\mathcal{C}\)) of nodes w.r.t. each \(v_{i}\) in \(\mathcal{H}\)
3   Parameters: \(\varepsilon,\alpha\)
4   \(\mathcal{H}_{1},\ldots,\mathcal{H}_{K}:=\mathsf{HClustering}(\mathcal{H})\)            // Sec. 4.2
5   let \(V_{k}\) denote the set of nodes in \(\mathcal{H}_{k}\)
6   for \(1\leq k\leq K\) do
7       set \(L\) to the diameter of \(\mathcal{H}_{k}\)
8       compute \(\varepsilon\)-optimal \(N\) on \(\mathcal{H}_{k}\) under \(L\)                  // Th. 1
9       for each node \(v_{i}\) in \(\mathcal{H}_{k}\) do
10          for each \(v_{j}\neq v_{i}\) in \(\mathcal{H}_{k}\) compute \(\hat{P}_{i,j}^{L,N}\) and \(\hat{h}_{i,j}^{L,N}\)    // Sec. 2
11          partition \(V_{k}\) into distance-symmetric sets \(\{A_{1},A_{2},...,A_{M}\}\) using \(\alpha\)-significance    // Th. 2, Sec. 2
12          for \(1\leq m\leq M\) do
13              \(\mathcal{C}_{m}:=\) PathSymmetricClustering\((A_{m},\alpha)\)               // Th. 3, Sec. 4
14  return all \(\mathcal{C}_{m}\)'s
```
**Algorithm 2** PRISM
In overview, \(\mathsf{HClustering}\) begins by converting a hypergraph \(\mathcal{H}=(V,E)\) into a weighted graph \(\mathcal{G}\) by expanding cliques over each hyperedge. Next, \(\mathcal{G}\) is recursively bipartitioned using the sweep set approximation algorithm for the Cheeger-cut [12]. The result of the partitioning is a set of subgraphs \(\mathcal{G}:=\{\mathcal{G}_{1},\mathcal{G}_{2},...,\mathcal{G}_{k}\}\). The partitioning terminates whenever the second-smallest eigenvalue of the symmetric Laplacian matrix \(\lambda_{2}\) exceeds a threshold value \(\lambda_{2}^{max}\). \(\lambda_{2}^{max}\) is dataset independent and thus fixed in our implementation. Finally, each subgraph \(\mathcal{G}_{i}\) is converted into a hypergraph \(\mathcal{H}_{i}=(V_{i},E_{i})\) such that the vertex set \(V_{i}\) of the hypergraph is initialised to be the vertex set of \(\mathcal{G}_{i}\). The edge set \(E_{i}\) is then constructed by adding all hyperedges \(e\in E\) whose strict majority of element vertices appear in \(V_{i}\), i.e. \(E_{i}:=\{e\in E\mid|e\cap V_{i}|>|e|/2\}\). As a consequence, neither nodes nor edges are lost during clustering. \(\mathsf{HClustering}\) returns the set of sub-hypergraphs \(\{\mathcal{H}_{1},\mathcal{H}_{2},...,\mathcal{H}_{k}\}\). After partitioning, we run the rest of the pipeline with \(L\) set to the diameter of each \(\mathcal{H}_{i}\).
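A compact Python sketch of these two ingredients (clique expansion and the sweep-set bipartition) is given below; the unit clique-expansion weights, the example \(\lambda_{2}^{max}\) value, and the omission of the hyperedge-reassignment step are simplifications of ours rather than choices fixed by the implementation.

```python
import numpy as np

def clique_expand(hyperedges, num_nodes):
    """Clique expansion: each hyperedge adds unit weight between every pair of its members."""
    W = np.zeros((num_nodes, num_nodes))
    for e in hyperedges:
        for u in e:
            for v in e:
                if u != v:
                    W[u, v] += 1.0
    return W

def cheeger_sweep_cut(W, lambda2_max=0.25):
    """Sweep-set bipartition along a sparse cut; returns None if lambda_2 > lambda2_max."""
    if len(W) < 2:
        return None
    d = np.maximum(W.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt       # symmetric normalised Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    if eigvals[1] > lambda2_max:                               # termination criterion of Sec. 4.2
        return None
    order = np.argsort(D_inv_sqrt @ eigvecs[:, 1])             # sweep along the Fiedler-like vector
    best, best_cond, vol = None, np.inf, d.sum()
    for k in range(1, len(order)):
        S, T = order[:k], order[k:]
        cut = W[np.ix_(S, T)].sum()
        cond = cut / min(d[S].sum(), vol - d[S].sum())         # conductance of the sweep set
        if cond < best_cond:
            best, best_cond = k, cond
    return set(order[:best].tolist()), set(order[best:].tolist())
```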
Our entire pipeline for learning abstract concepts from a relational database is summarised in Algorithm 2.
### Running Fewer Random Walks
As a final optimization step, we comment on how the number of random walks can be further reduced. The number of walks as implied by Theorem 1 can be very large since \(P^{*}\) grows exponentially with \(L\). Therefore in practice, rather than running enough walks to guarantee \(\varepsilon\)-boundedness for all path signatures, we only run enough walks to guarantee \(\varepsilon\)-boundedness for the top \(k\) most common path signatures.
**Theorem 4** (Fewer Random Walks).: _An upper bound on \(N\) sufficient for the \(k^{th}\) most probable path to have uncertainty
less than or equal to \(\varepsilon\) is_
\[N=\frac{\left(k+1\right)\left(\gamma+\ln P^{*}\right)-1}{\varepsilon^{2}}.\]
In our implementation, we use \(k=3\) since we deem it to be the smallest value (and therefore requiring the fewest random walks) that still allows for meaningful comparison between path distributions.
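For concreteness, the bound can be evaluated as follows (a sketch with our own function name); with two edge labels and \(\varepsilon=0.1\) it yields roughly \(2.9\times 10^{3}\) walks for \(L=9\) and \(1.5\times 10^{3}\) for \(L=4\), close to the values quoted in Section 5.

```python
import math

def fewer_walks(num_labels: int, L: int, eps: float, k: int = 3) -> int:
    """Theorem-4 bound: N so that the k-th most probable path has uncertainty <= eps."""
    gamma = 0.5772156649015329                      # Euler-Mascheroni constant
    p_star = 1 + num_labels * (num_labels ** L - 1) / (num_labels - 1)
    return math.ceil(((k + 1) * (gamma + math.log(p_star)) - 1) / eps ** 2)
```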
## 5 Extended Example
We now illustrate the entire PRISM pipeline through an extended example, shown in Fig 3. In the figure, we consider the hypergraph representation of a dataset describing a physics and history department in a university, containing two types of entities (person and book) and two relations (teaches(person,person) and reads(person,book)).
The first stage of PRISM applies hierarchical clustering to the dataset to identify densely-connected nodes. In this example, the physics and history departments are only connected by a single, spurious link, and the hierarchical clustering stops after one iteration, cutting along the departments. In addition, we can verify here that the hierarchical clustering results in an almost two-fold computational speed-up: The original hypergraph in Fig 3(i) has diameter 9. If we set \(\varepsilon=0.1\), then Theorem 4 gives an upper bound on the \(\varepsilon\)-optimal number of random walks for accurate path distributions of \(N=3.0\times 10^{3}\). After hierarchical clustering, the hypergraphs have diameter 4, and Theorem 4 for the same \(\varepsilon\) gives an upper bound of \(N=1.6\times 10^{3}\).
After hierarchical clustering, PRISM applies symmetry clustering in two stages: first by identifying distance-symmetric sets based on their truncated hitting times, then by identifying path-symmetric nodes within these sets based on path distributions. The first stage only serves to speed up the subsequent path-symmetric clustering since path-symmetry implies distance symmetry (Rmk. 1), but checking
Figure 4: **Changing the Source Node** In this case, the source node is P\({}_{4}\) (a professor) and we obtain a different, although intuitive partitioning: P\({}_{5}\) is a colleague of P\({}_{4}\), {P\({}_{1}\),P\({}_{2}\),P\({}_{6}\),P\({}_{7}\),P\({}_{8}\)} are P\({}_{4}\)’s students, P\({}_{3}\) is a student that P\({}_{4}\) is not teaching and {B\({}_{1}\),B\({}_{2}\)} are the academic books of the department. Note how it is possible to extract these abstract concepts, even though the only explicit information we initially provided was that entities are either books or people.
Figure 3: **PRISM Pipeline**: A visual example of the PRISM algorithm applied to an academic departments toy dataset. Nodes P\({}_{i}\) are entities of type person, while B\({}_{i}\) are entities of type book. Black edges represent teaches(person,person), and red edges represent reads(person,book) predicates. Although not explicitly annotated in the data, entities {P\({}_{4}\),P\({}_{5}\),P\({}_{10}\),P\({}_{11}\)} are in fact professors and the remaining person entities are students. _Hierarchical Clustering Preprocessing_: The physics and history departments are connected by a single spurious link. In this example, hierarchical clustering therefore stops after one iteration, cutting along the departments (dotted line). To avoid information loss, the spurious link between P\({}_{8}\) and B\({}_{4}\) will be preserved in one of the clusters (Sec. 4.2). _Symmetry Clustering_: Here we focus on the left sub-graph obtained from hierarchical clustering in (i). Running random walks from B\({}_{1}\), we show examples of the distance-symmetric and path-symmetric clusters that we obtain in (ii) and (iii), respectively. Note how {P\({}_{1}\),P\({}_{2}\),P\({}_{3}\)} in (ii) is partitioned into {P\({}_{1}\),P\({}_{2}\)} and {P\({}_{3}\)} in (iii) since path-symmetry is more stringent than distance symmetry.
distance symmetry is quicker (\(\mathcal{O}(n)\) vs \(\mathcal{O}(n\ln n)\) for PathSymmetricClustering). Note that, in this example, the source node was chosen as \(\texttt{B}_{1}\) and the hypergraph has a high degree of symmetry relative to the source node, which explains why the distance-symmetric and path-symmetric sets are almost identical (Fig. 3 (ii) and (iii)). For more realistic datasets, where global symmetries in a hypergraph are rare, the differences between distance-symmetric and path-symmetric clustering will be more pronounced.
We finish this section by illustrating the effect of changing the source node of the random walks. Recall that sets of symmetric nodes, i.e., the abstract concepts, are always found with respect to a specific source node. Changing the source node, therefore, changes the learnt concepts. This idea is illustrated in Fig 4, where the source node is changed from \(\texttt{B}_{1}\) to \(\texttt{P}_{4}\), resulting in different clusterings. When random walks were run from \(\texttt{B}_{1}\) we obtained the familiar concepts of teachers, colleagues, students and books. However, Fig 4 illustrates how abstract concepts can often be less intuitive, but still illustrate subtle relationships in the data. In PRISM we run random walks from each node in the hypergraph in turn. This helps to identify a wide range of abstract concepts.
## 6 Experiments
We compare our motif-finding algorithm, PRISM, against the current state-of-the-art, LSM [10] and BOOSTR [10], in terms of speed and accuracy of the mined MLNs.
**Datasets** We used benchmark datasets adopted by the structure learning literature: UW-CSE [11], IMDB, and WEBKB. The IMDB dataset is subsampled from the IMDB.com database and describes relationships among movies, actors and directors. The UW-CSE dataset describes an academic department and the relationships between professors, students and courses. The WEBKB consists of Web pages and hyperlinks collected from four computer science departments. Each dataset has five splits. Each time we used one split to test accuracy and the remaining splits for training. The reported results are the average over all five permutations.
**Problem** Given a dataset with partial observations, we want to predict the truth values for unobserved data. For example, for the IMDB dataset we might not know every actor who starred in a movie. We then predict, for each actor in the database, the likelihood of an actor to have starred in a given movie. We remark that our unobserved data spans across every possible predicate in the database, e.g. for IMDB this would include StarringIn(movie,person), Actor(person)... This problem thus reduces to predicting missing edges in the hypergraph.
**Baseline and Evaluation** We used the entire LSM pipeline [10] as a baseline. We used the lifted belief propagation inference tool of Alchemy [10] to calculate the averaged conditional log-likelihood on each entity (ground atom) in the test split. For LSM, we used the same hyperparameters as originally adopted by the authors [10]. In addition, we compared our work to the authors' publicly available implementation of BOOSTR [10].
**Experiment Setup** Firstly, we ran PRISM and then the remainder of the unmodified LSM pipeline. We used \(\varepsilon=0.1\) and \(\alpha=0.01\) throughout, as both are dataset-independent. We ran all experiments on a desktop with 32 GB of RAM and a 12-core 2.60GHz i7-10750H CPU.
**Metrics** We used standard measures from the structure learning literature. In particular, we measured accuracy and conditional log-likelihood for all datasets, as well as the area under the precision-recall curve (AUC) as it provides a more robust measure. The runtimes of the motif-finding step and the overall structure learning time are also reported.
**Results** In Table 1, we see that compared to LSM, we improve in accuracy on the IMDB dataset by 6%, while on UW-CSE and WEBKB the improvement is negligible. This is because LSM already found rules that generalized the data extremely well. However, as expected, the runtime of our algorithm is significantly reduced for all datasets. For motif finding, we see that our optimised algorithm is 10x-20x faster than LSM's motif-finding time. The overall structure learning computation is up to 5x faster than LSM. This is despite the main computational bottleneck for structure learning occurring during rule induction and evaluation - parts of the pipeline that were left unmodified. This suggests that our algorithm more tightly constrains the subsequent rule induction by finding more accurate motifs, thereby giving a speed
| Dataset | Algorithm | AUC | CLL | ACC | MF TIME (s) | SL TIME (s) |
| --- | --- | --- | --- | --- | --- | --- |
| IMDB | PRISM | **0.141 \(\pm\) 0.027** | **-0.18 \(\pm\) 0.03** | **0.84 \(\pm\) 0.02** | **0.086 \(\pm\) 0.018** | 320 \(\pm\) 40 |
| | LSM | 0.12 \(\pm\) 0.03 | -0.25 \(\pm\) 0.06 | 0.78 \(\pm\) 0.04 | 1.25 \(\pm\) 0.10 | 430 \(\pm\) 20 |
| | BOOSTR | 0.062 \(\pm\) 0.013 | -0.69 \(\pm\) 0.006 | 0.504 \(\pm\) 0.004 | N/A | **165.7 \(\pm\) 129** |
| UWCSE | PRISM | **0.402 \(\pm\) 0.028** | **-0.0098 \(\pm\) 0.0009** | **0.993 \(\pm\) 0.002** | **0.40 \(\pm\) 0.06** | 640 \(\pm\) 350 |
| | LSM | 0.392 \(\pm\) 0.023 | **-0.0098 \(\pm\) 0.0009** | 0.992 \(\pm\) 0.002 | 4.23 \(\pm\) 0.80 | 3140 \(\pm\) 270 |
| | BOOSTR | 0.0098 \(\pm\) 0.003 | -2.114 \(\pm\) 0.004 | 0.121 \(\pm\) 0.001 | N/A | **30.5 \(\pm\) 3.9** |
| WEBKB | PRISM | **0.57 \(\pm\) 0.04** | **-0.0092 \(\pm\) 0.0011** | **0.991 \(\pm\) 0.002** | **0.118 \(\pm\) 0.038** | 102 \(\pm\) 5 |
| | LSM | **0.57 \(\pm\) 0.04** | **-0.0092 \(\pm\) 0.0011** | **0.991 \(\pm\) 0.002** | 2.5 \(\pm\) 0.4 | 220 \(\pm\) 10 |
| | BOOSTR | 0.0335 \(\pm\) 0.0021 | -2.14 \(\pm\) 0.09 | 0.118 \(\pm\) 0.010 | N/A | **9.3 \(\pm\) 0.4** |

Table 1: Area Under the Precision Recall Curve (AUC), Conditional Log Likelihood (CLL), Accuracy (ACC), Motif Finding (MF) time, and Structure Learning (SL) time comparisons of PRISM, LSM and BOOSTR on three datasets.
improvement in these areas of the pipeline too.
We are slower than BOOSTR. However, PRISM's accuracy drastically improves over that of BOOSTR. We believe the differences in time and accuracy between datasets for BOOSTR stem from the quality of the background knowledge: while background knowledge was given for IMDB and UW-CSE, it was not for WEBKB. No background knowledge was provided to LSM or PRISM.
## 7 Related Work
In this section, we will review prior art in structure learning across a variety of logical languages. As we show below, every one of these approaches is based on learnt or user-defined templates to restrict the search space of candidate formulae. These templates are exactly the motifs that we are finding automatically and efficiently with the proposed framework.
**ILP** To alleviate the need to manually provide logical theories, several communities have developed techniques for inducing logical theories. One of the most influential families of techniques for mining Horn clauses is that of _Inductive Logic Programming_ (ILP), e.g., FOIL [12], MDIE [13] and Inspire [14]. Recently, Evans and Grefenstette proposed a differentiable variant of ILP [1] to support the mining of theories in noisy settings. ILP techniques require users to provide in advance the patterns of the formulas to mine, as well as to provide both positive and negative examples. The above requirements, along with issues regarding scalability [10], restrict the application of ILP techniques in large and complex scenarios. Our work specifically focuses on finding these patterns automatically and is not restricted to Horn clauses.
Recently, several techniques have aimed to mine rules in a differentiable fashion. One of them is Neural LP [15], a differentiable rule mining technique based on TensorLog [1]. The authors in [12] presented a RESCAL-based model to learn from paths in knowledge graphs, while Sadeghian et al. proposed DRUM, a differentiable technique for learning uncertain rules in first-order logic [1]. A limitation of the above line of research is that it mainly focuses on rules of a specific transitive form only. Other techniques for differentiable rule mining have been proposed in [16, 17]. In contrast to this line of work, our motif-finding algorithm helps in pipelines that can find more general first-order logic rules.
**MLN** The first and somewhat naive structure learning algorithm proposed for MLNs is called _top-down structure learning_ (TDSL) [11]. The idea is to perform a near-exhaustive search for candidate logical rules and then construct an MLN by recursively retaining rules that lead to the best improvement in the pseudo-likelihood approximation. That means that the algorithm starts with S2 directly. Due to the exponential search space, the algorithm is not able to find long rules in large datasets and fails to find rules that truly generalize and capture the underlying data. The following approaches in this line of research all prepended the rule generation with a pattern-finding step (S1) - the core of our research. The first paper that proposed such an approach was _bottom-up structure learning_ (BUSL) [15], where the idea was to pre-specify template networks akin to our motifs, that would be good candidates for potential rules and iteratively build on these templates to find more complex rules.
To tackle the high computational overhead of structure learning, [11] introduce BOOSTR, a technique that simultaneously learns the weights and the clauses of an MLN. The key idea is to transform the problem of learning MLNs by translating MLNs into regression trees and then using functional gradient boosting [10] along those trees to find clauses. Further, it attempts to learn under unobserved data; to this end, the authors introduced an EM-based boosting algorithm for MLNs. This approach also requires templates; however, they must be user-defined, which requires additional effort and can restrict applications. While showing promising results in terms of runtime, the technique supports only Horn clauses, and its performance drastically decreases in the absence of background knowledge, as we later show in our empirical results (Sec. 6).
The current SOTA in this line of research is _learning through structural motifs_ (LSM) [11], where, similar to the template networks, motifs are identified in the hypergraph representation of the data by running random walks on the graph and identifying symmetric patterns through the path-signature symmetry of the walks. Finding good motifs or templates is the differentiating point between the different algorithms and has been shown to have the most significant impact on the quality of the resulting rules. A principled, robust and efficient algorithm for finding such motifs could therefore improve these and future algorithms. We believe that we are the first to propose such a principled and efficient algorithm for finding motifs.
## 8 Conclusion
We made a key step toward learning the structure of logical theories - mining structural motifs. We presented the first principled motif-mining technique in which users can control the uncertainty of mined motifs and the softness of the resulting rules. Furthermore, we reduced the overall complexity of motif mining through a novel \(\mathcal{O}(n\ln n)\) clustering algorithm. Our empirical results against the state-of-the-art show improvements in runtime and accuracy by up to 80% and 6%, respectively, on standard benchmarks. While we focused on lifted graphical models, our work can be used to learn the formulas of other types of logical theories as well. One interesting direction of future work is to integrate our motif-mining technique with differentiable rule mining approaches, as our empirical analysis shows that purely symbolic approaches to that task can sometimes be the bottleneck. A second direction is to integrate our motif-mining approach with Graph Neural Networks and provide a similar formal analysis.
2305.08304 | Superconductivity at epitaxial LaTiO3-KTaO3 interfaces | Design of epitaxial interfaces is a pivotal way to engineer artificial
structures where new electronic phases can emerge. Here we report a systematic
emergence of interfacial superconducting state in epitaxial heterostructures of
LaTiO3 and KTaO3. The superconductivity transition temperature increases with
decreasing the thickness of LaTiO3. Such behavior is observed for both (110)
and (111) crystal oriented structures. For thick samples, the finite resistance
developing below the superconducting transition temperature increases with
increasing LaTiO3 thickness. Consistent with previous reports, the (001)
oriented heterointerface features high electron mobility of 250 cm2/Vs and
shows no superconducting transition down to 40 mK. Our results imply a
non-trivial impact of LaTiO3 on the superconducting state and indicate how
superconducting KTaO3 interfaces can be integrated with other oxide materials. | D. Maryenko, I. V. Maznichenko, S. Ostanin, M. Kawamura, K. S. Takahashi, M. Nakamura, V. K. Dugaev, E. Ya. Sherman, A. Ernst, M. Kawasaki | 2023-05-15T02:19:22Z | http://arxiv.org/abs/2305.08304v1 | # Superconductivity at epitaxial LaTiO\({}_{3}\)-KTaO\({}_{3}\) interfaces
###### Abstract
Design of epitaxial interfaces is a pivotal way to engineer artificial structures where new electronic phases can emerge. Here we report a systematic emergence of interfacial superconducting state in epitaxial heterostructures of LaTiO\({}_{3}\) and KTaO\({}_{3}\). The superconductivity transition temperature increases with decreasing the thickness of LaTiO\({}_{3}\). Such behavior is observed for both (110) and (111) crystal oriented structures. For thick samples, the finite resistance developing below the superconducting transition temperature increases with increasing LaTiO\({}_{3}\) thickness. Consistent with previous reports, the (001) oriented heterointerface features high electron mobility of 250 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) and shows no superconducting transition down to 40 mK. Our results imply a non-trivial impact of LaTiO\({}_{3}\) on the superconducting state and indicate how superconducting KTaO\({}_{3}\) interfaces can be integrated with other oxide materials.
## I Introduction
Interfaces between materials can harbor electronic structures distinct from those of the bulk constituents. One instance is the formation of a metallic layer at the junction of two insulators. A widely celebrated example is the LaAlO\({}_{3}\)/SrTiO\({}_{3}\) interface, which harbors not only high-mobility carriers but can also become superconducting at around 300 mK [1; 2; 3]. This rather well controlled system became a fertile testbed to explore two-dimensional superconductivity. In such a strongly asymmetric heterostructure, it was straightforward to assay the role of spin-orbit coupling (SOC) for the superconducting phase, even though the conduction band is formed by \(3d\) orbitals of titanium with a moderate SOC energy on the order of 40 meV [4; 5; 6; 7]. In fact, it is anticipated that a sizable spin-orbit coupling can be favorable for unconventional Cooper pairing and for the realization of Majorana states [8; 9; 10; 11; 12]. Therefore, the recent observation of superconductivity in KTaO\({}_{3}\), whose conduction band is formed by \(5d\) Ta orbitals with a much larger SOC energy of about 300 meV, may provide a new twist in the formation of a superconducting phase in two dimensions. Furthermore, considering that bulk KTaO\({}_{3}\) has still not been demonstrated to become superconducting, the emergence of interfacial superconductivity in such a system can provide a distinct insight into the Cooper pair formation mechanism [13]. Being isostructural to SrTiO\({}_{3}\), the perovskite oxide KTaO\({}_{3}\) is a quantum paraelectric and has a band gap of about 3.6 eV. The conduction band around the \(\Gamma\) point is split by the large spin-orbit coupling into well-separated bands with effective total angular momentum \(J=1/2\) (higher energy) and \(J=3/2\) (lower energy).
The first observation of interfacial KTaO\({}_{3}\) superconductivity dates back to experiments with the ionic liquid gating technique, which revealed a superconducting transition at 50 mK for the (001)-oriented KTaO\({}_{3}\) surface [14]. Recently, the emergence of superconductivity at (110)- and (111)-oriented KTaO\({}_{3}\) surfaces has been demonstrated, in the majority of cases by growing a EuO layer or depositing an amorphous LaAlO\({}_{3}\) layer [15; 16; 17; 18; 19; 20; 21]. The cubic lattice structure of EuO with a lattice constant \(a=5.145\) Å matches neither the (110) nor the (111) orientation of the KTaO\({}_{3}\) crystal structure, resulting in the formation of either polycrystalline or defective layers at the interface [16; 17]. Superconductivity was also observed in a (111)-oriented KTaO\({}_{3}\) heterostructure with a 10 nm thick La\({}_{2/3}\)Sr\({}_{1/3}\)MnO\({}_{3}\) top layer [22]. To have full control over the emergent superconducting state, it is important to have excellent control over the interface's electronic properties, which also requires understanding the role of the top layer in the emergent phenomena. This control paves the way for integrating superconducting KTaO\({}_{3}\) interfaces with other oxide materials.
Here, we report the emergence of superconductivity in epitaxially grown structures of LaTiO\({}_{3}\) on (110)- and (111)-oriented KTaO\({}_{3}\). We observe that the superconducting transition temperature increases with decreasing thickness of the LaTiO\({}_{3}\) layer. For thick samples, the resistance \(R_{\rm xx}\) remains finite below the superconducting transition temperature, and this \(R_{\rm xx}\) value increases with increasing LaTiO\({}_{3}\) thickness. These observations indicate a non-trivial impact of LaTiO\({}_{3}\) on the interface's electronic properties. Our finding may facilitate engineering of the superconducting phase at the interface. Bulk LaTiO\({}_{3}\) is a Mott insulator with an orthorhombic crystal structure and lattice parameters \(a=b=5.595\) Å and \(c=7.912\) Å. Therefore,
LaTiO\({}_{3}\) can be thought of as a quasi-cubic structure with an effective lattice constant \(\sqrt{a^{2}+b^{2}}/2\cong c/2=3.956\) Å, which thus differs by only about 0.1% from the lattice constant of cubic KTaO\({}_{3}\), \(a=b=c=3.989\) Å. This facilitates the growth of LaTiO\({}_{3}\)/KTaO\({}_{3}\) heterostructures on the three main facets of a cubic crystal system, i.e. (001), (110), and (111) [24].
## Results and Discussion
### Epitaxial growth
LaTiO\({}_{3}\)/KTaO\({}_{3}\) structures are grown using the pulsed laser deposition technique. A piece of KTaO\({}_{3}\) substrate with a size of about 3 mm x 3 mm is attached to the substrate holder using silver epoxy. A polycrystalline La\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) target is ablated in vacuum with a repetition rate of 2 Hz and a laser fluence of 1.6 J cm\({}^{-2}\). The growth chamber is equipped with a reflection high-energy electron diffraction (RHEED) monitor allowing
Figure 1: a) Crystal structures of LaTiO\({}_{3}\) and KTaO\({}_{3}\)[23]. b) Epitaxial growth process steps for LaTiO\({}_{3}\)/KTaO\({}_{3}\) heterostructures. Shown are RHEED patterns at various steps of the growth of the (110)-oriented structure. A similar evolution of the RHEED pattern with temperature is also observed for structures grown on (001) and (111) KTaO\({}_{3}\) crystal orientations. c) X-ray diffraction patterns of the (110)-oriented substrate (blue trace) and of the film on the substrate (green trace). The diffraction patterns are shifted along the vertical axis for clarity. The red line is the best fit describing the positions of the Laue fringes.
us to observe the growth process _in-situ_. Figure 1b depicts exemplary RHEED patterns during the growth of a (110)-oriented structure. After loading the substrate into the growth chamber, the substrate is heated to 400\({}^{\circ}\)C. During this heating step no change in the RHEED pattern is observed. In fact, atomic force microscopy measurements show that the surface morphology barely changes at 400\({}^{\circ}\)C (see Supplementary Information). To prevent the degradation of the KTaO\({}_{3}\) surface upon further heating and to suppress the formation of defects, the substrate surface is covered with an amorphous layer by ablating the La\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) target, as indicated by the vanishing RHEED pattern after this process step. Upon heating to 700\({}^{\circ}\)C the amorphous layer crystallizes and the streak pattern forms gradually. This solid-state epitaxial step at 700\({}^{\circ}\)C is favored by the small lattice mismatch between LaTiO\({}_{3}\) and KTaO\({}_{3}\), which gives a clear diffraction pattern correspondence between the substrate and the crystallized layer. The crystallized layer enables subsequent homoepitaxial growth, which takes place at a lower temperature of 600\({}^{\circ}\)C. The heterostructures discussed in this work differ by the LaTiO\({}_{3}\) layer thickness deposited at 600\({}^{\circ}\)C. After the growth, the heterostructures are cooled to room temperature and are left to thermalize for about 12 h. Subsequently, the structures are covered with a thin amorphous layer by ablating the La\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) target to prevent potential degradation of the structures at ambient conditions. We note that the growth conditions favor the stabilization of the LaTiO\({}_{3}\) phase [25]. To check the film crystal structure, we grow a thick LaTiO\({}_{3}\) layer with 735 pulses. Figure 1c depicts its x-ray diffraction pattern (green trace) featuring Laue fringes, which indicate a high crystalline quality of the film. Due to the similar lattice parameters of LaTiO\({}_{3}\) and KTaO\({}_{3}\), the Bragg diffraction peak of LaTiO\({}_{3}\) is indiscernible from the overlapping diffraction pattern of the substrate (blue trace). By fitting the positions of the Laue fringes (red line) we determine a film thickness of 14 nm, which is used to estimate the thickness of thinner films from a given number of pulses, as depicted in Fig. 1b for each process step. The stoichiometry of the structures, checked with energy-dispersive X-ray spectroscopy, was comparable to that of the target. We note, however, that the exact stoichiometry of the structure can have an impact on the interface conductivity [26; 27; 28].
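For context, in the standard kinematical description of Laue oscillations (quoted here as a reminder, not as a statement of the exact fitting model used), the film thickness \(t\) is related to the angular period \(\Delta(2\theta)\) of adjacent fringe maxima around the Bragg angle \(\theta_{B}\) by

\[t\approx\frac{\lambda}{\Delta(2\theta)\,\cos\theta_{B}},\]

where \(\lambda\) is the x-ray wavelength, so that more closely spaced fringes correspond to a thicker film.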
### Electrical Transport Characteristics
The transport characteristics of the heterostructures are shown in Fig. 2. We employed a Physical Property Measurement System (PPMS, Quantum Design) down to 2 K and an adiabatic demagnetization refrigerator (ADR) stage, which is compatible with the PPMS platform, to characterize the superconducting transition of the heterostructures down to 150 mK. The samples are directly bonded with aluminum wires as depicted in Fig. 3a, so that they can be characterized along two orthogonal directions simultaneously. Figures 2a and 2b depict exemplary temperature dependences of \(R_{\text{xx}}\) for the three crystal orientations. Consistent with previous reports, the (111)-oriented heterostructure has a higher superconducting transition temperature than the (110)-oriented heterostructures [15; 16; 17; 19]. More importantly, we observe that the onset temperature of the superconducting phase \(T_{\text{c}}^{\text{onset}}\) strongly depends on the thickness of the LaTiO\({}_{3}\) layer. Such an impact of the top layer thickness on the superconducting state has not been reported before. Figure 2c shows that \(T_{\text{c}}^{\text{onset}}\) increases with decreasing thickness of the LaTiO\({}_{3}\) layer. Such behavior is observed for both (110)- and (111)-oriented heterostructures. Following this finding, we measured one of the (001)-oriented heterostructures with a 1.7 nm thick LaTiO\({}_{3}\) layer in a dilution refrigerator at temperatures down to 40 mK, but did not observe a superconducting transition. The absence of a superconducting phase for (001)-oriented heterostructures is consistent with a previous report [19]. To check the conductance of the LaTiO\({}_{3}\) layer, we grew a 2.6 nm thin LaTiO\({}_{3}\) layer on both GdScO\({}_{3}\) and NdScO\({}_{3}\) substrates according to the growth procedure of Fig. 1. The resistance of such structures at room temperature was on the order of 10\({}^{6}\) Ohm.
Figure 3b compares the resistance values of heterostructures above (\(T=2\) K, left panel) and below (\(T=150\) mK, right panel) the superconducting transition. It is noticeable
Figure 2: a) Exemplary temperature dependence of the resistance for LaTiO\({}_{3}\)/KTaO\({}_{3}\) heterostructures defined on (001), (110) and (111) crystal surfaces. b) A superconducting state is observed for (110)- and (111)-oriented heterostructures, while the (001) structure remains metallic down to 40 mK. Shown is the definition of the superconductivity onset temperature \(T_{\text{c}}^{\text{onset}}\). c) \(T_{\text{c}}^{\text{onset}}\) decreases with increasing thickness of LaTiO\({}_{3}\). We assign an error bar of 150 mK for the (001)-oriented heterostructures, which were not measured in the dilution refrigerator. Thick lines are guides to the eye.
that \(R_{\rm xx}\) at \(T=2\) K increases with increasing LaTiO\({}_{3}\) thickness for both the (111)- and (110)-oriented structures, while it remains almost constant for the (001)-oriented heterostructures. Such behavior points to conduction at the interface rather than in the LaTiO\({}_{3}\) layer alone, in which case the resistance would decrease with increasing LaTiO\({}_{3}\) thickness. To further elucidate the properties of the heterostructures, we present in Fig. 4 the dependence of both the electron mobility and the charge carrier density on the LaTiO\({}_{3}\) thickness. The charge carrier density \(n\) is estimated from the Hall effect measurements, while the mobility \(\mu\) is estimated from the sample conductance in zero magnetic field. Among the three crystal orientations, the (001)-oriented heterointerface has the highest electron mobility, on the order of 250 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\), which does not depend on the LaTiO\({}_{3}\) thickness. Both the electron mobility and the charge carrier density values are consistent with those obtained for LaTiO\({}_{3}\)/KTaO\({}_{3}\) (001)-oriented structures grown by molecular beam epitaxy [24]. For both the (110)- and (111)-oriented heterostructures the electron mobility shows a distinct behavior; it is largest for thin structures, i.e. around 100 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\), and decreases with increasing LaTiO\({}_{3}\) thickness, reaching a saturation value of around 30 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) above a LaTiO\({}_{3}\) thickness of 2 nm. By contrast, the charge carrier density increases rapidly with LaTiO\({}_{3}\) thickness, by about a factor of 1.5 (lower panel in Fig. 4), and saturates above 2 nm. This seems to be a common tendency for all three crystal orientations. An increase of the sheet charge carrier density \(n\) (Fig. 4b) and, at the same time, a decrease of \(T_{\rm c}^{\rm onset}\) with LaTiO\({}_{3}\) thickness (Fig. 2c) establish a tendency opposite to previous observations in KTaO\({}_{3}\)-based superconducting structures, for which the superconducting transition temperature increases with increasing \(n\)[19; 20]. The presented results indicate an impact of the epitaxial LaTiO\({}_{3}\) layer on the electronic properties of the interface, which also affects the superconducting regime, as we discuss now.
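For reference, the single-band relations presumably underlying these estimates (quoted here for context rather than as a description of the exact analysis) are

\[n=\frac{1}{e\,|\mathrm{d}R_{\rm xy}/\mathrm{d}B|},\qquad\mu=\frac{1}{e\,n\,R_{\rm s}},\]

where \(e\) is the elementary charge, \(n\) the sheet carrier density extracted from the linear Hall slope, and \(R_{\rm s}\) the sheet resistance in zero magnetic field.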
The right panel in Fig. 3b depicts the dependence of \(R_{\rm xx}\) at 150 mK (well below \(T_{\rm c}^{\rm onset}\)) on the LaTiO\({}_{3}\) thickness for all heterostructures. For the sake of comparison, it also contains data points for the high-mobility (001) interface that does not become superconducting in our experiments. A well-developed superconducting state characterized by \(R_{\rm xx}=0\)\(\Omega\) is
Figure 4: Mobility (panel a) and charge carrier density (panel b) dependence on the LaTiO\({}_{3}\) layer thickness at \(T=2\) K for different heterostructure orientations. The charge carrier density is estimated from the transverse resistance \(R_{\rm xy}\) (Hall effect), which changes linearly with the magnetic field \(B\). Thick lines are guides to the eye.
Figure 3: a) Photograph of a sample with attached wires used to measure the temperature dependence of the sample resistance, together with a scheme of the electrical connections. The multi-channel source-measurement unit of the PPMS is used to measure two orthogonal crystal directions simultaneously. b) Resistance at zero magnetic field at \(T=2\) K (above the superconducting transition, left panel) and at \(T=150\) mK (below the superconducting transition, right panel) as a function of LaTiO\({}_{3}\) thickness. The color encodes the crystal orientation of the heterostructures. Thin heterostructures with (110) and (111) crystal orientations show \(R_{\rm xx}=0\)\(\Omega\) at \(T=150\) mK. As the LaTiO\({}_{3}\) thickness increases, the residual \(R_{\rm xx}\) increases. The thick lines are guides to the eye.
reached for both (110)- and (111)-oriented heterostructures, but only with a thin LaTiO\({}_{3}\) layer. For thicker LaTiO\({}_{3}\) layers \(R_{\rm xx}\) attains a non-zero value, which increases with increasing LaTiO\({}_{3}\) thickness. Furthermore, we detect an anisotropy for the (110)-oriented structures. Open symbols in the right panel depict \(R_{\rm xx}\) values measured along the [1-10] direction at 150 mK. For the 2.1 nm and 2.6 nm thick samples, superconductivity along the [001] direction survives at 150 mK, while \(R_{\rm xx}\) along the [1-10] direction remains non-zero. (In the Supplementary Information we show the temperature dependence of \(R_{\rm xx}\) across the superconducting transition for all samples.) In contrast to the (110)-oriented structures, the [1-10] and [11-2] crystal directions of the (111)-oriented heterostructures appear to be equivalent. Since the heterostructures are grown by equivalent procedures, we conclude that a potential sample inhomogeneity cannot explain the anisotropy observed in the (110)-oriented structures. This surprising emergence of anisotropic behavior of \(R_{\rm xx}\) below the superconducting transition is perhaps related to the inherent electronic structure of the interface. Anisotropy of (110)-oriented heterostructures has been reported for the normal conducting state of SrTiO\({}_{3}\)-based heterostructures and is related to the different arrangement of interface atoms along the [001] and [1-10] directions [29; 30]. An indication of such an anisotropy in our (110)-oriented heterostructures might also appear in the normal conducting state: at \(T=2\) K, Figure 3b (left panel) shows that \(R_{\rm xx}\) along the [1-10] direction (open blue symbols) is larger than along the [001] direction (full blue symbols).
Beyond that, increasing the LaTiO\({}_{3}\) thickness affects the transport characteristics of both the (110)- and (111)-oriented heterostructures above the superconducting transition. In fact, for structures with a thicker LaTiO\({}_{3}\) layer, one clearly observes an increase of \(R_{\rm xx}\) proportional to \(\ln T\), indicating a contribution of the weak-localization correction to the sample resistance (see Supplementary Information). This has also been observed in superconducting LaTiO\({}_{3}\)/SrTiO\({}_{3}\) structures [31]. Conspicuously, when this localization behavior is strongly pronounced in our structures, the zero-resistance state (\(R_{\rm xx}=0\)\(\Omega\)) vanishes for the (110)- as well as for the (111)-oriented structures, as seen in the Supplementary Information. The (110) and (111) heterostructures feature a weak antilocalization behavior in magnetotransport, indicating significant spin-orbit coupling. Intriguingly, weak antilocalization is barely pronounced for the non-superconducting (001)-oriented heterostructures (see Supplementary Information).
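The \(\ln T\) dependence mentioned above is usually quantified by a simple logarithmic fit to the normal-state resistance; the following minimal sketch (with synthetic placeholder data, not our measured curves) shows one way to extract such a coefficient.

```
# Quantifying an R_xx ~ ln(T) upturn by a least-squares fit (synthetic
# placeholder data, not the measured curves of this work).
import numpy as np

def fit_lnT(temps_K, r_xx_ohm):
    """Fit R_xx = R0 + a*ln(T); returns (R0, a)."""
    a, r0 = np.polyfit(np.log(temps_K), r_xx_ohm, 1)
    return r0, a

T = np.array([2.0, 3.0, 5.0, 8.0, 12.0, 20.0])
R = 950.0 - 12.0 * np.log(T) + np.random.default_rng(0).normal(0.0, 0.5, T.size)
r0, a = fit_lnT(T, R)
print(f"R0 = {r0:.1f} Ohm, ln(T) coefficient = {a:.2f} Ohm")
```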
The observation that the superconducting transition depends on the KTaO\({}_{3}\) surface orientation is consistent with previous reports on superconductivity in KTaO\({}_{3}\)[15; 16; 17; 18; 19]. This allows us to conclude that the superconducting phase in our structures involves the electronic states of KTaO\({}_{3}\). At the same time, the LaTiO\({}_{3}\) thickness dependence of the transport characteristics, both in the superconducting and in the normal state, implies a non-trivial impact of the top layer on the electronic structure of the LaTiO\({}_{3}\)/KTaO\({}_{3}\) heterointerface. In the vicinity of the junction, the Ta atoms are in a 5+ state, whereas Ti is in a 3+ state. This charge discontinuity can lead to charge redistribution between the LaTiO\({}_{3}\) and KTaO\({}_{3}\) layers adjacent to the interface, creating an interfacial conducting layer. One such mechanism is the so-called polar catastrophe, which is based on the compensation of the diverging electrostatic energy at the interface [32]. This mechanism has been considered for various SrTiO\({}_{3}\)- and KTaO\({}_{3}\)-based heterostructures and can be effective for (001)- and (111)-oriented structures, but is not obvious for (110) structures [33; 34; 35; 24; 36]. Surface reconstruction and the modification of TiO\({}_{6}\) octahedra have also been considered for the emergence of conducting layers at the interface between band insulators and Mott insulators such as LaTiO\({}_{3}\)[37; 38; 39]. Moreover, oxygen defects can contribute to the emergence of a conducting layer. Additional experimental and theoretical efforts are required to elucidate how each of those mechanisms is realized in our superconducting LaTiO\({}_{3}\)/KTaO\({}_{3}\) structures. The interplay of those mechanisms will define the extent of the conducting layer, the interaction between the LaTiO\({}_{3}\) and KTaO\({}_{3}\) layers, and consequently the overall electronic structure.
## Conclusion
In summary, we have grown epitaxial LaTiO\({}_{3}\)/KTaO\({}_{3}\) heterostructures with (001), (110) and (111) crystal orientations and varying LaTiO\({}_{3}\) thickness. The (110)- and (111)-oriented heterostructures have a moderate electron mobility and a well-developed superconducting state. The (001)-oriented heterostructures have the highest electron mobility with no indication of a superconducting transition. The LaTiO\({}_{3}\) layer has a non-trivial impact on the emergence of the superconducting phase: with increasing LaTiO\({}_{3}\) thickness the superconducting transition temperature decreases and a finite resistance remains below the transition. This behavior seems to correlate with the emergence of electron weak localization. Furthermore, for the (110)-oriented heterostructures we observe a regime in which \(R_{\rm xx}=0\)\(\Omega\) along the [001] direction while it remains non-zero along the [1-10] direction, thus establishing anisotropic superconductivity in LaTiO\({}_{3}\)/KTaO\({}_{3}\) heterostructures. Our results may pave the way to engineering superconducting interfaces and to integrating superconducting KTaO\({}_{3}\) interfaces with oxide materials.
## Supplementary materials
See the supplementary material for additional details on superconducting transition, features of weak antilocalization behavior in the magnetic field and atomic force microscopy images.
## Acknowledgments
We would like to thank Dr. M. Kriener and Dr. M. Birch for fruitful discussion and careful reading of the manuscript. This work was supported by JSPS KAKENHI (22H04958). The work of E.S. is supported through Grants No. PGC2018-101355-B-I00 and No. PID2021-126273NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by the ERDF "A way of making Europe", and by the Basque Government
through Grant No. IT1470-22. The work of V.D. is supported by the National Science Center in Poland as a research project No. DEC-2017/27/B/ST3/02881.
|
2306.11305 | Progressive Fourier Neural Representation for Sequential Video
Compilation | Neural Implicit Representation (NIR) has recently gained significant
attention due to its remarkable ability to encode complex and high-dimensional
data into representation space and easily reconstruct it through a trainable
mapping function. However, NIR methods assume a one-to-one mapping between the
target data and representation models regardless of data relevancy or
similarity. This results in poor generalization over multiple complex data and
limits their efficiency and scalability. Motivated by continual learning, this
work investigates how to accumulate and transfer neural implicit
representations for multiple complex video data over sequential encoding
sessions. To overcome the limitation of NIR, we propose a novel method,
Progressive Fourier Neural Representation (PFNR), that aims to find an adaptive
and compact sub-module in Fourier space to encode videos in each training
session. This sparsified neural encoding allows the neural network to hold free
weights, enabling an improved adaptation for future videos. In addition, when
learning a representation for a new video, PFNR transfers the representation of
previous videos with frozen weights. This design allows the model to
continuously accumulate high-quality neural representations for multiple videos
while ensuring lossless decoding that perfectly preserves the learned
representations for previous videos. We validate our PFNR method on the UVG8/17
and DAVIS50 video sequence benchmarks and achieve impressive performance gains
over strong continual learning baselines. The PFNR code is available at
https://github.com/ihaeyong/PFNR.git. | Haeyong Kang, Jaehong Yoon, DaHyun Kim, Sung Ju Hwang, Chang D Yoo | 2023-06-20T06:02:19Z | http://arxiv.org/abs/2306.11305v3 | # Progressive Neural Representation
###### Abstract
Neural Implicit Representations (NIR) have gained significant attention recently due to their ability to represent complex and high-dimensional data. Unlike explicit representations, which require storing and manipulating individual data points, implicit representations capture information through a learned mapping function without explicitly representing the data points themselves. Such models are often pruned or quantized after training to accelerate encoding/decoding speed, yet we find that conventional methods fail to transfer learned representations to new videos. This work studies the continuous expansion of implicit video representations as videos arrive sequentially over time, where the model can only access the videos from the current session. We propose a novel neural video representation, _Progressive Neural Representation (PNR)_, that finds an adaptive substructure from the supernet for a given video based on the Lottery Ticket Hypothesis. At each training session, our PNR transfers the learned knowledge of the previously obtained subnetworks to learn the representation of the current video while keeping the past subnetwork weights intact. Therefore, it can almost perfectly preserve the decoding ability of the NIR on previous videos (i.e., avoid catastrophic forgetting). We demonstrate the effectiveness of our proposed PNR on sequential video representation compilation on the novel UVG8/17 video sequence benchmarks. The public code is available at [https://github.com/ihaeyong/PNR](https://github.com/ihaeyong/PNR).
## 1 Introduction
Neural Implicit Representation (NIR) [8; 21; 7; 26] is a research direction that aims to represent complex data, such as videos or 3D objects, as continuous functions learned by neural networks. Instead of explicitly describing data points, NIR models learn to compress high-dimensional data into a low-dimensional latent space and re-map it to the original high-dimensional space, allowing efficient data storage, compression, and synthesis. However, each high-dimensional data instance must occupy its own neural network as an encoding, so the required memory grows linearly when users compress multiple target data. Neural Video Representation [8; 6] variants deal with this issue by merging different videos into a single video before training. However, they have limited transferability by design, as learned models are compressed via weight pruning and quantization, and thus cannot improve the representations when new videos arrive at the model over successive time sessions. In this paper, inspired by incremental knowledge transfer and expansion in continual learning, we investigate a practical implicit representation learning scenario with video data, dubbed _video continual learning (VCL)_, which aims to accumulate neural implicit representations for multiple videos into a single model under the condition that the videos arrive in a sequential manner.
Continual Learning (CL) [40; 33; 48; 13] is a learning paradigm where a model learns over a series of sessions sequentially. It aims to mimic human cognition, characterized by the ability to learn new concepts incrementally throughout a lifetime without the degeneration of previously acquired functionality. Yet, incremental training of NIR is a challenging problem, since the model detrimentally loses the learned implicit representations of past session videos while encoding newly arrived ones, a phenomenon known as _catastrophic forgetting_[25]. This issue particularly matters as neural representation methods for videos encode and reconstruct the target data stream conditioned on its frame indices. The model therefore easily loses its generation ability while learning to continuously encode new videos, due to the distributional disparities between _holistic videos_ and their _individual frames_. Furthermore, the _compression phase_ of neural representation makes it difficult to transfer the model to future tasks. Various approaches have been proposed to address catastrophic forgetting during continual learning, which are conventionally classified as follows: (1) _Regularization-based methods_[19; 2; 16; 41; 28] aim to keep the learned information of past sessions during continual training aided by carefully designed regularization terms, (2) _Architecture-based methods_[47; 24; 36; 44; 17; 18] propose to minimize the inter-task interference via newly designed architectural components, and (3) _Rehearsal-based methods_[31; 4; 34; 46] utilize a set of real or synthesized data from the previous sessions and replay them. Note that rehearsal-based methods are often undesirable for continual learning on complex data since they need non-negligible memory to store high-dimensional samples in a buffer and revisit them to learn, suffering from prohibitively large memory consumption and computational cost.
To enhance neural representation incrementally on sequential videos, we propose a novel video continual learning method coined **P**rogressive **N**eural **R**epresentation (**PNR**). Given a backbone architecture, our proposed method aims to learn an adaptive subnetwork structure, along with its weights, to encode the incoming videos at each training session. We leverage the idea of the _Lottery Ticket Hypothesis (LTH)_, which demonstrates the existence of sparse subnetworks preserving a dense network's performance. However, searching for optimal subnetworks, often called _winning tickets_, during continual learning is inefficient, since it requires iterative training steps with repetitive pruning and retraining for each arriving task. To this end, our proposed PNR introduces a parametric score function that learns to generate binary masks, finding adaptive substructures for video encoding in each training session by directly selecting the top-\(c\) percent of weights ranked by their scores. We emphasize that PNR can find the optimal subnetwork in an online manner through joint training of the weights and the structure, bypassing arduous procedures in LTH such as iterative retraining, pruning, and rewinding. Our PNR allows subnetworks to overlap with those of previous sessions during training, to transfer the learned representation of previous videos when relevant, but keeps the weights for previous video sessions frozen. Consequently, we enable the model to continuously expand its representation space throughout consecutive video sessions while keeping the encoding and generation quality of previous videos intact (i.e., forgetting-free), even without resorting to a replay buffer that stores multiple high-dimensional frames.
Our contributions can be summarized as follows:
* We suggest a practical learning scenario for neural implicit representation where the model encodes multiple videos continually in successive training sessions. Earlier NIR methods suffer from poor transferability to new videos due to the distributional shift of holistic video and frames.
* We propose a novel progressive neural representation method for a sequential video compilation. The proposed method continuously learns a compact subnetwork for each video session given a supernet backbone while preserving the generative quality of previous videos perfectly.
* We demonstrate the effectiveness of our method on multiple sequential video sessions by achieving superior performance in average PSNR and MS-SSIM without any quantitative/qualitative degeneration in reconstructing previously encoded videos during sequential video compilation.
## 2 Related Works
Neural Implicit Representation (NIR).Neural Implicit Representations (NIR) [26] are neural network architectures for parameterizing continuous, differentiable signals. Based on coordinate information, they provide a way to represent complex, high-dimensional data with a small set of learnable parameters that can be used for various tasks such as image reconstruction [38; 39], shape regression [10; 29], and 3D view synthesis [27; 35]. Instead of using coordinate-based
methods, NeRV [8] proposes an image-wise implicit representation that takes frame indices as inputs, enabling fast and accurate video compression. NeRV has inspired further improvements in video regression by CNeRV [6], DNeRV [14], E-NeRV [21], NIRVANA [23], and HNeRV [7]. A few recent works have explored video continual learning (VCL) scenarios for NIR. To tackle non-physical environments, Continual Predictive Learning (CPL) [5] learns a mixture world model via predictive experience replay and performs test-time adaptation using non-parametric task inference. PIVOT [42] leverages the past knowledge present in pre-trained models from the image domain to reduce the number of trainable parameters and mitigate forgetting. CPL needs memory to replay, while PIVOT needs pre-training and fine-tuning steps. In contrast, we introduce a novel neural video representation referred to as _"Progressive Neural Representation (PNR)"_, which utilizes the Lottery Ticket Hypothesis (LTH) to identify an adaptive substructure within the dense network that is tailored to the specific video input index. Our PNR does not use memory, a pre-trained model, or fine-tuning for sequential video representation compilation.
Continual Learning. Most continual learning approaches introduce extra memory, such as additional model capacity [20; 45] or a replay buffer [32; 3]. However, several works have focused on building memory-efficient continual learners using pruning-based constraints to exploit the initial model capacity more compactly. CLNP [12] selects important neurons for a given task using \(\ell_{1}\) regularization to induce sparsity and freezes them to maintain performance; pruned neurons are then reinitialized for training on future tasks. Piggyback [24] trains task-specific binary masks on the weights given a pre-trained model. However, it does not allow for knowledge transfer among tasks, so the performance highly depends on the quality of the backbone model. HAT [36] proposes task-specific learnable attention vectors to identify significant weights per task. The masks are formulated as layer-wise cumulative attention vectors during continual learning. LL-Tickets [9] recently suggests sparse subnetworks, called lifelong tickets, that perform well on all tasks during continual learning. The method searches for more prominent tickets from the current ones if the obtained tickets cannot sufficiently learn the new task while maintaining performance on past tasks. However, LL-Tickets require external data to maximize knowledge distillation with learned models for prior tasks, and the ticket expansion process involves retraining and pruning steps. WSN [17] jointly learns the model weights and task-adaptive binary masks during continual learning. It prevents catastrophic forgetting of previous tasks by keeping the selected model weights, called winning tickets, intact at the end of each training session. However, WSN is not well suited for sequential video compilation since it does not consider uncorrelated video contexts. To overcome this weakness of WSN, our PNR explores more appropriate weights for representing each video with an additional random re-initialization step.
Figure 1: **Progressive Neural Representation (PNR) for Sequential Video Compilation: Image-wise neural implicit representation taking frame and video (session) indices as input and using a sparse MLP + NeRV blocks to output the whole image through multiple heads. We denote frozen, reused, and trainable parameters during training at session 2. Note that each video representation is color-coded.**
## 3 Progressive Neural Representation
This section presents our proposed continual neural implicit representation method, named _Progressive Neural Representation (PNR)_. Given a supernet backbone, where we follow a NeRV [21] architecture for video embedding and decoding, PNR aims to expand its representation space continuously by sequentially encoding multiple videos. As new videos arrive at the model, PNR jointly updates the binary masks and the neural network weights, searching for an adaptive subnetwork to encode the given videos. After training on each video session, we freeze the weights of the selected subnetwork so that future training does not hurt the quality of the learned representation and the generated output, even though a new subnetwork structure may contain some weights already used to encode previous videos. While the weights learned in earlier video sessions are frozen, we enable our PNR to transfer prior knowledge to future video tasks (i.e., forward transfer). This makes the model adapt to new videos effectively by leveraging the representations of past videos (see Figure 1).
**Problem Statement.** Let a video at \(s_{th}\) session \(\mathbf{V}_{s}=\{\mathbf{v}_{t}^{s}\}_{t=1}^{T_{s}}\in\mathbb{R}^{T_{s}\times H\times W \times 3}\) be represented by a function with the trainable parameter \(\mathbf{\theta}\), \(f_{\mathbf{\theta}}:\mathbb{R}\rightarrow\mathbb{R}^{H\times W\times 3}\), during Video Continual Learning (VCL), where \(T_{s}\) denotes the number of frames in a video at session \(s\), and \(s\in\{1\ldots,|\mathcal{S}|\}\). Given a session and frame index \(s\) and \(t\), respectively, the neural implicit representation aims to predict a corresponding RGB image \(\mathbf{v}_{t}^{s}\in\mathbb{R}^{H\times W\times 3}\) by fitting an encoding function to a neural network: \(\mathbf{v}_{t}^{s}=f_{\mathbf{\theta}}([s;t])\). Let's consider a learning scenario that \(|\mathcal{S}|=N\) sessions arrive in the model sequentially. We denote that \(\mathcal{D}_{s}=\{\mathbf{e}_{s,t},\mathbf{v}_{s,t}\}_{t=1}^{T_{s}}\) is the dataset of session \(s\), composed of \(T_{s}\) pairs of raw embeddings \(\mathbf{e}_{s,t}=[\mathbf{e}_{s};\mathbf{e}_{t}]\) and corresponding frames \(\mathbf{v}_{t}^{s}\). Here, we assume that \(\mathcal{D}_{s}\) for session \(s\) is only accessible when learning session \(s\) due to the limited hardware memory and privacy-preserving issues, and session identity is given in the training and testing stages. The training objective of the suggested video continual learning on a sequence of \(N\) sessions is to minimize the following optimization problem:
\[\mathbf{\theta}^{*}=\operatorname*{minimize}_{\mathbf{\theta}}\frac{1}{N}\frac{1}{T_ {s}}\sum_{s=1}^{N}\sum_{t=1}^{T_{s}}\mathcal{L}(f(\mathbf{e}_{s,t};\mathbf{\theta}),\bm {v}_{t}^{s}), \tag{1}\]
where the loss function \(\mathcal{L}(\mathbf{v}_{t}^{s})\) is composed of \(\ell_{1}\) loss and _SSIM loss_. The former minimizes the pixel-wise RGB gap with the original input frames evenly, and the latter maximizes the similarity between the two entire frames based on luminance, contrast, and structure, as follows:
\[\mathcal{L}(\mathbf{V}_{s})=\frac{1}{T_{s}}\sum_{t=1}^{T_{s}}\alpha||\mathbf{v}_{t}^{s }-\hat{\mathbf{v}}_{t}^{s}||_{1}+(1-\alpha)(1-\textbf{SSIM}(\mathbf{v}_{t}^{s},\hat{ \mathbf{v}}_{t}^{s})), \tag{2}\]
where \(\hat{\mathbf{v}}_{t}^{s}\) is the output generated by the model \(f\). For all experiments, we set the hyperparameter \(\alpha\) to \(0.7\), and we adapt PixelShuffle [37] for session and time positional embedding.
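For concreteness, the objective in Eq. (2) can be written as a short PyTorch-style function. This is a minimal sketch rather than the authors' implementation, and it assumes a third-party differentiable SSIM (here the `ssim` function of the pytorch-msssim package); any equivalent SSIM routine would do.

```
# Minimal PyTorch-style sketch of the objective in Eq. (2):
# alpha * L1 + (1 - alpha) * (1 - SSIM), with alpha = 0.7.
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumption: any differentiable SSIM works here

def video_loss(pred, target, alpha=0.7):
    """pred, target: (T, 3, H, W) tensors with values in [0, 1]."""
    l1 = F.l1_loss(pred, target)
    ssim_term = 1.0 - ssim(pred, target, data_range=1.0)
    return alpha * l1 + (1.0 - alpha) * ssim_term

# Random tensors standing in for predicted and ground-truth frames.
pred, target = torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256)
print(video_loss(pred, target).item())
```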
Continual learners frequently use over-parameterized deep neural networks to ensure enough capacity for learning future tasks. This approach often leads to the discovery of subnetworks that perform as well as or better than the original network. Given the neural network parameters \(\mathbf{\theta}\), the binary attention mask \(\mathbf{m}_{s}^{*}\) that describes the optimal subnetwork for session \(s\), such that \(|\mathbf{m}_{s}^{*}|\) is less than the model capacity \(c\), is obtained as:
\[\mathbf{m}_{s}^{*}=\operatorname*{minimize}_{\mathbf{m}_{s}\in\{0,1\}^{|\mathbf{ \theta}|}}\frac{1}{T_{s}}\sum_{t=1}^{T_{s}}\mathcal{L}\big{(}f(\mathbf{e}_{s,t};\bm {\theta}\odot\mathbf{m}_{s}),\mathbf{v}_{t}^{s}\big{)}-\mathcal{J},\quad\text{ subject to }|\mathbf{m}_{s}^{*}|\leq c, \tag{3}\]
where session loss \(\mathcal{J}=\mathcal{L}(\mathbf{v}_{t}^{s})\) and \(c\ll|\mathbf{\theta}|\) (used as the selected proportion \(\%\) of model parameters in the following section). In the optimization section, we describe how to obtain \(\mathbf{m}_{s}^{*}\) using a single learnable weight score \(\mathbf{\rho}\) subject to updates while minimizing task loss jointly for each video session.
### Sequential Video Representational Subnetworks
Let each weight be associated with a learnable parameter we call _weight score_\(\rho\), which numerically determines the importance of the weight associated with it; that is, a weight with a higher weight score is seen as more important. We find a sparse subnetwork \(\hat{\mathbf{\theta}}_{s}\) of the neural network and assign it as a solver of the current session \(s\). We use subnetworks instead of the dense network as solvers for two reasons: (1) Lottery Ticket Hypothesis [11] shows the existence of a competitive subnetwork
that is comparable with the dense network, and (2) the subnetwork requires less capacity than dense networks, and therefore it inherently reduces the size of the expansion of the solver.
Motivated by such benefits, we propose a novel PNR, the joint-training method for sequential video representation compilation, as shown in Algorithm 1. The pseudo-code explains how to acquire subnetworks within a dense network. We find \(\hat{\mathbf{\theta}}_{s}=\mathbf{\theta}\odot\mathbf{m}_{s}\) by selecting the top-\(c\)% weights from the weight scores \(\mathbf{\rho}\), where \(c\) is the target layer-wise capacity ratio in %; \(\mathbf{m}_{s}\) is a session-dependent binary mask. Formally, \(\mathbf{m}_{s}\) is obtained by applying a indicator function \(\mathbb{1}_{c}\) on \(\mathbf{\rho}\) where \(\mathbb{1}_{c}(\rho)=1\) if \(\mathbf{\rho}\) belongs to top-\(c\%\) scores and \(0\) otherwise. Therefore, the subnetworks \(\{\hat{\mathbf{\theta}}_{s}\}_{s=1}^{N}\) for all video session \(\mathcal{S}\) are obtained by \(\hat{\mathbf{\theta}}_{s}=\mathbf{\theta}\odot\mathbf{m}_{s}\). Straight-through estimator [1; 15; 30] is used to update \(\mathbf{\rho}\).
```
1:\(\{\mathcal{D}_{s}\}_{s=1}^{N}\), model weights \(\mathbf{\theta}\), score weights \(\mathbf{\rho}\), binary mask \(\mathbf{M}_{0}=\mathbf{0}^{|\mathbf{\theta}|}\), and layer-wise capacity \(c\%\).
2: randomly initialize \(\mathbf{\theta}\) and \(\mathbf{\rho}\).
3:for session \(s=1,\cdots,|\mathcal{S}|\)do
4:if \(s>1\) then
5: randomly re-initialize \(\mathbf{\rho}\).
6:endif
7:for batch \(\mathbf{b}_{t}\sim\mathcal{D}_{s}\)do
8: obtain mask \(\mathbf{m}_{s}\) of the top-\(c\%\) scores \(\mathbf{\rho}\) at each layer
9: compute \(\mathcal{L}\left(f(\mathbf{e}_{s,t};\mathbf{\theta}\odot\mathbf{m}_{s}),\mathbf{b}_{t}\right)\), where input embedding, \(\mathbf{e}_{s,t}=[\mathbf{e}_{s};\mathbf{e}_{t}]\).
10:\(\mathbf{\theta}\leftarrow\mathbf{\theta}-\eta\left(\frac{\partial\mathcal{L}}{\partial\mathbf{\theta}}\odot(\mathbf{1}-\mathbf{M}_{s-1})\right)\)\(\triangleright\) trainable weight update
11:\(\mathbf{\rho}\leftarrow\mathbf{\rho}-\eta\left(\frac{\partial\mathcal{L}}{\partial\mathbf{\rho}}\right)\)\(\triangleright\) weight score update
12:endfor
13:\(\hat{\mathbf{\theta}}_{s}=\mathbf{\theta}\odot\mathbf{m}_{s}\)
14:\(\mathbf{M}_{s}\leftarrow\mathbf{M}_{s-1}\vee\mathbf{m}_{s}\)\(\triangleright\) accumulate binary mask
15:endfor
16:output: \(\{\hat{\mathbf{\theta}}_{s}\}_{s=1}^{N}\)
```
**Algorithm 1** Progressive Neural Representation (PNR) for VCL
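To make the masking step concrete, the sketch below shows one way to realize steps 8–11 of Algorithm 1 for a single linear layer in PyTorch: weight scores \(\mathbf{\rho}\) select the top-\(c\%\) entries, the resulting binary mask gates the weights in the forward pass, and a straight-through estimator passes gradients to the scores. This is our illustrative reading, not the released implementation; in particular, the freezing of weights already accumulated in \(\mathbf{M}_{s-1}\) is omitted.

```
# Illustrative PyTorch sketch of a score-based top-c% masked linear layer with a
# straight-through estimator, in the spirit of steps 8-11 of Algorithm 1.
# Simplified reading, not the released code; freezing of weights accumulated in
# M_{s-1} is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopCSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores, c):
        k = max(1, int(c * scores.numel()))                 # number of weights kept in this layer
        threshold = scores.flatten().topk(k).values.min()
        return (scores >= threshold).float()                # binary mask m_s

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                            # straight-through gradient to the scores

class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, c=0.3):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.scores = nn.Parameter(torch.empty(out_features, in_features))  # weight scores rho
        nn.init.kaiming_uniform_(self.weight)
        nn.init.kaiming_uniform_(self.scores)
        self.c = c

    def forward(self, x):
        mask = TopCSTE.apply(self.scores, self.c)           # top-c% subnetwork for this session
        return F.linear(x, self.weight * mask)

layer = MaskedLinear(160, 512, c=0.3)
out = layer(torch.randn(8, 160))
out.sum().backward()                                        # gradients reach both weight and scores
print(out.shape, layer.scores.grad is not None)
```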
## 4 Experiments
We validate our method on video benchmark datasets against continual learning baselines on Video Task-incremental Learning (VTL). We consider continual video representation learning with a multi-head configuration for all experiments in the paper. We follow the experimental setups in NeRV [8] and HNeRV [7].
**Datasets.**_1) UVG of 8 Video Sessions_: We experiment on eight sequential videos to validate our PNR. The eight videos consist of one from the scikit-video and seven from the UVG dataset. The category index and order in UVG8 are as follows: _1.bunny_, _2.beauty_, _3.bosphorus_, _4.bee_, _5.jockey_, _6.setgo_, _7.shake_, _8.yacht_.
_2) UVG of 17 Video Sessions_: We conduct an extended experiment on 17 video sessions by adding 9 more videos to the UVG of 8 video sessions. The category index and order in UVG17 are as follows: _1.bunny_, _2.city_, _3.beauty_, _4.focus_, _5.bosphorus_, _6.kids_, _7.bee_, _8.pan_, _9.jockey_, _10.lips_, _11.setgo_, _12.race_, _13.shake_, _14.river_, _15.yacht_, _16.sunbath_, _17.twilight_. Please refer to the supplementary material.
**Architecture.** We employ NeRV as our baseline architecture and follow its details for a fair comparison. After the positional encoding, we apply four sparse MLP layers on the output of the positional encoding layer, followed by five sparse NeRV blocks with upscale factors of 5, 2, 2, 2, 2. These sparse NeRV blocks decode 1280\(\times\)720 frames from the 16\(\times\)9 feature map obtained after the sparse MLP layers. For the upscaling method in the sparse NeRV blocks, we also adopt PixelShuffle [37]. The positional encoding for the video index \(s\) and frame index \(t\) is as follows:
\[\begin{split}\mathbf{\Gamma}(s,t)=&[\ \sin(b^{0}\pi s),\cos(b^{0}\pi s ),\cdots,\sin(b^{l-1}\pi s),\cos(b^{l-1}\pi s),\\ &\sin(b^{0}\pi t),\cos(b^{0}\pi t),\cdots,\sin(b^{l-1}\pi t), \cos(b^{l-1}\pi t)\ ],\end{split} \tag{4}\]
where the hyperparameters are set to \(b=1.25\) and \(l=80\) such that \(\mathbf{\Gamma}(s,t)\in\mathbb{R}^{1\times 160}\). As differences from the previous NeRV model, the first layer of the MLP has its input size expanded from 80 to 160 to incorporate both frame and video indices, and distinct head layers after the NeRV block are utilized
for each video. For the loss objective in Equation 2, \(\alpha\) is set to \(0.7\). We evaluate the video quality, average video session quality, and backward transfer with two metrics: PSNR and MS-SSIM [43]. We implement our model in PyTorch and train it in full precision (FP32). All experiments are run with NVIDIA RTX8000. Please refer to the supplementary material for more experimental details.
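As a reference, a literal transcription of the positional embedding in Eq. (4) is given below (our own sketch with \(b=1.25\) and \(l=80\); read literally, the concatenation of the session and frame parts has \(4l\) entries, and the exact dimensionality used by the released model may differ).

```
# Literal numpy transcription of the positional embedding of Eq. (4),
# with b = 1.25 and l = 80 (illustrative; the released model may organize the
# embedding differently).
import numpy as np

def positional_embedding(s, t, b=1.25, l=80):
    freqs = b ** np.arange(l) * np.pi                       # b^0*pi, ..., b^(l-1)*pi
    def encode(x):
        # interleave sin(b^j*pi*x), cos(b^j*pi*x) as in Eq. (4)
        return np.stack([np.sin(freqs * x), np.cos(freqs * x)], axis=1).ravel()
    return np.concatenate([encode(s), encode(t)])

gamma = positional_embedding(s=2, t=0.37)
print(gamma.shape)                                          # (320,) under this literal reading
```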
**Baselines.** To show the effectiveness, we compare our PNR with strong CL baselines; Single-Task Learning (STL) which trains on single tasks independently, EWC [19], which is a regularized baseline, iCaRL [31] which is a rehearsal-based baseline, and Multi-Task Learning (MTL) which trains on multiple video sessions simultaneously, showing the upper-bound of PNR. Except for STL, all models are trained and evaluated on multi-head settings where a video session and time \((s,t)\) indices are provided.
**Training.** In all experiments, we follow the same experimental settings as NeRV [8] and HNeRV [7] for fair comparisons. We train PNR, NeRV (STL), and MTL using the Adam optimizer with a learning rate of 5e-4. For the ablation study on UVG8 and UVG17, we use a cosine annealing learning rate schedule [22], a batch size of 1, 150 training epochs, and 30 warmup epochs unless otherwise denoted.
**Performance metrics of PSNR \(\&\) MS-SSIM.** We evaluate all methods based on the following continual learning metrics:
1. _Average PSNR or MS-SSIM (i.e., Ave. PSNR)_ measures the average of the final performances on all video sessions: \(\mathrm{PSNR}\) or \(\mathrm{MS}\)-\(SSIM=\frac{1}{N}\sum_{s=1}^{N}R_{S,s}\), where \(R_{S,s}\) is the test PSNR or MS-SSIM for session \(s\) after training on the final video session \(S\).
Table 1: PSNR results of UVG8 (m-IDX) Video Sessions with average PSNR and Backward Transfer (BWT) of PSNR. Note that \(*\) denotes our reproduced results.
Table 2: MS-SSIM results of UVG8 (m-IDX) Video Sessions with average MS-SSIM and Backward Transfer (BWT) of MS-SSIM. Note that \(*\) denotes our reproduced results.
2. _Backward Transfer of PSNR or MS-SSIM (BWT)_ measures the video representation forgetting during continual learning. Negative BWT means that learning new video sessions causes forgetting of the video representations of past sessions: \(\mathrm{BWT}=\frac{1}{S-1}\sum_{s=1}^{S-1}\left(R_{S,s}-R_{s,s}\right)\). A minimal computation of both metrics is sketched below.
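The sketch assumes a score matrix \(R\) whose entry \(R_{i,j}\) is the test score on session \(j\) after training through session \(i\); the numbers are placeholders, not results from this paper.

```
# Average final score and backward transfer (BWT) from a score matrix R,
# where R[i, j] is the test PSNR (or MS-SSIM) on session j after training
# through session i.  Placeholder values, not results from this paper.
import numpy as np

def avg_final_score(R):
    return R[-1].mean()                                     # (1/N) * sum_s R_{S,s}

def backward_transfer(R):
    S = R.shape[0]
    return np.mean([R[-1, s] - R[s, s] for s in range(S - 1)])

R = np.array([[35.0,  0.0,  0.0],
              [35.0, 33.0,  0.0],
              [35.0, 33.0, 31.5]])                          # a forget-free 3-session example
print(avg_final_score(R), backward_transfer(R))             # BWT = 0 when nothing is forgotten
```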
### Comparisons with Baselines
**PSNR \(\&\) MS-SSIM.** To compare PNR with representative conventional continual learning methods such as EWC and iCaRL, we report our reproduced results in Tables 1, 2, 3, and 4. PNR outperforms the conventional baselines on the UVG8 and UVG17 benchmark datasets. The sparsity level does not significantly affect the sequential video representation results on the two sequential benchmark datasets. Moreover, the performance of PNR with reinitialization is better than that of PNR without it, and is comparable with that of MTL (the upper bound of PNR).
**Input Embedding.** We observe that the input embedding resolution affects the video representation, as shown in Table 1 and Table 2. Even though the video sessions are the same, the PSNR and MS-SSIM performances decrease by 0.8 and 0.2, respectively, depending on the input embedding resolution determined by the maximum number of input indices (m-IDX). The results with m-IDX=17 are reported for the longer sequence learning in Table 3 and Table 4. From this observation, we can expect a more precise video representation if we use more discriminative input embeddings for PNR. Here, we do not take the videos' contextual information into account.
**Transfer Matrix.** We compute the transfer matrix on the UVG17 dataset, shown in Figure 2, to demonstrate PNR's forget-freeness and the correlation among videos; the lower triangle, evaluated with each session's subnetwork, shows that PNR is a forget-free method, while the upper triangle, evaluated with the current session's subnetwork, reflects the similarity between source and target videos. The effectiveness of reinitialized PNR is evident from the lower triangles of Figure 2 (a) and (b). Nothing notable is observed in the upper triangles since the videos are not correlated.
**PNR's Compression.** We follow NeRV's video quantization and compression pipeline [8], except for the model pruning step, to evaluate performance drops and backward transfer in the video sequential learning, as shown in Figure 3. Once sequential training is done, our PNR doesn't need any extra
Table 4: MS-SSIM results of UVG17 (m-IDX) Video Sessions with average MS-SSIM and Backward Transfer (BWT) of MS-SSIM. Note that \(*\) denotes our reproduced results.
\begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{1}{c}{\multirow{2}{*}{**Avg MS-SSIM**}} & \multicolumn{1}{c}{\multirow{2}{*}{**BWV**}} & \multicolumn{1}{c}{\multirow{2}{*}{**BWV**}} & \multicolumn{1}{c}{\multirow{2}{*}{**Avg MS-SSIM**}} \\ \cline{2-13} & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** & **9** & **10** & **11** & **12** & **13** & **14** & **15** & **16** & **17** & **BWT** \\ \hline STIL, NeRV’ & 39.63 & -36.06 & -37.35 & -37.55 & -41.23 & -36.44 & -31.86 & -37.22 & -32.45 & -3.45 & -31.41 & -36.86 & -37.1 \\ STIL, NeRV’ & 39.66 & 44.93 & 36.28 & 41.13 & 38.14 & 38.13 & -20.28 & 34.74 & -36.38 & -39.22 & 33.18 & -31.72 & -34.8 & -34.15 & -38.41 & -38.66 & -36.47 \\ \hline iCaRL [11]* & 0.15 & 0.21 & 12.71 & 11.40 & 15.58 & 0.25 & 10.26 & 12.96 & 0.44 & 13.03 & 0.55 & 13.39 & 5.36 & 8.67 & 10.93 & 10.92 & 28.29 & 11.38 / -16.13 \\ iCaRL [11]* & 24.31 & 22.25 & 22.19 & 22.74 &
prune and finetune steps, unlike NeRV. This point is our key advantage of PNR over NeRV. Figure 3 (a) shows the results of various sparsity and bit-quantization on the UVG17 datasets: the 8bit of PNR's performances are comparable with 32bit of ones without a significant video quality drop. Figure 3 (b) shows the rate-distortion curves. We compare PNR (reinit) with PNR and NeRV (STL). For a fair comparison, we take steps of pruning, fine-tuning, quantizing, and encoding NeRV. Our PNR (reinit) outperforms all baselines.
**PNR's Capacity.** We prepare a video session-wise PSNR and investigate how PNR reuses weights over sequential video sessions, as shown in Figure 4 (b). PNR tends to progressively transfer weights used for a prior session to weights for new ones compared with others, i.e., PNR with reinit. Since the reinitialized PNR explores more new weights than PNR, PNR with reinit outperforms PNR, as stated in Figure 4 (a), leading to comparable with MTL. This result might suggest that properly reused weights lead to generalization more than others in VCL with low video contextual similarity.
**PNR's Capacity.** We report session-wise PSNR and investigate how PNR reuses weights over sequential video sessions, as shown in Figure 4 (b). Compared with its reinitialized variant, PNR tends to progressively reuse the weights of prior sessions for new ones. Since the reinitialized PNR explores more new weights than plain PNR, PNR with reinit outperforms PNR, as shown in Figure 4 (a), reaching performance comparable with MTL. This result might suggest that properly reused weights contribute more to generalization than others in VCL when the videos have low contextual similarity.
We prepare the results of video generation as shown in Figure 5. We demonstrate that a sparse solution (PNR with \(c=30.0\%\), reinit) generates video representations sequentially without significant performance drops, compared with MLT's results. Please refer to the supplementary material for more visualizations and comparisons with baselines.
## 5 Conclusion
Neural Implicit Representations (NIR) have gained significant attention recently due to their ability to represent complex and high-dimensional data. Unlike explicit representations, which require storing
Figure 3: PSNR v.s. Bits-per-pixel (BPP) on the UVG17 datasets
Figure 2: PNR’s Transfer Matrixes of PSNR on the UVG17 dataset.
and manipulating individual data points, implicit representations capture information through a learned mapping function without explicitly representing the data points themselves. While they often compress neural networks substantially to accelerate encoding/decoding speed, yet existing methods fail to transfer learned representations to new videos. This work investigates the continuous expansion of implicit video representations as videos arrive sequentially over time, where the model can only access the videos from the current session. To tackle this problem, we propose a novel neural video representation, _Progressive Neural Representation (PNR)_, that finds an adaptive substructure from the supernet to the given video based on Lottery Ticket Hypothesis. At each training session, our PNR transfers the learned knowledge of the previously obtained subnetworks to obtain the representation of the current video without modifying past subnetwork weights. Therefore it can perfectly preserve the decoding ability (i.e., catastrophic forgetting) on previous videos. We demonstrate the effectiveness of our proposed PNR over baselines on the novel UVG8/17 video sequence benchmark datasets.
Figure 4: PNR’s Comparison of PSNR with others and layer-wise accumulated Capacities on the UVG17 dataset. Note that green represents a reused subnetwork at the current session (s) obtained at the past (s-1) video sessions in (b): reinit (solid line) v.s non-reinit (dashed line).
Figure 5: PNR’s Video Generation (from t=0 to t=2) with \(c=30.0\%\) and reinit on the UVG17 dataset. Note that GT: ground-truth and PRED: model’s predictions. The PSNR denotes each video session’s scores in Table 3. |
2307.09663 | Spectral Applications of Vertex-Clique Incidence Matrices Associated
with a Graph | In this paper, we demonstrate a useful interaction between the theory of
clique partitions, edge clique covers of a graph, and the spectra of graphs.
Using a clique partition and an edge clique cover of a graph we introduce the
notion of a vertex-clique incidence matrix for a graph and produce new lower
bounds for the negative eigenvalues and negative inertia of a graph. Moreover,
utilizing these vertex-clique incidence matrices, we generalize several notions
such as the signless Laplacian matrix, and develop bounds on the incidence
energy and the signless Laplacian energy of the graph. %The tight upper bounds
for the energies of a graph and its line graph are given. More generally, we
also consider the set $S(G)$ of all real-valued symmetric matrices whose
off-diagonal entries are nonzero precisely when the corresponding vertices of
the graph are adjacent. An important parameter in this setting is $q(G)$, and
is defined to be the minimum number of distinct eigenvalues over all matrices
in $S(G)$. For a given graph $G$ the concept of a vertex-clique incidence
matrix associated with an edge clique cover is applied to establish several
classes of graphs with $q(G)=2$. | Shaun Fallat, Seyed Ahmad Mojallal | 2023-07-18T22:05:27Z | http://arxiv.org/abs/2307.09663v1 | # Spectral Applications of Vertex-Clique Incidence Matrices Associated with a Graph
###### Abstract
In this paper, we demonstrate a useful interaction between the theory of clique partitions, edge clique covers of a graph, and the spectra of graphs. Using a clique partition and an edge clique cover of a graph we introduce the notion of a vertex-clique incidence matrix for a graph and produce new lower bounds for the negative eigenvalues and negative inertia of a graph. Moreover, utilizing these vertex-clique incidence matrices, we generalize several notions such as the signless Laplacian matrix, and develop bounds on the incidence energy and the signless Laplacian energy of the graph. More generally, we also consider the set \(S(G)\) of all real-valued symmetric matrices whose off-diagonal entries are nonzero precisely when the corresponding vertices of the graph are adjacent. An important parameter in this setting is \(q(G)\), and is defined to be the minimum number of distinct eigenvalues over all matrices in \(S(G)\). For a given graph \(G\) the concept of a vertex-clique incidence matrix associated with an edge clique cover is applied to establish several classes of graphs with \(q(G)=2\).
keywords: Clique partition, Edge clique cover, Vertex-clique incidence matrix, Eigenvalues of graphs, Graph energy, Minimum number of distinct eigenvalues
AMS Subject Classification: 05C50, 15A29
## 1 Introduction
Let \(G=(V,E)\) be a simple undirected graph with \(n\) vertices and \(m\) edges. A _clique_ in \(G\) is a subset \(C\subseteq V\) such that all vertices in \(C\) are adjacent. An _edge clique cover_\(F\) of \(G\) is a set of cliques \(F=\{C_{1},C_{2},\ldots,C_{k}\}\) that together contain each edge of \(G\) at least once. The smallest size of an edge clique cover of \(G\) is called the edge clique cover number of \(G\) and is denoted by \(cc(G)\). An edge clique cover of \(G\) with size \(cc(G)\) is referred to as a _minimum edge clique cover_ of \(G\). A special case of an edge clique cover in which every edge belongs to exactly one clique is called a _clique partition_ of \(G\). The size of the smallest clique partition of \(G\) is called the _clique partition number_ of \(G\), and is denoted by \(cp(G)\). A clique partition of \(G\) with size \(cp(G)\) is referred to as a _minimum clique partition_ of \(G\). It is clear that both \(cc(G)\) and \(cp(G)\) exist as \(E\) forms a clique
partition (and hence an edge clique cover) of \(G\). Further note that any minimum clique partition does not contain any cliques of size one, and, by convention, the clique partition number of the empty graph is defined to be zero. Information concerning clique partitions and edge clique covers of a graph can be found in the works [8; 14; 27; 30].
Before defining the various matrices associated with a graph, we make note of the standard matrix notations: \(I_{n}\) to denote the \(n\times n\) identity matrix; \(O\) to denote the zero matrix (size determined by context); \(J\) to denote the all ones matrix (size determined by context); and \(I\!I\) to denote the all ones vector (size determined by context).
Given a graph \(G\) with \(V=\{1,2,\ldots,n\}\) and \(E=\{e_{1},e_{2},\ldots,e_{m}\}\), the _(vertex-edge) incidence matrix_\(M\) of \(G\) is the \(n\times m\) matrix defined as follows: the rows and the columns of \(M\) are indexed by \(V\) and \(E\), respectively; and the \((i,\,j)\)-entry of \(M\) is \(0\) if \(i\notin e_{j}\) and \(1\) otherwise. Similarly, the adjacency matrix \(\mathcal{A}=\mathcal{A}(G)=(a_{ij})\) is a \((0,1)\)-matrix of \(G\) such that \(a_{ij}=1\) if \(ij\in E(G)\) and \(0\) otherwise. It is well-known that [18]
\[MM^{T}=Q(G),\ \ \ \mbox{and}\ \ \ M^{T}M=\mathcal{A}(L_{G})+2I_{m}, \tag{1}\]
where \(D(G)\) is the diagonal matrix of vertex degrees (\(d_{i}=\deg(i)\), \(i=1,2,\ldots,n\)) and the matrix \(Q(G)=D(G)+\mathcal{A}(G)\) is known as the _signless Laplacian matrix_ of the graph \(G\); the _line graph_, \(L_{G}\), of the graph \(G\) is the graph whose vertex set is in one-to-one correspondence with the set of edges of \(G\), where two vertices of \(L_{G}\) are adjacent if and only if the corresponding edges in \(G\) have a vertex in common [22]. Finally, the equations in (1) imply an important spectral relation between the signless Laplacian matrix \(Q(G)\) and \(\mathcal{A}(L_{G})\), see Lemma 3.4.
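As a quick sanity check of the identities in (1), the following short computation (an illustrative example with the path \(P_{4}\), not taken from the paper) verifies them numerically.

```
# Numerical check of the identities in (1) for the path P4 (4 vertices, 3 edges).
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]
n, m = 4, len(edges)

M = np.zeros((n, m))                          # vertex-edge incidence matrix
for j, (u, v) in enumerate(edges):
    M[u, j] = M[v, j] = 1

A = np.zeros((n, n))                          # adjacency matrix
for u, v in edges:
    A[u, v] = A[v, u] = 1
Q = np.diag(A.sum(axis=1)) + A                # signless Laplacian D(G) + A(G)

print(np.allclose(M @ M.T, Q))                # True:  M M^T = Q(G)
print(M.T @ M - 2 * np.eye(m))                # adjacency matrix of the line graph L(P4)
```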
As we are also interested in studying more general symmetric matrices associated to a graph on \(n\) vertices, we let \(S(G)\) denote the collection of real symmetric matrices \(A=(a_{ij})\) such that for \(i\neq j\), \(a_{ij}\neq 0\) if and only if \(ij\in E(G)\). The main diagonal entries of any such \(A\) in \(S(G)\) are not constrained. Observe that for any graph \(G\), both \(Q(G)\) and \(\mathcal{A}(G)\) belong to \(S(G)\).
We denote the spectrum of \(A\), i.e., the multiset of eigenvalues of \(A\), by \(\mbox{Spec}(A)\). In particular,
\[\mbox{Spec}(A)=\{\lambda_{1}^{[m_{1}]},\,\lambda_{2}^{[m_{2}]},\,\ldots,\, \lambda_{q}^{[m_{q}]}\},\]
where the distinct eigenvalues of \(A\) are given by \(\lambda_{1}<\lambda_{2}<\cdots<\lambda_{q}\) with corresponding multiplicities of these eigenvalues are \(m_{1},m_{2},\ldots,m_{q}\) respectively. Further we consider the ordered multiplicity list of \(A\) as the sequence \(m(A)=(m_{1},m_{2},\ldots,m_{q})\). For brevity, a simple eigenvalue \(\lambda_{k}^{[1]}\) is simply denoted by \(\lambda_{k}\).
Given a graph \(G\), the spectral invariant \(q(G)\) is defined as follows:
\[q(G)=\min\{q(A)\,:\,A\in S(G)\},\]
where \(q(A)\) is the number of distinct eigenvalues of \(A\) (see [2; 25]). The spectral invariant \(q(G)\) is called the _minimum number of distinct eigenvalues of the graph \(G\)_. The class of matrices \(S(G)\) has been of interest to many researchers recently (see [16; 15; 16; 17] and the references therein),
and there has been considerable development on the inverse eigenvalue problem for graphs (see [23]) which continues to receive considerable and deserved attention, as it remains one of the most interesting unresolved issues in combinatorial matrix theory. Recently, J. Ahn et al. [3] offered a complete solution to the ordered multiplicity inverse eigenvalue problem for graphs on six vertices. Using the notions of clique partitions and edge clique covers of a graph we generalize the conventional vertex-edge incidence matrix \(M\) by considering a new incidence matrix called the _vertex-clique incidence matrix_ of a graph. Suppose \(F=\{C_{1},C_{2},\ldots,C_{k}\}\) is an edge clique cover of a graph \(G\) with \(V=\{1,2,\ldots,n\}\). The vertex-clique incidence matrix \(M_{F}\) of \(G\) associated with the edge clique cover \(F\) is defined as follows: the \((i,j)\)-entry of \(M_{F}\) is real and nonzero if and only if the vertex \(i\) belongs to the clique \(C_{j}\in F\). In the particular case when \(F\) is actually a clique partition, the vertex-clique incidence matrix, in this case, is denoted by \(\mathcal{M}_{F}\), and the \((i,j)\)-entry of \(\mathcal{M}_{F}\) is equal to one if and only if the vertex \(i\) belongs to the clique \(C_{j}\in F\). We observe that for any graph \(G\) the vertex-clique incidence matrix corresponding to a clique partition \(F\), preserves several main properties of its vertex-edge incidence matrix. For instance, in Section 3, \(\mathcal{M}_{F}\mathcal{M}_{F}^{T}=\mathcal{D}_{F}+\mathcal{A}\), where \(\mathcal{D}_{F}=diag(t_{1}^{F},t_{2}^{F},\ldots,t_{n}^{F})\) with \(t_{i}^{F}\leq d_{i}\), where both sequences \(t_{i}^{F}\) and \(d_{i}\) are in non-increasing order. This fact enables us to determine new lower bounds for the negative eigenvalues of the graph.
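To illustrate the property \(\mathcal{M}_{F}\mathcal{M}_{F}^{T}=\mathcal{D}_{F}+\mathcal{A}\) on a concrete case, one can run the following check for the bowtie graph (two triangles sharing a vertex); this is our own toy example, not one from the paper.

```
# Check of  M_F M_F^T = D_F + A  for a clique partition of the bowtie graph
# (two triangles {0,1,2} and {2,3,4} sharing vertex 2); toy example only.
import numpy as np

n = 5
cliques = [{0, 1, 2}, {2, 3, 4}]                       # a clique partition F
MF = np.zeros((n, len(cliques)))
for j, C in enumerate(cliques):
    for i in C:
        MF[i, j] = 1

A = np.zeros((n, n))                                   # adjacency matrix of the bowtie
for C in cliques:
    for u in C:
        for v in C:
            if u != v:
                A[u, v] = 1

DF = np.diag(MF.sum(axis=1))                           # t_i^F = number of cliques containing i
print(np.allclose(MF @ MF.T, DF + A))                  # True
print(np.linalg.eigvalsh(A))                           # spectrum of the bowtie graph
```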
The paper is organized as follows. In Section 2, we provide the necessary notions, notations, and known results that are needed in the sections containing our main observations. In Section 3, using the notion of a clique partition \(F\) of a graph \(G\), we define signless Laplacian matrix of the graph \(G\) associated with the clique partition \(F\). A new graph \(P_{G}\) is introduced as a generalization for the line graph of \(G\). In Subsection 3.1, applying this new theory of a vertex-clique incidence matrix, we produce lower bounds for the negative eigenvalues of the graph. Moreover, we present lower bounds for the negative inertia \(\nu^{-}(G)\) of a graph \(G\) in terms of its order \(n\) and the rank of its vertex-clique incidence matrix. We also provide a sufficient condition under which the well-known inequality \(\nu^{-}(G)\leq n-\alpha(G)\) holds with equality, where \(\alpha(G)\) is the independence number of \(G\). In Subsection 3.2, we introduce new graph energies associated with a clique partition \(F\) of the graph \(G\) and study several associated properties. Moreover, new upper bounds for the energies of the graph \(G\) and its clique partition graph and line graph are determined. In Section 4, studies on the vertex-clique incidence matrix of a graph associated with an edge clique cover lead to a derivation of some new classes of graphs with \(q(G)=2\) (see also Subsection 4.1).
## 2 Notations and preliminaries
In this section, we list some known notions, notations, and results that are needed in the remaining sections.
We start this section by introducing the notion of the eigenvalues of a graph. The eigenvalues \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{n}\) of the adjacency matrix \(\mathcal{A}(G)\) (or shortened to \(\mathcal{A}\) when reference to the graph \(G\) is clear from context) of the graph \(G\) are also called the _eigenvalues of \(G\)_. The number of positive
(negative) eigenvalues in the spectrum of the graph \(G\) is called the _positive (negative) inertia_ of the graph \(G\), and is denoted by \(\nu^{+}(G)\) (\(\nu^{-}(G)\)). The _energy_ of the graph \(G\) is defined as
\[\mathcal{E}(G)=\sum_{i=1}^{n}\,|\lambda_{i}|\,. \tag{2}\]
Further details on various properties of graph energy can be found in [19, 20, 24, 28, 29]. Let \(q_{1}\), \(q_{2},\dots\), \(q_{n}\) be the eigenvalues of the matrix \(Q(G)\). Then the _signless Laplacian energy_ of the graph \(G\) is defined as [1]
\[LE^{+}=LE^{+}(G)=\sum_{i=1}^{n}\Big{|}q_{i}-\frac{2m}{n}\Big{|}. \tag{3}\]
More information on properties of the signless Laplacian energy can be found in [1], and the energy of a line graph and its relations with other graph energies are studied in [12, 21].
A _subgraph_\(H\) of a graph \(G\) is a graph whose vertex set and edge set are subsets of those of \(G\). If \(H\) is a subgraph of \(G\), then \(G\) is said to be a _supergraph_ of \(H\). The subgraph of \(G\) obtained by deleting either a vertex \(v\) of \(G\) or an edge \(e\) of \(G\) is denoted by \(G-v\) and \(G-e\), respectively. Suppose \(H\) is a graph on \(n\) vertices. Then we let \(K_{n}\backslash H\) denote the graph obtained from the complete graph, \(K_{n}\), by removing the edges from \(H\). An _independent set_ in the graph \(G\) is a set of vertices in \(G\), no two of which are adjacent. The _independence number_\(\alpha(G)\) of \(G\) is the number of vertices in a largest independent set of \(G\). A _matching_ in a graph \(G\) is simply a collection of independent edges from \(G\) (i.e., no two edges in a matching share a common vertex from \(G\)). Additionally, a matching is referred to as _perfect_ if each vertex from \(G\) is incident with exactly one edge from the matching.
An \(n\times n\) symmetric real matrix \(B\) is a positive semi-definite matrix if all of its eigenvalues are nonnegative. In this case, we denote \(B\geq 0\). For real symmetric matrices \(B\) and \(C\), if \(B-C\geq 0\), then we write \(B\geq C\).
**Lemma 2.1**.: [7] _Let \(A\) and \(B\) be Hermitian matrices of order \(n\), and assume that \(A\leq B\). Then for all \(i=1,2,\dots,n\),_
\[\lambda_{i}(A)\leq\lambda_{i}(B),\]
_where \(\lambda_{i}(M)\) is the \(i\)th largest eigenvalue of a square matrix \(M\)._
The following result was obtained in [18].
**Lemma 2.2**.: [18] _If \(B\) and \(C\) are matrices such that \(BC\) and \(CB\) are both defined, then \(BC\) and \(CB\) have the same nonzero eigenvalues with the same multiplicity._
Let \(\circ\) denote the Schur (also known as the Hadamard or entry-wise) product. The \(n\times n\) symmetric matrix \(A\) has the _Strong Spectral Property_ (or \(A\) has the SSP for short) if the only symmetric matrix \(X\) satisfying \(A\circ X=O\), \(I\circ X=O\) and \([A,\,X]=AX-XA=O\) is \(X=O\) (see [4]). The following result is given in [4, Thm. 10].
**Lemma 2.3**.: [4] _If \(A\in S(G)\) has the SSP, then every supergraph of \(G\) with the same vertex set has a matrix realization that has the same spectrum as \(A\) and has the SSP._
Given two graphs \(G\) and \(H\), the join of \(G\) and \(H\), denoted by \(G\lor H\), is the graph obtained from \(G\cup H\) by adding all possible edges between \(G\) and \(H\). Suppose \(G\) is a graph with \(q(G)=2\). Then, among all matrix realizations \(A\) in \(S(G)\) with two distinct eigenvalues, we say that \(A\) has the multiplicity bi-partition \([n-k,\,k]\) if the two eigenvalues of \(A\) have respective multiplicities \(n-k\) and \(k\). Further, we define the minimal multiplicity bi-partition \(MB(G)\) to be the least integer \(k\leq\lfloor\frac{n}{2}\rfloor\) such that \(G\) achieves the multiplicity bi-partition \([n-k,\,k]\). We close this section with two useful results concerning specific classes of graphs realizing two distinct eigenvalues with respect to the set \(S(G)\).
**Lemma 2.4**.: [6; 9] _Let \(G\) be a connected graph on \(n\) vertices. Then (1) \(MB(G)=1\) if and only if \(G\) is the complete graph, \(K_{n}\). (2) \(MB(G)=2\) if and only if_
\[G=(K_{p_{1}}\cup K_{q_{1}})\vee(K_{p_{2}}\cup K_{q_{2}})\vee\cdots\vee(K_{p_{k}}\cup K_{q_{k}})\]
_for non-negative integers \(p_{1},\ldots,p_{k},q_{1},\ldots,q_{k}\) with \(k>1\), and \(G\) is isomorphic neither to a complete graph nor to \((K_{p_{1}}\cup K_{q_{1}})\lor K_{1}\)._
**Lemma 2.5**.: [26] _If \(G\) is a connected graph of order \(n\in\{l,l+1,l+2\}\) and \(n_{1},\ldots,n_{l}\in\mathbb{N}\), then \(q(G\vee\cup_{j\in[l]}K_{n_{j}})=2\)._
## 3 Matrices associated with a clique partition
In this section, we make use of the vertex-clique incidence matrix associated with a clique partition of a graph \(G\). Recall that for a graph \(G=(V,E)\) with the vertex set \(V=[n]=\{1,2,\ldots,n\}\) and \(m=|E|\) edges, and for a given clique partition \(F=\{C_{1},C_{2},\ldots,C_{k}\}\) of \(G\), we consider the matrix \(\mathcal{M}_{F}\) with rows and columns indexed by the vertices in \(V\) and the cliques in \(F\), respectively, such that the \((i,j)\)-entry of \(\mathcal{M}_{F}\) is equal to one if and only if the vertex \(i\) belongs to the clique \(C_{j}\in F\). Observe that when \(F=E\), \(\mathcal{M}_{F}\) is simply the conventional incidence matrix of the graph \(G\). For each vertex \(i\in[n]\) of the graph \(G\), we define a new parameter \(t_{i}^{F}=t_{i}^{F}(G)\) to be the number of cliques in \(F\) containing the vertex \(i\), that is,
\[t_{i}^{F}=|\{j\in[k]\,:\,C_{j}\in F,\,i\in C_{j}\}|.\]
We call \(t_{i}^{F}(G)\) _the clique-degree_ of the vertex \(i\) in the graph \(G\) associated with \(F\), and, without loss of generality, we assume that \(t_{1}^{F}\geq t_{2}^{F}\geq\ldots\geq t_{n}^{F}\). Given a clique partition \(F=\{C_{1},C_{2},\ldots,C_{k}\}\) of \(G\), we consider different possible classes of graphs as follows:
\((i)\) The graph \(G\) is _\(t\) clique-regular_ if \(t_{1}^{F}=\cdots=t_{n}^{F}=t\),
\((ii)\) The graph \(G\) is _\(s\) clique-uniform_ if \(|C_{1}|=\cdots=|C_{k}|=s\),
\((iii)\) The graph \(G\) is \((s,t)\)_regular_ if \(t_{1}^{F}=\cdots=t_{n}^{F}=t\) and \(|C_{1}|=\cdots=|C_{k}|=s\).
Any graph is \(2\) clique-uniform and any \(d\)-regular graph is also \(d\) clique-regular using the trivial clique partition \(F=E\).
Let \(\mathcal{D}_{F}\) be the \(n\times n\) diagonal matrix with row and column indexed by the vertex set \(V\) with \((i,i)\)-entry equal to \(t_{i}^{F}\), that is, \(\mathcal{D}_{F}=diag(t_{1}^{F},\ldots,t_{n}^{F})\). The inner product of any two distinct rows of \(\mathcal{M}_{F}\) indexed by vertices \(i\) and \(j\) is equal to the number of cliques in \(F\) containing the vertices \(i\) and \(j\). By definition of the clique partition \(F\), if \(i\) and \(j\) are adjacent, then this number is equal to \(1\) and otherwise \(0\). This leads to the following result:
**Theorem 3.1**.: _Let \(\mathcal{M}_{F}\) be the vertex-clique incidence matrix of \(G\) associated with a given clique partition \(F\). Then \(\mathcal{M}_{F}\mathcal{M}_{F}^{T}=\mathcal{D}_{F}+\mathcal{A}\), where \(\mathcal{D}_{F}=diag(t_{1}^{F},\ldots,t_{n}^{F})\) and \(\mathcal{A}\) is the adjacency matrix of \(G\)._
As mentioned above, in the case of \(F=E\), the matrix \(\mathcal{M}_{F}\) is the incidence matrix \(M\) of \(G\) and consequently, \(\mathcal{M}_{F}\mathcal{M}_{F}^{T}=MM^{T}\) is the signless Laplacian matrix of \(G\), where we assume that the sequence of vertex degrees is ordered as \(d_{1}\geq d_{2}\geq\cdots\geq d_{n}\). Notice that in this case, \(t_{i}^{F}=d_{i}\) for \(1\leq i\leq n\). Motivated by this observation, for any clique partition \(F\) we call \(\mathcal{Q}_{F}=\mathcal{M}_{F}\mathcal{M}_{F}^{T}\) the _signless Laplacian matrix of the graph \(G\) associated with the clique partition \(F\)_. Since we always have \(D\geq\mathcal{D}_{F}\), it follows that \(Q=D+\mathcal{A}\geq\mathcal{D}_{F}+\mathcal{A}=\mathcal{Q}_{F}\geq 0\). Now define _the clique partition graph_ \(P_{G}\) with \(k\) vertices, where vertex \(i\) corresponds to the clique \(C_{i}\) in \(F\), and two vertices of \(P_{G}\) are adjacent if and only if the corresponding cliques in \(F\) have a vertex in common. If \(F=E\), then \(P_{G}=L_{G}\), that is, the line graph of \(G\). The inner product of two columns of \(\mathcal{M}_{F}\) is nonzero if and only if the corresponding cliques have a common vertex. From the definition of a clique partition, this nonzero value must be \(1\). These facts immediately yield the following result:
**Theorem 3.2**.: _Let \(\mathcal{M}_{F}\) be the incidence matrix of \(G\) associated with a clique partition \(F\). Then \(\mathcal{M}_{F}^{T}\mathcal{M}_{F}=\mathcal{S}_{F}+\mathcal{A}(P_{G})\), where \(\mathcal{S}_{F}=diag(s_{1}^{F},\ldots,s_{k}^{F})\) and \(s_{i}^{F}=|C_{i}|\) and \(\mathcal{A}(P_{G})\) stands for the adjacency matrix of the graph \(P_{G}\)._
For the case of \(F=E\), we have \(\mathcal{M}_{F}^{T}\mathcal{M}_{F}=M^{T}M=2I_{m}+\mathcal{A}(L_{G})\), and \(P_{G}=L_{G}\) so \(s_{i}^{F}=2\) for \(1\leq i\leq k=m\).
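Both factorizations are easy to verify numerically. The following sketch (Python/NumPy; the paw graph and its two-clique partition are our own illustrative choice, not taken from the text) builds \(\mathcal{M}_{F}\) from a clique partition and checks Theorems 3.1 and 3.2.

```python
import numpy as np

# Paw graph on vertices 0..3 with edges {01, 02, 12, 23} (0-based labels);
# clique partition F = {C1 = {0,1,2}, C2 = {2,3}}.
n, cliques = 4, [{0, 1, 2}, {2, 3}]

# Vertex-clique incidence matrix M_F (0/1 entries, rows = vertices, columns = cliques).
M = np.array([[1 if v in C else 0 for C in cliques] for v in range(n)], float)

A = np.zeros((n, n))                      # adjacency matrix of the paw graph
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1

D_F = np.diag(M.sum(axis=1))              # clique-degrees t_i^F on the diagonal
S_F = np.diag(M.sum(axis=0))              # clique sizes s_j^F on the diagonal
A_P = np.array([[0, 1], [1, 0]], float)   # clique partition graph P_G (C1 ~ C2 share vertex 2)

assert np.allclose(M @ M.T, D_F + A)      # Theorem 3.1
assert np.allclose(M.T @ M, S_F + A_P)    # Theorem 3.2
print("Q_F =\n", M @ M.T)
```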
### Applications of the vertex-clique incidence matrix to graph spectrum
In this section, we develop several results on the spectrum of the graph \(G\) and its clique partition graph \(P_{G}\) using the vertex-clique incidence matrix of the graph. Considering \(\mathcal{R}_{F}=\mathcal{M}_{F}^{T}\mathcal{M}_{F}\) and applying Lemma 2.2, we conclude that the nonzero eigenvalues of the matrices \(\mathcal{Q}_{F}\) and \(\mathcal{R}_{F}\) are the same. This fact leads to the following basic results.
**Theorem 3.3**.: _We have the following._
\((i)\) _If \(1\leq i\leq\min\{n,k\}\), then \(\lambda_{i}(\mathcal{Q}_{F})=\lambda_{i}(\mathcal{R}_{F})\)._
\((ii)\): _If_ \(\min\{n,k\}=n\) _then_ \(\lambda_{i}(\mathcal{R}_{F})=0\) _for_ \(n+1\leq i\leq k\)_._ \((iii)\): _If_ \(\min\{n,k\}=k\) _then_ \(\lambda_{i}(\mathcal{Q}_{F})=0\) _for_ \(k+1\leq i\leq n\)_._
Recall that if \(F=E\), then \(\mathcal{Q}_{F}=Q\) and \(\mathcal{R}_{F}=2I_{m}+\mathcal{A}(L_{G})\). Combining these equations with Theorem 3.3 leads to the following well-known result [11, 12]:
**Lemma 3.4**.: _Let \(G\) be a graph of order \(n\) with \(m\) edges. Then_
\[q_{i}(G)=2+\lambda_{i}(L_{G})\ \ \text{for $1\leq i\leq\min\{n,m\}$}.\]
_In particular if \(m>n\) then \(\lambda_{i}(L_{G})=-2\) for \(i>n\), and if \(n>m\) then \(q_{i}(G)=0\) for \(i>m\)._
The following result is obtained by applying Theorem 3.3 for a \((s,t)\) regular graph \(G\) with the clique partition \(F\).
**Theorem 3.5**.: _Let \(G\) be a \((s,t)\) regular graph of order \(n\) with a clique partition \(F\) of size \(k\)._
\((i)\): _If_ \(1\leq i\leq\min\{n,k\}\) _then_ \(\lambda_{i}(G)-\lambda_{i}(P_{G})=s-t\)_._ \((ii)\): _If_ \(\min\{n,k\}=n\) _then_ \(\lambda_{i}(P_{G})=-s\) _for_ \(n+1\leq i\leq k\)_._ \((iii)\): _If_ \(\min\{n,k\}=k\) _then_ \(\lambda_{i}(G)=-t\) _for_ \(k+1\leq i\leq n\)_._
Proof: \((i)\) By Theorem 3.3\((i)\), if \(1\leq i\leq\min\{n,k\}\), then \(\lambda_{i}(\mathcal{Q}_{F})=\lambda_{i}(\mathcal{R}_{F})\), that is, \(\lambda_{i}(\mathcal{D}_{F}+\mathcal{A}(G))=\lambda_{i}(\mathcal{S}_{F}+ \mathcal{A}(P_{G}))\), that is, \(\lambda_{i}(tI_{n}+\mathcal{A}(G))=\lambda_{i}(sI_{k}+\mathcal{A}(P_{G}))\), that is, \(t+\lambda_{i}(G)=s+\lambda_{i}(P_{G})\).
\((ii)\): By Theorem 3.3\((ii)\), if \(\min\{n,k\}=n\) then \(\lambda_{i}(\mathcal{R}_{F})=0\) for \(n+1\leq i\leq k\), that is, \(\lambda_{i}(sI_{k}+\mathcal{A}(P_{G}))=0\) for \(n+1\leq i\leq k\), that is, \(\lambda_{i}(P_{G})=-s\) for \(n+1\leq i\leq k\).
\((iii)\): By Theorem 3.3\((iii)\), if \(\min\{n,k\}=k\) then \(\lambda_{i}(\mathcal{Q}_{F})=0\) for \(k+1\leq i\leq n\), that is, \(\lambda_{i}(tI_{n}+\mathcal{A}(G))=0\) for \(k+1\leq i\leq n\), that is, \(\lambda_{i}(G)=-t\) for \(k+1\leq i\leq n\).
**Example 3.6**.: \((i)\) Considering the complete graph \(K_{n}\) and its minimum clique partition \(F\) with only one clique, we have \(\mathcal{M}_{F}=\mathbf{1}_{n}\), the all-ones column vector of length \(n\), so that \(\mathcal{M}_{F}\mathcal{M}_{F}^{T}=J_{n}\) and \(\mathcal{M}_{F}^{T}\mathcal{M}_{F}=[n]\). Applying Theorem 3.5 here we have \(t_{i}^{F}=1\) for \(1\leq i\leq n\), \(k=1\) and \(s_{1}^{F}=n\), that is, \(K_{n}\) is an \((n,1)\) regular graph. From this with Theorem 3.5\((i)\) we arrive at \(1+\lambda_{1}(K_{n})=n+\lambda_{1}(K_{1})\), that is, \(\lambda_{1}(K_{n})=n-1\), and by Theorem 3.5\((iii)\), \(\lambda_{i}(K_{n})=-1\) for \(2\leq i\leq n\).
\((ii)\): Considering the clique partition
\[F=\left\{C_{1}=\{1,2,6\},C_{2}=\{2,3,4\},C_{3}=\{1,3,5\},C_{4}=\{4,5,6\}\right\}\]
for \(G\) isomorphic to the complete tripartite graph \(K_{2,2,2}\) (or \(G\cong K_{2,2,2}\)) in Figure 1, we have \(s_{i}^{F}=3\) for \(i\in[4]\) and \(t_{j}^{F}=2\) for \(j\in[6]\). Then \(G\) is a \((3,2)\) regular graph. Moreover,
\[\mathcal{M}_{F}=\left(\begin{array}{cccc}1&0&1&0\\ 1&1&0&0\\ 0&1&1&0\\ 0&1&0&1\\ 0&0&1&1\\ 1&0&0&1\end{array}\right),\ \mathcal{Q}_{F}=\left(\begin{array}{cccccc}2&1&1&0&1&1\\ 1&2&1&1&0&1\\ 1&1&2&1&1&0\\ 0&1&1&2&1&1\\ 1&0&1&1&2&1\\ 1&1&0&1&1&2\end{array}\right),\ \mathcal{R}_{F}=\left(\begin{array}{cccc}3&1&1&1\\ 1&3&1&1\\ 1&1&3&1\\ 1&1&1&3\end{array}\right)\]
and by Theorem 3.5, we have \(\lambda_{i}(G)=1+\lambda_{i}(P_{G})\) for \(1\leq i\leq 4\) and \(\lambda_{i}(G)=-2\) for \(i=5,6\). From these facts with \(P_{G}\cong K_{4}\), we arrive at \(\operatorname{Spec}(G)=\{4,0,0,0,-2,-2\}\).
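As a numerical illustration (a sketch in Python/NumPy, using the clique partition of Example 3.6\((ii)\) with 1-based vertex labels), one can rebuild \(\mathcal{Q}_{F}\) and \(\mathcal{R}_{F}\) from the cliques and confirm both the spectrum of \(K_{2,2,2}\) and the shift \(s-t=1\) of Theorem 3.5\((i)\):

```python
import numpy as np

# Clique partition of K_{2,2,2} from Example 3.6(ii) (vertex labels 1..6).
cliques = [{1, 2, 6}, {2, 3, 4}, {1, 3, 5}, {4, 5, 6}]
M = np.array([[1 if v in C else 0 for C in cliques] for v in range(1, 7)], float)

Q_F = M @ M.T                 # = 2*I + A(G), since G is (3,2) regular
A = Q_F - 2 * np.eye(6)       # recover the adjacency matrix of K_{2,2,2}
R_F = M.T @ M                 # = 3*I + A(P_G) with P_G = K_4

spec_G = np.sort(np.linalg.eigvalsh(A))[::-1]
spec_P = np.sort(np.linalg.eigvalsh(R_F - 3 * np.eye(4)))[::-1]

print(np.round(spec_G, 6))                 # [ 4.  0.  0.  0. -2. -2.]  as claimed
# Theorem 3.5(i): lambda_i(G) - lambda_i(P_G) = s - t = 3 - 2 for i = 1..4
print(np.round(spec_G[:4] - spec_P, 6))    # [1. 1. 1. 1.]
```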
Now, applying the theory of clique partitions and vertex-clique incidence matrices, we obtain a new lower bound for the smallest eigenvalue of a graph.
**Theorem 3.7**.: _Let \(G\) be a graph of order \(n\) and let \(t_{1}^{F}\) be the largest clique-degree of \(G\) with a given clique partition \(F\). Then_
\[\lambda_{n}(G)\geq-t_{1}^{F}. \tag{4}\]
_Moreover, if equality holds in (4), then \(rank(\mathcal{M}_{F})<n\) and if \(rank(\mathcal{M}_{F})<n\) and \(G\) is clique-regular, then equality holds in (4)._
Proof: Since \(\mathcal{Q}_{F}=\mathcal{D}_{F}+\mathcal{A}\) is a positive semi-definite matrix, we have \(\mathcal{D}_{F}\geq-\mathcal{A}\) and by Lemma 2.1 we arrive at
\[\lambda_{i}(\mathcal{D}_{F})\geq\lambda_{i}(-\mathcal{A})\;\;\text{for $1\leq i \leq n$}. \tag{5}\]
Considering \(i=1\) we arrive at \(-\lambda_{n}(G)=\lambda_{1}(-\mathcal{A})\leq\lambda_{1}(\mathcal{D}_{F})=t_{ 1}^{F}\), which gives the required result in (4).
For the second part of the proof, suppose that \(\lambda_{n}(G)=-t_{1}^{F}\). Then \(\lambda_{n}(t_{1}^{F}I+\mathcal{A})=0\). This, together with the relation \(0\leq\mathcal{Q}_{F}=\mathcal{D}_{F}+\mathcal{A}\leq t_{1}^{F}I+\mathcal{A}\), gives \(\lambda_{n}(\mathcal{Q}_{F})=0\), that is, \(rank(\mathcal{M}_{F})=rank(\mathcal{Q}_{F})<n\). Now assume that \(t_{1}^{F}=\cdots=t_{n}^{F}\) and \(rank(\mathcal{M}_{F})<n\). Then \(rank(\mathcal{Q}_{F})<n\), that is, \(\lambda_{i}(\mathcal{Q}_{F})=0\) for \(1+rank(\mathcal{M}_{F})\leq i\leq n\); since \(\mathcal{Q}_{F}=t_{1}^{F}I+\mathcal{A}\), this gives \(t_{1}^{F}+\lambda_{i}(G)=0\), that is, \(\lambda_{n}(G)=-t_{1}^{F}\) with multiplicity at least \(n-rank(\mathcal{M}_{F})\).
**Corollary 3.8**.: _All regular bipartite graphs and all clique-regular graphs with \(n>|F|\) satisfy the equality in (4)._
Proof: First we assume that \(G\) is a regular bipartite graph. Since \(G\) is bipartite, it contains no triangles, so every clique in a clique partition is an edge and \(t_{i}^{F}=d_{i}\) for \(i\in[n]\); moreover, \(q_{n}=\lambda_{n}(Q)=0\). On the other hand, since \(G\) is regular, we have \(t_{1}^{F}=\cdots=t_{n}^{F}\). These facts together with Theorem 3.7 show that every regular bipartite graph satisfies the equality in (4).
Next assume that \(G\) is a clique-regular graph with \(n>k=|F|\). Since \(rank(\mathcal{M}_{F})\leq\min\left\{n,k\right\}\leq k<n\), the desired result is obtained by Theorem 3.7.
Theorem 3.7 holds for any clique partition \(F\) of \(G\), which leads to the following result:
**Corollary 3.9**.: _Let \(G\) be a graph of order \(n\) and let \(t_{1}^{F}\) be the largest clique-degree of \(G\) with a given clique partition \(F\). Then_
\[\lambda_{n}(G)\geq-\min_{F}t_{1}^{F},\]
_where the minimum is over all clique partitions \(F\) of \(G\)._
The following example shows that for the equality \(\lambda_{n}(G)=-t_{1}^{F}\) the graph \(G\) does not need to be clique-regular.
**Example 3.10**.: _For the graph \(G\) given in Figure 2, we have_
\[F=\Big{\{}\{1,2\},\{2,3\},\{1,3,4,6,7\},\{4,5\},\{5,6\}\Big{\}}.\]
_This gives \(t_{i}^{F}=2\) for \(i\in[6]\) and \(t_{7}^{F}=1\). The graph is the line graph of the graph \(H\cong K_{1}\vee(2K_{2}\cup K_{1})\) of order \(6\) with \(7\) edges. Then the smallest eigenvalue of \(G\) is \(\lambda_{7}(G)=\lambda_{7}(L_{H})=-2=-t_{1}^{F}\) while \(t_{1}^{F}\neq t_{7}^{F}\)._
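Since the edge set of the graph in Example 3.10 is exactly the union of the edges of the cliques in \(F\), the equality \(\lambda_{7}(G)=-2=-t_{1}^{F}\) can be checked directly; a short sketch (Python/NumPy, with the 1-based labels of the example shifted to 0-based indices):

```python
import numpy as np
from itertools import combinations

# Clique partition F of the graph G in Example 3.10 (Figure 2), vertices 1..7.
F = [{1, 2}, {2, 3}, {1, 3, 4, 6, 7}, {4, 5}, {5, 6}]

A = np.zeros((7, 7))
for C in F:                                  # edges of G = union of the cliques' edges
    for i, j in combinations(sorted(C), 2):
        A[i - 1, j - 1] = A[j - 1, i - 1] = 1

t = [sum(v in C for C in F) for v in range(1, 8)]    # clique-degrees t_i^F
spec = np.sort(np.linalg.eigvalsh(A))
print(max(t), np.round(spec[0], 6))          # t_1^F = 2 and lambda_7(G) = -2.0
```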
In the following we provide a lower bound for the negative inertia of a graph \(G\) of order \(n\).
**Theorem 3.11**.: _Let \(G\) be a graph of order \(n\). Then_
\[\nu^{-}(G)\geq n-\min_{F}rank(\mathcal{M}_{F}), \tag{6}\]
_where minimum is over all clique partitions \(F\) of \(G\). Moreover, if \(\min_{F}rank(\mathcal{M}_{F})<n\), then \(-t_{1}^{F}\leq\lambda_{i}(G)\leq-t_{n}^{F}\) for \(1+\min_{F}rank(\mathcal{M}_{F})\leq i\leq n\)._
Figure 2: The Graph \(G\).
Proof: If \(\min\limits_{F}rank(\mathcal{M}_{F})=n\), then the result in (6) is obvious. Assume that \(F_{1}\) is a clique partition of \(G\) with \(rank(\mathcal{M}_{F_{1}})=\min\limits_{F}rank(\mathcal{M}_{F})<n\). In this case, since \(rank(\mathcal{Q}_{F_{1}})=rank(\mathcal{M}_{F_{1}})\) and \(\mathcal{Q}_{F_{1}}\) is a positive semi-definite matrix, we have \(\lambda_{i}(\mathcal{Q}_{F_{1}})=0\) for \(1+rank(\mathcal{M}_{F_{1}})\leq i\leq n\). From this and the fact that \(t_{n}^{F}+\lambda_{i}(G)\leq\lambda_{i}(\mathcal{Q}_{F_{1}})\leq t_{1}^{F}+ \lambda_{i}(G)\), we have \(-t_{1}^{F}\leq\lambda_{i}(G)\leq-t_{n}^{F}<0\) for \(1+rank(\mathcal{M}_{F_{1}})\leq i\leq n\), which gives the desired results.
The following result is obtained by Theorem 3.11 and the fact \(rank(\mathcal{M}_{F})\leq|F|\).
**Corollary 3.12**.: _Let \(G\) be a graph of the order \(n\) and a clique partition \(F\) such that \(n>|F|\). Then \((i)\ -t_{1}^{F}\leq\lambda_{i}(G)\leq-t_{n}^{F}\) for \(|F|+1\leq i\leq n\)._
_\((ii)\) \(\nu^{-}(G)\geq n-|F|\)._
Considering \(F\) as a minimum clique partition of \(G\), we arrive at the following result:
**Corollary 3.13**.: _Let \(G\) be a graph of the order \(n\) and clique partition number \(cp(G)\). If \(cp(G)<n\), then \((i)\ -t_{1}^{F}\leq\lambda_{i}(G)\leq-t_{n}^{F}\) for \(cp(G)+1\leq i\leq n\). \((ii)\ \nu^{-}(G)\geq n-cp(G)\)._
For any graph \(G\) of order \(n\) we have [11]
\[\alpha(G)\leq\min\{n-\nu^{-}(G),\,n-\nu^{+}(G)\}, \tag{7}\]
where \(\nu^{-}\) and \(\nu^{+}\) are, respectively, the negative and positive inertia of the graph \(G\). This implies that
\[\nu^{-}(G)\leq n-\alpha(G). \tag{8}\]
In the following we give a sufficient condition under which the equality in (8) holds.
**Theorem 3.14**.: _Let \(G\) be a graph of order \(n\) with the independence number \(\alpha(G)\) and the clique partition number \(cp(G)\). If \(F\) is a clique partition with \(rank(\mathcal{M}_{F})=\alpha(G)\), then \(\nu^{-}(G)=n-\alpha(G)\). In particular, if \(cp(G)=\alpha(G)\), then \(\nu^{-}(G)=n-\alpha(G)\)._
Proof: By Theorem 3.11 we have
\[\nu^{-}(G)\geq n-rank(\mathcal{M}_{F})=n-rank(\mathcal{Q}_{F})=\eta(\mathcal{ Q}_{F}).\]
This fact along with (8) gives
\[\eta(\mathcal{Q}_{F})\leq\nu^{-}(G)\leq n-\alpha(G). \tag{9}\]
The assumption that \(\text{rank}(\mathcal{M}_{\text{F}})=\alpha(G)\) is equivalent to \(\eta(\mathcal{Q}_{F})=n-\alpha(G)\). This with (9) gives the first required result.
Without loss of generality, we may assume that the vertex set \([\alpha]\) is a maximum independent set in \(G\) and \(C_{i}\) is a clique of a minimum clique partition \(F_{m}\) containing the vertex \(i\in[\alpha]\). Now in \(\mathcal{M}_{F_{m}}\) we consider the submatrix induced by the rows and columns corresponding to the vertex set \([\alpha]\) and the clique set \(\{C_{i}\,:\,i\in[\alpha]\}\), respectively. This square submatrix is equal to the identity matrix of size \(\alpha\) (a nonzero off-diagonal entry \((i,j)\) would force the non-adjacent vertices \(i\) and \(j\) to lie in the common clique \(C_{j}\)), and hence \(\mathrm{rank}(\mathcal{M}_{F_{m}})=\mathrm{rank}(\mathcal{Q}_{F_{m}})\geq\mathrm{rank}(I_{\alpha})=\alpha\). Since \(\mathrm{rank}(\mathcal{Q}_{F_{m}})\leq cp(G)\) and using the assumption \(cp(G)=\alpha(G)\), we arrive at \(\mathrm{rank}(\mathcal{M}_{F_{m}})=\mathrm{rank}(\mathcal{Q}_{F_{m}})=\alpha\), and therefore \(\nu^{-}(G)=n-\alpha(G)\) by the first part of the theorem.
The following result is obtained from (5).
**Theorem 3.15**.: _Let \(G\) be a graph of order \(n\) and the negative inertia \(\nu^{-}\). Let \(t_{i}^{F}\) be the \(i\)th largest clique-degree of \(G\) with a clique partition \(F\). Then for \(1\leq i\leq\nu^{-}\), we have_
\[\lambda_{n-i+1}(G)\geq-t_{i}^{F}. \tag{10}\]
_Equality holds in (10) if \(G\) is a clique-regular graph with \(\nu^{-}=n-|F|\)._
Since \(\mathcal{R}_{F}\) is a positive semi-definite matrix, by a similar manner used in the proof of Theorem 3.7, we obtain the following result.
**Theorem 3.16**.: _Let \(G\) be a graph of order \(n\) with a clique partition \(F=\{C_{1},\ldots,C_{k}\}\) and let \(|C_{i}|=s_{i}^{F}\) for \(1\leq i\leq k\) such that \(s_{1}^{F}\geq s_{2}^{F}\geq\ldots\geq s_{k}^{F}\). Then_
\[\lambda_{k}(P_{G})\geq-s_{1}^{F}. \tag{11}\]
_Equality holds in (11) if \(G\) is a \(s_{1}^{F}\) clique-uniform graph with \(k>n\)._
Proof: Since \(\mathcal{R}_{F}=\mathcal{S}_{F}+\mathcal{A}(P_{G})\) is a positive semi-definite matrix, we have \(\mathcal{S}_{F}\geq-\mathcal{A}(P_{G})\) and by Lemma 2.1, it follows that
\[\lambda_{i}(\mathcal{S}_{F})\geq\lambda_{i}(-\mathcal{A}(P_{G}))\;\;\text{ for }1\leq i\leq k. \tag{12}\]
Considering \(i=1\) we have \(-\lambda_{k}(P_{G})=\lambda_{1}(-\mathcal{A}(P_{G}))\leq\lambda_{1}(\mathcal{ S}_{F})=s_{1}^{F}\), which gives the required result in (11).
Now assume that \(G\) is a \(s_{1}^{F}\) clique-uniform graph with \(k>n\). By Theorem 3.3 (ii) with \(k>n\), we arrive at \(\lambda_{i}(\mathcal{R}_{F})=0\) for \(n+1\leq i\leq k\). On the other hand, since \(s_{1}^{F}=\cdots=s_{k}^{F}\) we have \(\mathcal{R}_{F}=s_{1}^{F}I_{k}+\mathcal{A}(P_{G})\), and consequently \(\lambda_{i}(\mathcal{R}_{F})=s_{1}^{F}+\lambda_{i}(P_{G})=0\). That is, \(\lambda_{i}(P_{G})=-s_{1}^{F}\) for \(n+1\leq i\leq k\), that is, \(\lambda_{k}(P_{G})=-s_{1}^{F}\) with multiplicity at least \(k-n\).
Theorem 3.16 holds for any clique partition \(F\) of \(G\), which gives the following result:
**Corollary 3.17**.: _Let \(G\) be a graph of order \(n\) with a clique partition \(F=\{C_{1},\ldots,C_{k}\}\) and let \(|C_{i}|=s_{i}^{F}\) for \(1\leq i\leq k\) such that \(s_{1}^{F}\geq s_{2}^{F}\geq\cdots\geq s_{k}^{F}\). Then_
\[\lambda_{k}(P_{G})\geq-\min_{F}s_{1}^{F}, \tag{13}\]
_where minimum is over all clique partitions \(F\) of \(G\)._
In the case of \(k>n\), we have \(\lambda_{i}(\mathcal{R}_{F})=0\) for \(1+n\leq i\leq k\) by Theorem 3.3. Since \(s_{k}^{F}+\lambda_{i}(P_{G})\leq\lambda_{i}(\mathcal{R}_{F})\leq s_{1}^{F}+ \lambda_{i}(P_{G})\), we get \(-s_{1}^{F}\leq\lambda_{i}(P_{G})\leq-s_{k}^{F}<0\). We summarize this in the next result.
**Theorem 3.18**.: _Let \(G\) be a graph of order \(n\) and a clique partition \(F\) with \(|F|=k>n\). Then \((i)\ -s_{1}^{F}\leq\lambda_{i}(P_{G})\leq-s_{k}^{F}\) for \(1+n\leq i\leq k\). \((ii)\ \nu^{-}(P_{G})\geq k-n\)._
The following result follows from (12).
**Theorem 3.19**.: _Let \(G\) be a graph of order \(n\) with a clique partition \(F=\{C_{1},\ldots,C_{k}\}\) and let \(|C_{i}|=s_{i}^{F}\) for \(1\leq i\leq k\) such that \(s_{1}^{F}\geq s_{2}^{F}\geq\cdots\geq s_{k}^{F}\). If \(P_{G}\) is the corresponding clique partition graph of \(G\), then for \(1\leq i\leq\nu^{-}(P_{G})\),_
\[\lambda_{k-i+1}(P_{G})\geq-s_{i}^{F}. \tag{14}\]
_Equality in (14) holds if \(G\) is a \(s_{1}^{F}\) clique-uniform graph with \(\nu^{-}(P_{G})=k-n\)._
The following concerns the signless Laplacian eigenvalues of a graph.
**Theorem 3.20**.: _Let \(G\) be a graph of order \(n\) and having a clique partition \(F\) with \(|F|=k\) and assume \(1\leq i\leq\min\{n,k\}\). \((i)\) If \(G\) is a \(t\) clique-regular graph, then \(q_{i}(G)-\lambda_{i}(G)\geq t\). \((ii)\) If \(G\) is a \(s\) clique-uniform graph, then \(q_{i}(G)-\lambda_{i}(P_{G})\geq s\)._
Proof: From Section 3, the signless Laplacian matrix \(Q\) of \(G\) satisfies \(Q\geq\mathcal{Q}_{F}\). This fact with Lemma 2.1 gives \(q_{i}(G)\geq\lambda_{i}(\mathcal{Q}_{F})\), where \(q_{i}(G)\) and \(\lambda_{i}(\mathcal{Q}_{F})\) are, respectively, the \(i\)th largest signless Laplacian eigenvalue of \(G\) and the \(i\)th largest eigenvalue of the matrix \(\mathcal{Q}_{F}\). Combining this with Theorem 3.3 and the facts \(\lambda_{i}(\mathcal{Q}_{F})=t+\lambda_{i}(G)\) and \(\lambda_{i}(\mathcal{R}_{F})=s+\lambda_{i}(P_{G})\) yields the desired results in \((i)\) and \((ii)\).
### Applications to energy of graphs and matrices
In this section, using the theory of vertex-clique incidence matrices of a graph, we introduce a new notion of graph energies, as a generalization of the incidence energy and the signless Laplacian energy of the graph. Finally, we present new upper bounds on energies of a graph, its clique partition graph and line graph.
The energy \(\mathcal{E}(G)\) of the graph \(G\) defined in (2) has the equivalent expressions as follows [12]:
\[\mathcal{E}(G)=2\sum_{i=1}^{\nu^{+}}\lambda_{i}=2\sum_{i=1}^{\nu^{-}}-\lambda_ {n-i+1}=2\max_{1\leq k\leq n}\sum_{i=1}^{k}\lambda_{i}=2\max_{1\leq k\leq n} \sum_{i=1}^{k}-\lambda_{n-i+1} \tag{15}\]
where \(\nu^{+}\) and \(\nu^{-}\) are respectively the positive and the negative inertia of \(G\). Nikiforov [31, 32, 33] proposed a significant extension and generalization of the graph energy concept. The energy of an \(r\times s\) matrix \(B\) is the summation of its singular values, that is,
\[\mathcal{E}(B)=\sum_{i=1}^{s}\sigma_{i}(B). \tag{16}\]
Consonni and Todeschini [10] introduced an entire class of matrix-based quantities, defined as
\[\sum_{i=1}^{n}|x_{i}-\overline{x}|, \tag{17}\]
where \(x_{1},\ x_{2},\ \dots,\ x_{n}\) are the eigenvalues of the respective matrix, and \(\overline{x}\) is their arithmetic mean.
According to (16) and (17), two types of energies can then be defined for any matrix \(B\). The incidence energy \(IE(G)\) of a graph \(G\) is defined to be the energy of the incidence matrix of \(G\) of the type (16), i.e.,
\[IE(G)=\mathcal{E}(M)=\sum_{i=1}^{m}\sigma_{i}(M)=\sum_{i=1}^{m}\sqrt{\lambda_ {i}(M^{T}M)}=\sum_{i=1}^{n}\sqrt{\lambda_{i}(MM^{T})}=\sum_{i=1}^{n}\sqrt{q_{ i}}.\]
Similarly, _the vertex-clique incidence energy \(IE_{F}(G)\) of \(G\) associated with the clique partition \(F\)_ is defined as the energy of the vertex-clique incidence matrix \(\mathcal{M}_{F}\), i.e.,
\[IE_{F}(G)=\mathcal{E}(\mathcal{M}_{F}) = \sum_{i=1}^{k}\sigma_{i}(\mathcal{M}_{F})=\sum_{i=1}^{k}\sqrt{ \lambda_{i}(\mathcal{M}_{F}^{T}\mathcal{M}_{F})}\] \[= \sum_{i=1}^{n}\sqrt{\lambda_{i}(\mathcal{M}_{F}\mathcal{M}_{F}^{ T})}=\sum_{i=1}^{n}\sqrt{\lambda_{i}(\mathcal{Q}_{F})}.\]
Observe
\[Q-\mathcal{Q}_{\mathcal{F}}=(D+\mathcal{A})-(\mathcal{D}_{F}+\mathcal{A})=D- \mathcal{D}_{F}=diag(d_{1}-t_{1}^{F},d_{2}-t_{2}^{F},\dots,d_{n}-t_{n}^{F}) \geq 0.\]
From the above and using Lemma 2.1 we have \(q_{i}=\lambda_{i}(Q)\geq\lambda_{i}(Q_{F})\) and, consequently, we have
\[IE_{F}(G)=\sum_{i=1}^{n}\sqrt{\lambda_{i}(Q_{F})}\leq\sum_{i=1}^{n}\sqrt{q_{i}}= IE(G)\]
with equality if and only if \(F=E\).
Moreover,
\[\sum_{i=1}^{n}\lambda_{i}(Q_{F})=\sum_{i=1}^{n}t_{i}^{F},\quad\sum_{i=1}^{n} \lambda_{i}^{2}(Q_{F})=\sum_{i=1}^{n}(t_{i}^{F})^{2}+2m.\]
Applying the fact that the diagonal entries are majorized by the eigenvalues of \(Q_{F}\) and by a similar method given in [13] it can be shown that
\[\sum_{i=1}^{n}\sqrt{\lambda_{i}(Q_{F})}\leq\sum_{i=1}^{n}\sqrt{t_{i}^{F}}.\]
Considering the energy of the matrix \(Q_{F}\) of the type (17) gives
\[\mathcal{E}(Q_{F})=\sum_{i=1}^{n}\left|\lambda_{i}(Q_{F})-\overline{t}\right|, \tag{18}\]
where \(\overline{t}=\frac{\sum_{i=1}^{n}t_{i}^{F}}{n}\). The energy \(\mathcal{E}(Q_{F})\) can be viewed as a generalization of the signless Laplacian energy \(LE^{+}(G)\) of \(G\), which is defined as follows [1]:
\[LE^{+}(G)=\mathcal{E}(Q)=\sum_{i=1}^{n}|q_{i}-\frac{2m}{n}|.\]
Due to the similarity of the definitions for signless Laplacian energy \(LE^{+}(G)\) and \(\mathcal{E}(Q_{F})\) it follows that in most cases, results derived about \(LE^{+}(G)\) can be generalized to \(\mathcal{E}(Q_{F})\). For example, from Lemma 2.12 in [12] for \(LE^{+}(G)\), we obtain the following:
\[\mathcal{E}(Q_{F})=\max_{1\leq j\leq n}\left\{2\sum_{i=1}^{j}\lambda_{i}(Q_{F })-2j\,\overline{t}\right\}=2\sum_{i=1}^{\tau}\lambda_{i}(Q_{F})-2\,\overline{t}\,\tau, \tag{19}\]
where \(\tau\) is the largest positive integer such that \(\lambda_{\tau}(Q_{F})>\overline{t}\).
Using a method similar to the proof of Corollary 5 in [35] for \(Q_{F}-\overline{t}\,I=\mathcal{D}_{F}-\overline{t}\,I+\mathcal{A}\), we have
\[\mathcal{E}(Q_{F})-\mathcal{E}(G)\leq\sum_{i=1}^{n}|t_{i}^{F}-\overline{t}|.\]
In the next result, we show that for a clique-regular graph \(G\) associated with a clique partition \(F\), \(\mathcal{E}(Q_{F})=\mathcal{E}(G)\).
**Theorem 3.21**.: _If \(G\) is a clique-regular graph associated with a clique partition \(F\), then \(\mathcal{E}(\mathcal{Q}_{F})=\mathcal{E}(G)\)._
Proof: Suppose that \(G\) is \(t\) clique-regular. Then
\[\mathcal{E}(\mathcal{Q}_{F}) =\sum_{i=1}^{n}\left|\lambda_{i}(\mathcal{Q}_{F})-\frac{\sum_{i=1} ^{n}t_{i}^{F}}{n}\right|=\sum_{i=1}^{n}\left|\lambda_{i}(\mathcal{Q}_{F})-t\right|\] \[=\sum_{i=1}^{n}\left|\lambda_{i}(\mathcal{D}_{F}+\mathcal{A}(G))- t\right|=\sum_{i=1}^{n}\left|\lambda_{i}(tI+\mathcal{A}(G))-t\right|\] \[=\sum_{i=1}^{n}\left|\lambda_{i}(G)\right|=\mathcal{E}(G).\]
Note that for any \(t\) clique-regular graph \(G\), we have \(IE_{F}(G)=\sum_{i=1}^{n}\sqrt{\lambda_{i}(\mathcal{Q}_{F})}=\sum_{i=1}^{n} \sqrt{t+\lambda_{i}}\). Next we show that for a clique-uniform graph \(G\) we have \(\mathcal{E}(\mathcal{R}_{F})=\mathcal{E}(P_{G})\).
**Theorem 3.22**.: _If \(G\) is a clique-uniform graph with the clique partition graph \(P_{G}\), then \(\mathcal{E}(\mathcal{R}_{F})=\mathcal{E}(P_{G})\)._
Proof: Suppose that \(G\) is a \(s\) clique-uniform graph. Then
\[\mathcal{E}(\mathcal{R}_{F}) =\sum_{i=1}^{k}\left|\lambda_{i}(\mathcal{R}_{F})-\frac{\sum_{i=1 }^{k}s_{i}^{F}}{k}\right|=\sum_{i=1}^{k}\left|\lambda_{i}(\mathcal{R}_{F})-s\right|\] \[=\sum_{i=1}^{k}\left|\lambda_{i}(\mathcal{S}_{F}+\mathcal{A}(P_{ G}))-s\right|=\sum_{i=1}^{k}\left|\lambda_{i}(sI+\mathcal{A}(P_{G}))-s\right|\] \[=\sum_{i=1}^{k}\left|\lambda_{i}(P_{G})\right|=\mathcal{E}(P_{G}).\]
Note that for any \(s\) clique-uniform graph \(G\) with the clique partition graph \(P_{G}\), we have
\[IE_{F}(G)=\sum_{i=1}^{n}\sqrt{\lambda_{i}(\mathcal{Q}_{F})}=\sum_{i=1}^{k} \sqrt{\lambda_{i}(\mathcal{R}_{F})}=\sum_{i=1}^{k}\sqrt{s+\lambda_{i}(P_{G})}.\]
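Theorems 3.21 and 3.22 can be checked numerically on the running example \(K_{2,2,2}\), which is both \(2\) clique-regular and \(3\) clique-uniform for the partition of Example 3.6\((ii)\); a sketch (Python/NumPy, with our own helper name):

```python
import numpy as np

# K_{2,2,2} with the (3,2) regular clique partition from Example 3.6(ii).
cliques = [{1, 2, 6}, {2, 3, 4}, {1, 3, 5}, {4, 5, 6}]
M = np.array([[1 if v in C else 0 for C in cliques] for v in range(1, 7)], float)

Q_F, R_F = M @ M.T, M.T @ M
A, A_P = Q_F - 2 * np.eye(6), R_F - 3 * np.eye(4)     # adjacency of G and of P_G = K_4

def energy_about_mean(eigs, mean):
    """Energy of type (17): sum of |eigenvalue - mean of diagonal entries|."""
    return np.abs(eigs - mean).sum()

E_G  = np.abs(np.linalg.eigvalsh(A)).sum()                        # E(G)
E_QF = energy_about_mean(np.linalg.eigvalsh(Q_F), M.sum() / 6)    # E(Q_F), mean of t_i^F
E_P  = np.abs(np.linalg.eigvalsh(A_P)).sum()                      # E(P_G)
E_RF = energy_about_mean(np.linalg.eigvalsh(R_F), M.sum() / 4)    # E(R_F), mean of s_j^F

print(E_G, E_QF)    # both approx 8.0 (Theorem 3.21)
print(E_P, E_RF)    # both approx 6.0 (Theorem 3.22)
```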
In [12, Theorem 3.3], a relation between the energy of the line graph \(\mathcal{E}(L_{G})\) and the signless Laplacian energy \(LE^{+}(G)\) of \(G\) is given. In the following, we generalize this result by using the notion of a clique partition of a graph, and we provide a comparison between the energy \(\mathcal{E}(P_{G})\) of the clique partition graph \(P_{G}\) and \(\mathcal{E}(\mathcal{Q}_{F})\). For this we need the following lemma, which is obtained from Theorem 3.3 and is a generalization of Lemma 3.4.
**Lemma 3.23**.: _Let \(G\) be an s clique-uniform graph of order \(n\) associated with a clique partition \(F\) where \(|F|=k\). Then_
\[\lambda_{i}(Q_{F})=\lambda_{i}(P_{G})+s,\ \ \mbox{for}\ i\in\{1,\ldots,\min\{n,k\}\}.\]
**Theorem 3.24**.: _Let \(G\) be an s clique-uniform graph of order \(n\) associated with a clique partition \(F\) where \(|F|=k\)._
\((i)\) _If_ \(k<n\)_, then_ \(\mathcal{E}(P_{G})\leq\mathcal{E}(Q_{F})+\frac{2ks}{n}-2s\)_._ \((ii)\) _If_ \(k>n\)_, then_ \(\mathcal{E}(P_{G})\geq\mathcal{E}(Q_{F})+\frac{2ks}{n}-2s\)_._ \((iii)\) _If_ \(k=n\)_, then_ \(\mathcal{E}(P_{G})=\mathcal{E}(Q_{F})\)_._
Proof: \((i)\) Let \(\nu^{+}=\nu^{+}(P_{G})\leq k<n\). By Lemma 3.23 we have
\[\sum_{i=1}^{\nu^{+}}\lambda_{i}(P_{G})=\sum_{i=1}^{\nu^{+}}(\lambda_{i}(Q_{F} )-s)=\sum_{i=1}^{\nu^{+}}\lambda_{i}(Q_{F})-s\nu^{+}.\]
On the other hand, from (15) we have
\[\mathcal{E}(P_{G})=2\sum_{i=1}^{\nu^{+}}\lambda_{i}(P_{G}) = 2\sum_{i=1}^{\nu^{+}}\lambda_{i}(Q_{F})-2s\nu^{+}-2\nu^{+}\frac{\sum_{i=1}^{n}t_{i}^{F}}{n}+2\nu^{+}\frac{\sum_{i=1}^{n}t_{i}^{F}}{n}\] \[\leq \mathcal{E}(Q_{F})-2s\nu^{+}+2\nu^{+}\frac{ks}{n}\qquad\text{by (19), since }\sum_{i=1}^{n}t_{i}^{F}=ks\] \[= \mathcal{E}(Q_{F})+2\nu^{+}\left(\frac{ks}{n}-s\right)\] \[\leq \mathcal{E}(Q_{F})+\frac{2ks}{n}-2s\qquad\text{since }\nu^{+}\geq 1\text{ and }k<n.\]
\((ii)\) Recall that \(\tau\) is the largest positive integer such that \(\lambda_{\tau}(Q_{F})\geq\overline{t}=\frac{ks}{n}\), and note that \(\tau\leq n<k\). Again by Lemma 3.23 we have
\[\sum_{i=1}^{\tau}\lambda_{i}(Q_{F})=\sum_{i=1}^{\tau}(\lambda_{i}(P_{G}))+s\tau.\]
On the other hand, by (19) and Lemma 3.23 we have
\[\mathcal{E}(Q_{F})=2\sum_{i=1}^{\tau}\lambda_{i}(Q_{F})-\frac{2ks\tau}{n}=2 \sum_{i=1}^{\tau}\lambda_{i}(P_{G})+2s\tau-\frac{2ks\tau}{n}.\]
From (15) with the above equation we have
\[\mathcal{E}(P_{G})\geq 2\sum_{i=1}^{\tau}\lambda_{i}(P_{G})=\mathcal{E}(Q_{F} )+2\tau\left(\frac{ks}{n}-s\right)\geq\mathcal{E}(Q_{F})+\frac{2ks}{n}-2s.\]
\((iii)\) If \(k\neq n\), then \(\mathcal{E}(P_{G})\neq\mathcal{E}(\mathcal{Q}_{F})\) by \((i)\) and \((ii)\), i.e., if \(\mathcal{E}(P_{G})=\mathcal{E}(\mathcal{Q}_{F})\), then \(k=n\). It suffices to show that if \(k=n\), then \(\mathcal{E}(P_{G})=\mathcal{E}(\mathcal{Q}_{F})\). Indeed, if \(k=n\), then
\[\mathcal{E}(\mathcal{Q}_{F})=\sum_{i=1}^{n}|\lambda_{i}(\mathcal{Q}_{F})-\frac {\sum_{i=1}^{n}t_{i}^{F}}{n}|=\sum_{i=1}^{n}|\lambda_{i}(\mathcal{Q}_{F})- \frac{ks}{n}|.\]
From the fact \(k=n\) with Lemma 3.23 we have \(\mathcal{E}(\mathcal{Q}_{F})=\sum_{i=1}^{n}|\lambda_{i}(P_{G})|=\mathcal{E}(P _{G})\).
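For the clique partition of \(K_{2,2,2}\) used above we have \(k=4<n=6\) and \(s=3\), so case \((i)\) applies; the following sketch (Python/NumPy) shows that the bound is attained in this particular instance:

```python
import numpy as np

# Theorem 3.24(i) on K_{2,2,2} with the clique partition of Example 3.6(ii):
# n = 6, k = 4 < n, s = 3, so E(P_G) <= E(Q_F) + 2ks/n - 2s.
cliques = [{1, 2, 6}, {2, 3, 4}, {1, 3, 5}, {4, 5, 6}]
M = np.array([[1 if v in C else 0 for C in cliques] for v in range(1, 7)], float)
n, k, s = 6, 4, 3

Q_F = M @ M.T
A_P = M.T @ M - s * np.eye(k)                  # adjacency of P_G = K_4

E_P  = np.abs(np.linalg.eigvalsh(A_P)).sum()                  # E(P_G) = 6
E_QF = np.abs(np.linalg.eigvalsh(Q_F) - M.sum() / n).sum()    # E(Q_F) = 8
print(E_P, E_QF + 2 * k * s / n - 2 * s)       # approx 6.0  6.0  (bound attained here)
```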
In the following, we present a new upper bound for the energy of a graph \(G\).
**Theorem 3.25**.: _Let \(G\) be a graph of order \(n\) and the negative inertia \(\nu^{-}=\nu^{-}(G)\) and let \(t_{i}^{F}\) be the \(i^{th}\) largest clique degree associated with the clique partition \(F\), for \(1\leq i\leq n\). Then_
\[\mathcal{E}(G)\leq 2\min_{F}\sum_{i=1}^{\nu^{-}}t_{i}^{F},\]
_where the minimum is given over all clique partitions \(F\) of \(G\). Equality holds if \(G\) is a clique-regular graph associated with a minimum clique partition of size \(cp(G)=n-\nu^{-}\)._
Proof: From (15) and (10) we have
\[\mathcal{E}(G)=2\sum_{i=1}^{\nu^{-}}-\lambda_{n-i+1}\leq 2\sum_{i=1}^{\nu^{-}}t_ {i}^{F},\]
where \(t_{i}^{F}\) is \(i^{th}\) largest clique-degree of \(G\) associated with a clique partition \(F\). Since this upper bound is valid for any clique partition of \(G\), we select the optimal value, namely, \(\min_{F}2\sum_{i=1}^{\nu^{-}}t_{i}^{F}\). The second part of the proof follows directly from Theorem 3.15.
**Theorem 3.26**.: _Let \(G\) be a graph of order \(n\) with the vertex degrees \(d_{1}\geq d_{2}\geq\cdots\geq d_{n}\). Then_
\[\mathcal{E}(G)\leq 2\sum_{i=1}^{h}d_{i},\]
_where \(h=\min\{\nu^{+},\,\nu^{-}\}\)._
Proof: Considering the fact \(t_{i}^{F}\leq d_{i}\) for \(i\in[n]\) along with Theorem 3.25 gives
\[\mathcal{E}(G)\leq 2\sum_{i=1}^{\nu^{-}}d_{i}. \tag{20}\]
On the other hand, the Laplacian matrix \(L=D-\mathcal{A}\) of \(G\) is a positive semi-definite matrix, so \(\mathcal{A}\leq D\). From this with Lemma 2.1 we obtain \(\lambda_{i}\leq d_{i}\) for \(1\leq i\leq n\). Then \(\mathcal{E}(G)=2\sum_{i=1}^{\nu^{+}}\lambda_{i}\leq 2\sum_{i=1}^{\nu^{+}}d_{i}\). Using the previous inequality with (20) completes the proof.
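A quick randomized spot-check of Theorem 3.26 (an illustrative sketch in Python/NumPy; the sample size, order, and edge probability are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def check_energy_bound(n=8, p=0.4, trials=200):
    """Spot-check Theorem 3.26: E(G) <= 2 * (sum of the h largest degrees),
    where h = min(positive inertia, negative inertia), on random graphs."""
    for _ in range(trials):
        A = np.triu((rng.random((n, n)) < p).astype(float), 1)
        A = A + A.T                                   # random simple graph
        eigs = np.linalg.eigvalsh(A)
        h = min((eigs > 1e-9).sum(), (eigs < -1e-9).sum())
        degrees = np.sort(A.sum(axis=0))[::-1]
        assert np.abs(eigs).sum() <= 2 * degrees[:h].sum() + 1e-9
    print("bound held on all samples")

check_energy_bound()
```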
From Theorem 3.25 with (7) we obtain the following upper bound for the energy of \(G\):
\[\mathcal{E}(G)\leq 2\sum_{i=1}^{n-\alpha}t_{i}^{F}\leq 2\sum_{i=1}^{n-\alpha}d_{i},\]
where \(\alpha\) is the independence number of the graph \(G\). By (15) and (14), and applying a method similar to that used in the proof of Theorem 3.25, we obtain the next result.
**Theorem 3.27**.: _Let \(G\) be a graph of order \(n\) with a clique partition \(F=\{C_{1},\ldots,C_{k}\}\) and let \(|C_{i}|=s_{i}^{F}\) for \(1\leq i\leq k\) such that \(s_{1}^{F}\geq s_{2}^{F}\geq\cdots\geq s_{k}^{F}\). For the clique partition graph \(P_{G}\) of \(G\), we have_
\[\mathcal{E}(P_{G})\leq 2\min_{F}\sum_{i=1}^{\nu^{-}(P_{G})}s_{i}^{F}.\]
_Equality holds if \(G\) is a clique-uniform graph associated with a minimum clique partition of size \(cp(G)=n+\nu^{-}(P_{G})\)._
Next, we present an upper bound for the energy \(\mathcal{E}(L_{G})\) of the line graph \(L_{G}\) with a full characterization of the corresponding extreme graphs.
**Theorem 3.28**.: _Let \(G\) be a graph with the line graph \(L_{G}\). Then_
\[\mathcal{E}(L_{G})\leq 4\,\nu^{-}(L_{G}). \tag{21}\]
_Equality holds if and only if \(G\) is a graph with connected components \(G_{i}=(V_{i},E_{i})\) for \(i\geq 1\) with \(n_{i}=|V_{i}|\) and \(|E_{i}|\geq 2\), and possibly some isolated vertices or single edges. Further, each non-bipartite connected component \(G_{i}\) satisfies \(|E_{i}|>|V_{i}|\) and \(q_{n_{i}}\geq 2\), and each bipartite connected component \(G_{i}\) is either a 4-cycle or satisfies \(|E_{i}|>|V_{i}|\) and \(q_{n_{i}-1}\geq 2\)._
Proof: As previously noted, if the clique partition \(F\) of \(G\) is the same as the edge set \(E\) of \(G\), then \(s_{i}^{F}=2\) for \(i\in[m]\) and \(P_{G}\cong L_{G}\). Using this with Theorem 3.27, we have
\[\mathcal{E}(L_{G})=2\sum_{i=1}^{\nu^{-}(L_{G})}-\lambda_{m-i+1}(P_{G})\leq 2 \sum_{i=1}^{\nu^{-}(L_{G})}2=4\nu^{-}(L_{G}), \tag{22}\]
which gives the desired result in (21).
To characterize these extreme graphs in (21), we assume equality holds in (22). Then all negative eigenvalues of \(P_{G}\) must be \(-2\) by (22). We then consider the following two cases:
\(Case\)\(1)\)\(G\) is connected. First, assume that \(m>n\). If \(G\) is non-bipartite, then by Lemma 3.4, \(\lambda_{i}(L_{G})=-2\) for \(n+1\leq i\leq m\) and \(\lambda_{n}(L_{G})=q_{n}-2\neq-2\) as \(q_{n}\neq 0\). Since \(\lambda_{n}(L_{G})\) must be nonnegative, we have \(q_{n}\geq 2\). Otherwise \(G\) is bipartite and by Lemma 3.4 along with the fact \(q_{n}(G)=0\), \(\lambda_{i}(L_{G})=-2\) for \(n\leq i\leq m\) and \(\lambda_{n-1}(L_{G})=q_{n-1}-2\neq-2\) as \(q_{n-1}\neq 0\). Since \(\lambda_{n-1}(L_{G})\) must be nonnegative, it follows that \(q_{n-1}\geq 2\). Next, assume that \(m=n\). Since all negative eigenvalues of \(L_{G}\) are equal to \(-2\), we have \(\lambda_{m}(L_{G})=\lambda_{n}(L_{G})=-2\). If \(\nu^{-}=1\), then \(\mathrm{Spec}(L_{G})=\{2,0,0,-2\}\) and then \(L_{G}\) is the cycle graph \(C_{4}\) of order \(4\). Otherwise \(\nu^{-}\geq 2\), and \(\lambda_{n-1}=-2\), that is, \(q_{n-1}=0\), which is a contradiction as \(G\) is connected. Finally, assume that \(m<n\). Since \(G\) is connected it must be a tree and hence \(m=n-1\). In this case we have \(\lambda_{m}(L_{G})=\lambda_{n-1}(L_{G})=-2\), that is, \(q_{n-1}=0\), which again leads to a contradiction.
\(Case\)\(2)\) Assume \(G\) is disconnected. Since isolated vertices and single edges do not affect the negative inertia of \(L_{G}\), we may assume that \(G\) has connected components along with the possibility of some isolated vertices and single edges. Now each connected component of \(G\) can be characterized by the first case, and the proof is complete.
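For instance, for \(G=K_{4}\) the line graph satisfies \(L_{K_{4}}\cong K_{2,2,2}\) and equality holds in (21); the sketch below (Python/NumPy) also confirms that \(K_{4}\) fits the characterization (non-bipartite, \(m>n\), \(q_{n}=2\)).

```python
import numpy as np
from itertools import combinations

# G = K_4; its line graph L_G has one vertex per edge of K_4,
# two of them adjacent when the edges share an endpoint.
edges = list(combinations(range(4), 2))
m = len(edges)
A_L = np.zeros((m, m))
for a in range(m):
    for b in range(a + 1, m):
        if set(edges[a]) & set(edges[b]):
            A_L[a, b] = A_L[b, a] = 1

eigs = np.linalg.eigvalsh(A_L)
energy = np.abs(eigs).sum()
nu_minus = (eigs < -1e-9).sum()
print(energy, 4 * nu_minus)          # approx 8.0  8  -> equality in (21)

# Consistency with the characterization: K_4 is non-bipartite, m > n, and the
# smallest signless Laplacian eigenvalue of K_4 equals 2.
A = np.ones((4, 4)) - np.eye(4)
Q = np.diag(A.sum(axis=0)) + A
print(np.round(min(np.linalg.eigvalsh(Q)), 6))   # 2.0
```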
## 4 Vertex-clique incidence matrix of a graph associated with an edge clique cover
In this section, we consider a slightly more general object of the vertex-clique incidence matrix, denoted by \(M_{F}\), associated with an edge clique cover \(F\) of a graph \(G\). Recall that the \((i,j)\)-entry of \(M_{F}\) is real and nonzero if and only if the vertex \(i\) belongs to the clique \(C_{j}\in F\). To ensure \(M_{F}\,M_{F}^{T}\in S(G)\), we arrange the entries of \(M_{F}\) such that the inner product of row \(i\) and column \(j\) (when \(i\neq j\)) in \(M_{F}\) is nonzero if and only if \(ij\in E(G)\).
As noted in the introduction, studying the graph parameter \(q(G)\) represents a critical step in the much more general investigation of the inverse eigenvalue problem for graphs. As one strategy for minimizing the number of distinct eigenvalues of \(M_{F}\,M_{F}^{T}\), we instead consider minimizing the number of distinct eigenvalues of \(M_{F}^{T}\,M_{F}\), consequently achieving an upper bound on the parameter \(q(G)\). The key technique used here is to generalize the vertex-clique incidence matrix obtained from an edge clique cover by allowing arbitrary (positive or negative) real entries in \(M_{F}\), while paying careful attention to preserving the condition that \(M_{F}\,M_{F}^{T}\in S(G)\).
### Applications to the minimum distinct eigenvalues of a graph
In this section, applying the tool of the vertex-clique incidence matrix of a graph associated with its edge clique cover, we characterize a few new classes of graphs with \(q(G)=2\).
If \(G\) and \(H\) are graphs, then the Cartesian product of \(G\) and \(H\), denoted by \(G\square H\), is the graph on the vertex set \(V(G)\times V(H)\) with \((g_{1},h_{1})\) and \((g_{2},h_{2})\) adjacent if and only if either \(g_{1}=g_{2}\) and \(h_{1}\) and \(h_{2}\) are adjacent in \(H\), or \(g_{1}\) and \(g_{2}\) are adjacent in \(G\) and \(h_{1}=h_{2}\). The first statement in the next theorem can also be found in [2], however, we include a proof here to aid in establishing the second claim in the result below.
**Theorem 4.1**.: _Let \(G\cong K_{s}\square K_{2}\) with \(s\geq 3\). Then \(q(G)=2\) and \(G\) has an SSP matrix realization with two distinct eigenvalues._
Proof: Let \(M=\begin{pmatrix}M_{1}\\ M_{2}\end{pmatrix}\), where \(M_{1}=J_{s}-(s-1)I_{s}\) and \(M_{2}=J_{s}-I_{s}\). Then we have
\[A=MM^{T}=\begin{pmatrix}M_{1}\\ M_{2}\end{pmatrix}\begin{pmatrix}M_{1}^{T}&M_{2}^{T}\end{pmatrix}=\begin{pmatrix} M_{1}M_{1}^{T}&M_{1}M_{2}^{T}\\ \hline M_{2}M_{1}^{T}&M_{2}M_{2}^{T}\end{pmatrix}=\begin{pmatrix}A_{1}&(s-1)I_ {s}\\ \hline(s-1)I_{s}&A_{2}\end{pmatrix}, \tag{23}\]
where
\[A_{1}=M_{1}M_{1}^{T}=M_{1}^{2}=(s-1)^{2}I_{s}+(2-s)J_{s}\,,\ \ \ A_{2}=M_{2}M_{2}^{T}=M_{2}^{2}=I_{s}+(s-2)J_{s}\,,\]
\[M_{1}M_{2}^{T}=M_{1}M_{2}=(s-1)I_{s}.\]
From the structure of \(A\), we have \(A\in S(G)\). On the other hand,
\[M^{T}M=\begin{pmatrix}M_{1}^{T}&M_{2}^{T}\end{pmatrix}\begin{pmatrix}M_{1}\\ M_{2}\end{pmatrix}=M_{1}^{T}M_{1}+M_{2}^{T}M_{2}=(2-s)J_{s}+(s-1)^{2}I_{s}+I_{s} +(s-2)J_{s}=cI_{s},\]
where \(c=s^{2}-2s+2\). Hence \(\operatorname{Spec}(MM^{T})=\{c^{[s]},\,0^{[s]}\}\) and \(q(G)=2\).
Now, we show that the matrix \(A\) has SSP. We need to prove that the only symmetric matrix satisfying \(A\circ X=O\), \(I\circ X=O\), and \([A,\,X]=AX-XA=O\) is \(X=O\).
From the two equations \(A\circ X=O\), \(I\circ X=O\), \(X\) must have the following form: \(X=\begin{pmatrix}O&X_{1}\\ \hline X_{1}^{T}&O\end{pmatrix}\), where \(X_{1}=\begin{pmatrix}0&x_{12}&\ldots&x_{1s}\\ x_{21}&0&&x_{2s}\\ \vdots&\ddots&\ddots&\vdots\\ x_{s1}&x_{s2}&\ldots&0\end{pmatrix}\). The equality \(AX=XA\) gives \(X_{1}=X_{1}^{T}\). Also, we have \(A_{1}X_{1}=X_{1}A_{2}\), i.e., \([(s-1)^{2}I_{s}+(2-s)J_{s}]X_{1}=X_{1}[I_{s}+(s-2)J_{s}]\). Hence \(sX_{1}=X_{1}J_{s}+J_{s}X_{1}\). Then \((sX_{1})_{ij}=(X_{1}J_{s}+J_{s}X_{1})_{ij}\) for \(i,j\in[s]\). Considering \(i=j=1\), we have \((sX_{1})_{11}=0\) and \((X_{1}J_{s}+J_{s}X_{1})_{11}=2\sum_{j=1}^{s}x_{1j}\), and then \(\sum_{j=1}^{s}x_{1j}=0\). Considering \((i,j)=(k,k)\) for \(2\leq k\leq s\) we arrive at \(\sum_{j=1}^{s}x_{kj}=0\) for \(2\leq k\leq s\). This means that the row and column sums in \(X_{1}\) are equal to zero. Now, consider \(i,j\in[s]\) where \(i\neq j\). We have
\[sx_{ij}=(sX_{1})_{ij}=(X_{1}J_{s}+J_{s}X_{1})_{ij}=(X_{1}J_{s})_{ij}+(J_{s}X_{ 1})_{ij}=\sum_{k=1}^{s}x_{ik}+\sum_{k=1}^{s}x_{jk}=0.\]
Thus \(X_{1}=O_{s}\) and consequently, \(X=O\). Hence the proof is complete.
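The construction in the proof is explicit, so it can be verified numerically for small \(s\) (a sketch in Python/NumPy; the function name is ours):

```python
import numpy as np

def two_eigenvalue_realization(s):
    """Matrix A = M M^T from the proof of Theorem 4.1 for K_s (Cartesian product) K_2."""
    J, I = np.ones((s, s)), np.eye(s)
    M = np.vstack([J - (s - 1) * I, J - I])
    return M @ M.T

for s in (3, 4, 5):
    A = two_eigenvalue_realization(s)
    eigs = np.round(np.linalg.eigvalsh(A), 8)
    # Spectrum should be {c^[s], 0^[s]} with c = s^2 - 2s + 2.
    print(s, sorted(set(eigs)), s * s - 2 * s + 2)
```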
**Corollary 4.2**.: _For even \(n\), we have \(q(\overline{C_{n}})=2\)._
Proof: Let \(H\) be the graph obtained from the complete bipartite graph \(K_{n/2,n/2}\) by removing a perfect matching, and note that \(K_{n}\backslash H\cong K_{n/2}\square K_{2}\). Then by Theorem 4.1 and Lemma 2.3, \(q(K_{n}\backslash H^{\prime})=2\) for \(H^{\prime}=H\) and for any subgraph \(H^{\prime}\) of \(H\). Considering this with the fact that \(C_{n}\) is a subgraph of \(H\), the result is obtained.
**Theorem 4.3**.: _Let \(G\) be a graph obtained from \((K_{s}\square K_{2})\lor sK_{1}\) by removing a perfect matching between \(sK_{1}\) and a copy of \(K_{s}\). Then \(q(G)=2\) and \(G\) has an SSP matrix realization with two distinct eigenvalues._
Proof: Let \(M=\begin{pmatrix}M_{1}\\ M_{2}\\ I_{s}\end{pmatrix}\), where \(M_{1}=J_{s}-(s-1)I_{s}\) and \(M_{2}=J_{s}-I_{s}\). Considering the fact that \(M_{1}\) and \(M_{2}\) are symmetric, we have
\[A=MM^{T}=\begin{pmatrix}M_{1}\\ M_{2}\\ I_{s}\end{pmatrix}\begin{pmatrix}M_{1}^{T}&M_{2}^{T}&I_{s}\end{pmatrix}=\begin{pmatrix}M_{1}M_{1}^{T}&M_{1}M_{2}^{T}&M_{1}I_{s}\\ \hline M_{2}M_{1}^{T}&M_{2}M_{2}^{T}&M_{2}I_{s}\\ \hline M_{1}&M_{2}&I_{s}\end{pmatrix}=\begin{pmatrix}A_{1}&(s-1)I_{s}&M_{1}\\ \hline(s-1)I_{s}&A_{2}&M_{2}\\ \hline M_{1}&M_{2}&I_{s}\end{pmatrix},\]
where
\[A_{1}=M_{1}M_{1}^{T}=M_{1}^{2}=(s-1)^{2}I_{s}+(2-s)J_{s}\,,\ \ \ A_{2}=M_{2}M_{2}^{T}=M_{2}^{2}=I_{s}+(s-2)J_{s}\,,\]
\[M_{1}M_{2}^{T}=M_{1}M_{2}=(s-1)I_{s}.\]
From the structure of \(A\), we have \(A\in S(G)\). On the other hand,
\[M^{T}M=\begin{pmatrix}M_{1}^{T}&M_{2}^{T}&I_{s}\end{pmatrix}\begin{pmatrix}M_{1}\\ M_{2}\\ I_{s}\end{pmatrix}=M_{1}^{T}M_{1}+M_{2}^{T}M_{2}+I_{s}^{2}=(2-s)J_{s}+(s-1)^{2} I_{s}+I_{s}+(s-2)J_{s}+I_{s}=cI_{s},\]
where \(c=s^{2}-2s+3\). This gives \(\text{Spec}(MM^{T})=\{c^{[s]},\ 0^{[2s]}\}\), which proves \(q(G)=2\).
Now, we show that the matrix \(A\) has SSP. We need to prove that the only symmetric matrix satisfying \(A\circ X=O\), \(I\circ X=O\), and \([A,\,X]=AX-XA=O\) is \(X=O\).
From the two equations \(A\circ X=O\), \(I\circ X=O\), \(X\) must have the following form: \(X=\begin{pmatrix}O&X_{1}&O\\ \hline X_{1}^{T}&O&X_{2}\\ \hline O&X_{2}&X_{3}\end{pmatrix}\), where \(X_{1}=\begin{pmatrix}0&x_{12}&\ldots&x_{1s}\\ x_{21}&0&&x_{2s}\\ \vdots&\ddots&\ddots&\vdots\\ x_{s1}&x_{s2}&\ldots&0\end{pmatrix}\), \(X_{2}=diag(y_{1},\ldots,y_{s})\) and \(X_{3}=\begin{pmatrix}0&z_{12}&\ldots&z_{1s}\\ z_{12}&0&&z_{2s}\\ \vdots&\ddots&\ddots&\vdots\\ z_{1s}&z_{2s}&\ldots&0\end{pmatrix}\). The matrix equation
\[AX=XA \tag{24}\]
gives \(X_{1}=X_{1}^{T}\). From (24) we also have \(M_{2}X_{2}+X_{3}=X_{2}M_{2}+X_{3}\), i.e., \((J_{s}-I_{s})X_{2}=X_{2}(J_{s}-I_{s})\), i.e., \(J_{s}X_{2}=X_{2}J_{s}\). This gives \(y_{1}=y_{2}=\cdots=y_{s}\), i.e., \(X_{2}=y_{1}I_{s}\).
Again from (24), we have \(A_{1}X_{1}+M_{1}X_{2}=X_{1}A_{2}\), that is, \(M_{1}X_{2}=X_{1}A_{2}-A_{1}X_{1}\), that is, \((J_{s}-(s-1)I_{s})(y_{1}I_{s})=X_{1}(I_{s}+(s-2)J_{s})-((s-1)^{2}I_{s}+(2-s)J_{ s})X_{1}\), i.e.,
\[y_{1}(1-s)I_{s}+y_{1}J_{s}=(2s-s^{2})X_{1}+(s-2)X_{1}J_{s}+(s-2)J_{s}X_{1}.\]
Considering a main diagonal entry, say \((i,i)\), in the above matrix equation, we obtain
\[\sum_{j=1}^{s}x_{ij}=-\frac{y_{1}}{2}. \tag{25}\]
Considering the \((i,j)\)-entry with \(i\neq j\) in the above matrix equation, we obtain \(x_{ij}=-y_{1}\frac{s-1}{s(s-2)}\). From the above and (25), \(y_{1}=0\), that is, \(X_{2}=O\). Using the equation \(A_{1}X_{1}+M_{1}X_{2}=X_{1}A_{2}\), we arrive at the matrix equation \(A_{1}X_{1}=X_{1}A_{2}\). Following a similar argument as in the proof of Theorem 4.1 we obtain \(X_{1}=O\).
Again from (24), we have \(M_{1}X_{1}+X_{2}=X_{2}A_{2}+X_{3}M_{2}\). Since \(X_{1}=X_{2}=O\), we get \(X_{3}M_{2}=O\), i.e. \(X_{3}=X_{3}J_{s}\). Considering both the \((i,i)\) and \((i,j)\) entries from the matrix equation, we arrive at \(\sum_{k=1}^{s}z_{ik}=0\) and \(z_{ij}=\sum_{k=1}^{s}z_{ik}=0\), that is, \(X_{3}=O\), which gives \(X=O\).
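The analogous numerical check for this construction (a Python/NumPy sketch mirroring the one after Theorem 4.1; the function name is ours):

```python
import numpy as np

def realization_theorem_4_3(s):
    """A = M M^T from the proof of Theorem 4.3, built from the three blocks
    M_1 = J - (s-1)I, M_2 = J - I and I_s stacked vertically."""
    J, I = np.ones((s, s)), np.eye(s)
    M = np.vstack([J - (s - 1) * I, J - I, I])
    return M @ M.T

for s in (3, 4, 5):
    eigs = np.round(np.linalg.eigvalsh(realization_theorem_4_3(s)), 8)
    # Spectrum should be {c^[s], 0^[2s]} with c = s^2 - 2s + 3.
    print(s, sorted(set(eigs)), s * s - 2 * s + 3)
```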
**Corollary 4.4**.: _Let \(H\) be the graph obtained from the complete bipartite graph \(K_{s,s}\) by removing a perfect matching and then adding a copy of \(K_{s}\) in which each vertex of the \(K_{s}\) is adjacent to the corresponding vertex of one of the two partite sets. Then \(q(\overline{H})=2\). Moreover, the result holds for any subgraph of \(H\) on the same vertex set._
In [34], the authors studied the problem of graphs requiring property \(p(r,s)\). A graph \(G\) has \(p(r,s)\) if it contains a path of length \(r\) and every path of length \(r\) is contained in a cycle of length \(s\). They prove that the smallest integer \(m\) such that every graph on \(n\) vertices with \(m\) edges has \(p(2,4)\) (that is, each path of length \(2\) is contained in either a \(3\)-cycle or a \(4\)-cycle) is \(\binom{n}{2}-(n-4)\) for all \(n\geq 5\). Using this, it was noted in [5] that the above equation from [34] implies that the fewest number of edges required to guarantee that all graphs \(G\) on \(n\) vertices satisfy \(q(G)=2\) is at least \(\binom{n}{2}-(n-3)\). For small values of \(n\), it is known that equality in fact holds in the previous claim. Namely, if at most \(n-3\) edges are removed from the complete graph \(K_{n}\) with \(n\leq 7\), then the resulting graph has a matrix realization with two distinct eigenvalues. Along these lines, and based on [5], the following is a natural conjecture:
**Conjecture 4.5**.: _Removing up to \(n-3\) edges from \(K_{n}\) does not change the number of distinct eigenvalues of \(K_{n}\). That is, for any subgraph \(H\) of \(K_{n}\) with \(|E(H)|\leq n-3\)_
\[q(K_{n}\backslash H)=2.\]
We confirm Conjecture 4.5 for \(n=7,8\) and note that our analysis of the case \(n=7\) differs slightly from [5]. For this, we need the next few lemmas.
**Lemma 4.6**.: _Let \(T_{1}\) be the tree given in Figure 3. We have \(q(\overline{T_{1}})=2\) and \(\overline{T_{1}}\) has an SSP matrix realization with two distinct eigenvalues._
Proof: Consider the \(7\times 4\) matrix \(M_{1}\) as follows:
\[M_{1}=\left(\begin{array}{cccc}1&-2&2&1\\ 2&-1&-2&2\\ 2&2&1&2\\ 1&2&2&0\\ -2&-1&2&0\\ 2&-2&1&0\\ 1&0&0&0\end{array}\right).\]
Using the Gram-Schmidt method we can arrive at a column orthonormal matrix \(M_{2}\). In this case we have \(A=M_{2}M_{2}^{T}\in S(\overline{T_{1}})\). Also \(M_{2}^{T}M_{2}=I_{4}\) and then \(\text{Spec}(A)=\{1^{[4]},\,0^{[3]}\}\). This proves that \(q(\overline{T_{1}})=2\). Furthermore, \(A\) has SSP (this can be confirmed using SageMath), and by Lemma 2.3, the complement of any subgraph of \(T_{1}\) on the same vertex set also has a matrix realization with two distinct eigenvalues.
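The Gram-Schmidt step can be reproduced with a reduced QR factorization, which yields a column-orthonormal matrix with the same column space as \(M_{1}\); the sketch below (Python/NumPy) confirms the claimed spectrum \(\{1^{[4]},0^{[3]}\}\). (Checking that the zero pattern of \(A\) matches \(\overline{T_{1}}\) additionally requires the vertex labelling of Figure 3, which this sketch does not reproduce.)

```python
import numpy as np

# The 7 x 4 matrix M_1 from the proof of Lemma 4.6.
M1 = np.array([
    [ 1, -2,  2, 1],
    [ 2, -1, -2, 2],
    [ 2,  2,  1, 2],
    [ 1,  2,  2, 0],
    [-2, -1,  2, 0],
    [ 2, -2,  1, 0],
    [ 1,  0,  0, 0]], float)

M2, _ = np.linalg.qr(M1)      # column-orthonormal, same column space as M_1
A = M2 @ M2.T                 # orthogonal projector onto col(M_1)

eigs = np.round(np.linalg.eigvalsh(A), 8)
print(sorted(set(eigs)))      # [0.0, 1.0]: spectrum {1^[4], 0^[3]}, hence q = 2
```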
Figure 4: The graph \(G\).
Figure 3: Tree \(T_{1}\).
**Lemma 4.7**.: _Let \(G\cong K_{1,3}\cup K_{3}\). Then \(q(\overline{G})=2\) and \(\overline{G}\) has an SSP matrix realization with two distinct eigenvalues._
Proof: Consider the \(7\times 3\) matrix \(M_{1}\) corresponding to the labeled graph \(G\) given in Figure 4 as follows:
\[M_{1}=\left(\begin{array}{cccc}1&2&2\\ 2&1&-2\\ 2&-2&1\\ 1&1&1\\ 1&-1&1\\ -\sqrt{2}&0&\sqrt{2}\\ 0&\sqrt{2}&0\end{array}\right).\]
\(A=M_{1}M_{1}^{T}\in S(\overline{G})\). Also \(M_{1}^{T}M_{1}=13I_{3}\) and then \(\mathrm{Spec}(A)=\{13^{[3]},\,0^{[4]}\}\). This proves that \(q(\overline{G})=2\). Furthermore, \(A\) has SSP (a computation that can be verified by SageMath), and by Lemma 2.3, the complement of any subgraph of \(G\) on the same vertex set also has a matrix realization with two distinct eigenvalues.
We now verify that Conjecture 4.5 holds for \(n=7\).
**Theorem 4.8**.: _Removing up to \(4\) edges from \(K_{7}\) does not change the number of distinct eigenvalues of \(K_{7}\), i.e., for any subgraph \(H\) of \(K_{7}\) on 7 vertices, with \(|E(H)|\leq 4\) we have_
\[q(K_{7}\backslash H)=2.\]
Proof: It suffices to show that \(\overline{H}\) has a matrix realization with two distinct eigenvalues for every graph \(H\) in Figure 5. Suppose that the graphs in Figure 5 are denoted by \(H_{i}\) for \(i\in[10]\) from left to right in each row. Then the graphs \(H_{i}\) for \(i=1,3,7,8,10\) are the union of complete bipartite graphs with some isolated vertices. By Lemma 2.4 (2), the complements of these graphs and of any of their subgraphs have a matrix realization with two distinct eigenvalues. Also \(q(\overline{H_{i}})=2\) for \(i=4,5,9\) and, for any subgraph \(H_{i}^{\prime}\) of \(H_{i}\), \(q(\overline{H_{i}^{\prime}})=2\) by Lemma 4.6. Moreover, \(q(\overline{H_{6}})=2\) and, for any subgraph \(H_{6}^{\prime}\) of \(H_{6}\), \(q(\overline{H_{6}^{\prime}})=2\) by Lemma 4.7. Additionally, from Lemmas 4.6 and 4.7 such realizations exist with the SSP. Hence the complement of any subgraph of these graphs has a matrix realization with two distinct eigenvalues. To complete the proof, we only need to show that the complement graph of \(H_{2}\) has a matrix realization with two distinct eigenvalues with the SSP. To this end, consider the \(7\times 3\) matrix \(M_{1}\) as follows:
\[M_{1}=\left(\begin{array}{cccc}1&-2&1\\ 2&-1&2\\ 2&2&2\\ 1&2&0\\ -2&-1&0\\ 2&-2&0\\ 1&0&0\end{array}\right).\]
Using the Gram-Schmidt method we can arrive at a column orthonormal matrix \(M_{2}\). We have \(A=M_{2}M_{2}^{T}\in S(\overline{H_{2}})\). Also \(M_{2}^{T}M_{2}=I_{3}\) and then \(\operatorname{Spec}(A)=\{1^{[3]},\,0^{[4]}\}\). Hence \(q(\overline{H_{2}})=2\). Furthermore, \(A\) has SSP (a computation that can be verified by SageMath), and by Lemma 2.3, the complement of any subgraph of \(H_{2}\) on the same vertex set also has a matrix realization with two distinct eigenvalues.
We require the following results to confirm Conjecture 4.5 for \(n=8\).
**Lemma 4.9**.: _Let \(G\cong H_{1}\cup 2K_{1}\), where \(H_{1}\) is the graph on the left given in Figure 6. Then \(q(\overline{G})=2\) and \(\overline{G}\) has an SSP matrix realization with two distinct eigenvalues._
Proof: Given \(G\) as assumed, it can be shown without too much difficulty that \(\overline{G}\cong(H_{2}\lor K_{3})-e\), where \(H_{2}\) is the graph on the right in Figure 6 and \(e\) is an edge with one endpoint in \(K_{3}\) and the other endpoint a vertex of degree three in \(H_{2}\). Suppose \(M=\begin{pmatrix}M_{1}\\ M_{2}\end{pmatrix}\) is a vertex-clique incidence matrix of \(\overline{G}\), where the blocks \(M_{1}\) and \(M_{2}\) are vertex-clique incidence matrices corresponding to the graphs \(H_{2}\) and \(K_{3}\), that is, \(MM^{T}\in S(\overline{G}).\) From (23) we have \(M_{1}M_{1}^{T}\in S(H_{2})\) and \(M_{2}M_{2}^{T}\in S(K_{3})\). On the other hand, we have
\[M^{T}M=M_{1}^{T}M_{1}+M_{2}^{T}M_{2}. \tag{26}\]
Figure 5: All graphs with 7 vertices and 4 edges.
Consider a vertex-clique incidence matrix \(M_{1}\) as follows:
\[M_{1}=\left(\begin{array}{ccc}1&0&0\\ 1&0&1\\ 1&1&0\\ 0&\sqrt{2}&0\\ 0&0&\sqrt{2}\end{array}\right).\]
Then we have \(M_{1}M_{1}^{T}\in S(H_{2})\) and \(M_{1}^{T}M_{1}=\left(\begin{array}{ccc}3&1&1\\ 1&3&0\\ 1&0&3\end{array}\right)\). Given \(M_{1}\) above, the remainder of the proof is devoted to constructing a matrix \(M_{2}\) so that following (26) we have \(M^{T}M=cI_{3}\), for some scalar \(c\). Consider a matrix \(M_{2}\) so that
\[M_{2}^{T}M_{2}=\left(\begin{array}{ccc}a&-1&-1\\ -1&a&0\\ -1&0&a\end{array}\right), \tag{27}\]
where \(a\) is a constant. Suppose the matrix \(M_{2}=\left(\begin{array}{ccc}x_{1}&y_{1}&z_{1}\\ x_{2}&y_{2}&z_{2}\\ x_{3}&y_{3}&z_{3}\end{array}\right)\). This with (27) leads to the following equations:
\[x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=y_{1}^{2}+y_{2}^{2}+y_{3}^{2}=z_{1}^{2}+z_{2}^{2 }+z_{3}^{2}=a,\]
\[x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}=-1,\ \ x_{1}z_{1}+x_{2}z_{2}+x_{3}z_{3}=-1, \ \ y_{1}z_{1}+y_{2}z_{2}+y_{3}z_{3}=0.\]
Solving this system of non-linear equations, we have a candidate matrix \(M_{2}\): \(M_{2}=\left(\begin{array}{ccc}1&-1&z_{1}\\ -1&2&z_{2}\\ 2&1&z_{3}\end{array}\right)\),
Figure 6: The graphs \(H_{1}\) (left) and \(H_{2}\) (right).
where \(z_{1}=\frac{1}{7}(2\sqrt{51}-1)\), \(z_{2}=\frac{1}{35}(6\sqrt{51}+4)\), and \(z_{3}=\frac{-1}{35}(2\sqrt{51}+13)\). Thus
\[M=\left(\begin{array}{cccc}1&0&0\\ 1&0&1\\ 1&1&0\\ 0&\sqrt{2}&0\\ 0&0&\sqrt{2}\\ \hline 1&-1&z_{1}\\ -1&2&z_{2}\\ 2&1&z_{3}\end{array}\right).\]
It is obvious that \(MM^{T}\in S(\overline{G})\) and \(M^{T}M=9I_{3}\). Then by the fact that matrices \(AB\) and \(BA\) have same nonzero eigenvalues, we have \(\mathrm{Spec}(MM^{T})=\{9^{[3]},\,0^{[5]}\}\), and then \(q(\overline{G})=2\). Moreover, applying a basic computation from SageMath, we can confirm that \(MM^{T}\) has SSP and this completes the proof.
By Lemma 4.9, \(\overline{G}\) has an SSP realization \(A=MM^{T}\) with two distinct eigenvalues. Then by Lemma 2.3, any supergraph on the same vertex set as \(G\) has a realization with the same spectrum as \(A\). In particular, \(q(H_{2}\lor K_{3})=2\). This is stated in the following corollary.
**Corollary 4.10**.: _Let \(G\cong H_{2}\cup 3K_{1}\), where \(H_{2}\) is the right graph given in Figure 6. Then \(q(\overline{G})=2\) and \(\overline{G}\) has an SSP matrix realization with two distinct eigenvalues._
**Lemma 4.11**.: _Let \(G\cong H_{3}\cup 3K_{1}\), where \(H_{3}\) is obtained from \(C_{5}\) by joining a vertex to any vertex in \(C_{5}\). Then \(q(\overline{G})=2\) and \(\overline{G}\) has an SSP matrix realization with two distinct eigenvalues._
Proof: We know that \(\overline{G}\cong(C_{5}\lor K_{3})-e\), where \(e\) is an edge with one endpoint in \(K_{3}\) and the other in \(C_{5}\). Suppose \(M=\begin{pmatrix}M_{1}\\ M_{2}\end{pmatrix}\), is a vertex-clique incidence matrix of \(\overline{G}\), where blocks \(M_{1}\) and \(M_{2}\) are vertex-clique incidence matrices corresponding to graphs \(C_{5}\) and \(K_{3}\), that is, \(MM^{T}\in S(\overline{G})\). From (23) we have \(M_{1}M_{1}^{T}\in S(C_{5})\) and \(M_{2}M_{2}^{T}\in S(K_{3})\). On the other hand, we also have the equations in (26). Now, we consider a vertex-clique incidence matrix \(M_{1}\) as follows:
\[M_{1}=\left(\begin{array}{cccc}1&0&0\\ 1&1&0\\ -1&1&1\\ 0&-1&1\\ 0&0&1\end{array}\right).\]
Then \(M_{1}M_{1}^{T}\in S(C_{5})\) and \(M_{1}^{T}M_{1}=\left(\begin{array}{cccc}3&0&-1\\ 0&3&0\\ -1&0&3\end{array}\right)\). Given \(M_{1}\) above, the remainder of the proof is devoted to constructing a matrix \(M_{2}\) so that following (26) we have \(M^{T}M=cI_{3}\), for some scalar
\(c\). We need to create a matrix \(M_{2}\) so that
\[M_{2}^{T}M_{2}=\left(\begin{array}{ccc}a&0&1\\ 0&a&0\\ 1&0&a\end{array}\right), \tag{28}\]
where \(a\) is a constant. Suppose \(M_{2}=\left(\begin{array}{ccc}x_{1}&y_{1}&z_{1}\\ x_{2}&y_{2}&z_{2}\\ x_{3}&y_{3}&z_{3}\end{array}\right)\). This with (28) leads to the following equations:
\[x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=y_{1}^{2}+y_{2}^{2}+y_{3}^{2}=z_{1}^{2}+z_{2}^{2 }+z_{3}^{2}=a,\]
\[x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}=0,\ \ x_{1}z_{1}+x_{2}z_{2}+x_{3}z_{3}=1,\ \ y_{1}z_{1}+y_{2}z_{2}+y_{3}z_{3}=0.\]
Solving these non-linear equations we have \(M_{2}=\left(\begin{array}{ccc}\frac{1}{\sqrt{3}}&0&\frac{1}{\sqrt{3}}\\ \frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{3}}\\ \frac{1}{\sqrt{3}}&\frac{-1}{\sqrt{2}}&\frac{1}{\sqrt{3}}\end{array}\right)\). Thus we have
\[M=\left(\begin{array}{ccc}1&0&0\\ 1&1&0\\ -1&1&1\\ 0&-1&1\\ 0&0&1\\ \hline\frac{1}{\sqrt{3}}&0&\frac{1}{\sqrt{3}}\\ \frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{3}}\\ \frac{1}{\sqrt{3}}&\frac{-1}{\sqrt{2}}&\frac{1}{\sqrt{3}}\end{array}\right).\]
It is clear that \(MM^{T}\in S(\overline{G})\) and \(M^{T}M=4I_{3}\). Then by the fact that the matrices \(AB\) and \(BA\) have the same nonzero eigenvalues, we have \(\mathrm{Spec}(MM^{T})=\{4^{[3]},\,0^{[5]}\}\), and \(q(\overline{G})=2\). Moreover, applying a basic computation from SageMath, it follows that \(MM^{T}\) has SSP and this completes the proof.
By Lemma 4.11, \(\overline{G}\) has an SSP realization \(A=MM^{T}\) with two distinct eigenvalues. By Lemma 2.3, any supergraph on the same set of vertices as \(G\) has a matrix realization with the same spectrum as \(A\). Thus \(q(C_{5}\lor K_{3})=2\). This is stated in the following corollary.
**Corollary 4.12**.: _Let \(G\cong C_{5}\cup 3K_{1}\). Then \(q(\overline{G})=2\) and \(\overline{G}\) has an SSP matrix realization with two distinct eigenvalues._
**Proposition 4.13**.: _Let \(G\cong K_{3}\cup K_{1,n-4}\), where \(n\geq 7\). Then \(q(\overline{G})=2\) and \(\overline{G}\) has an SSP matrix realization with two distinct eigenvalues._
Proof: We show that the complement of \(G\) has a matrix realization with two distinct eigenvalues with the SSP. Consider \(n\times 3\) matrix \(M_{1}\) with rows labeled as given in Figure 7 for \(n=8\):
\[M_{1}=\left(\begin{array}{cccc}1&2&2\\ 2&1&-2\\ 2&-2&1\\ -\sqrt{2}&0&\sqrt{2}\\ 0&\sqrt{\frac{2}{n-4}}&0\\ \vdots&\vdots&\vdots\\ 0&\sqrt{\frac{2}{n-4}}&0\end{array}\right).\]
We have \(A=M_{1}M_{1}^{T}\in S(\overline{G})\). Also \(M_{1}^{T}M_{1}=11\,I_{3}\) and then \(\mbox{Spec}(A)=\{11^{[3]},\,0^{[n-3]}\}\). This proves that \(q(\overline{G})=2\). To verify that \(A\) has SSP, suppose \(X\) is a symmetric matrix such that \(A\circ X=O\), \(I\circ X=O\), and \([A,\,X]=AX-XA=O\). Note that verifying \([A,\,X]=AX-XA=O\) is equivalent to proving that \(AX\) is symmetric. Now assume that \(X\) has the form:
\[X=\left(\begin{array}{c|c}0&O&x^{T}\\ \hline O&X_{1}&O\\ x&O&O\end{array}\right),\mbox{ where }X_{1}=\left(\begin{array}{ccc}0&a&b \\ a&0&c\\ b&c&0\end{array}\right),\]
and \(x\) is a (possibly) nonzero vector of size \(n-4\). Since \(AX\) is symmetric, comparing the (1,3) and (3,1) blocks of \(AX\) we note that \(\alpha Jx=4x\). So if we set \(\beta=I\!\!I^{T}x\), then \(x=\frac{\alpha}{4}\beta I\!\!I\). Comparing the (1,2) and (2,1) blocks of \(AX\) gives
\[2\sqrt{\alpha}\beta=-4\sqrt{2}a-\sqrt{2}b=-\sqrt{2}b+4\sqrt{2}c,\mbox{ and } \sqrt{\alpha}\beta=\sqrt{2}a-\sqrt{2}c.\]
Hence it follows that \(a=-c\) and \(\beta=\frac{2\sqrt{2}a}{\sqrt{\alpha}}\). Finally, comparing the (2,3) and (3,2) blocks of \(AX\), we have
\[a\sqrt{\alpha}-2b\sqrt{\alpha}=2a\sqrt{\alpha}-2c\sqrt{\alpha}=2b\sqrt{\alpha }+c\sqrt{\alpha}=\left(\frac{\alpha}{4}\beta\right)^{2}=\frac{a^{2}}{2\alpha}.\]
Figure 7: The graph \(G\).
From the above equations we deduce that \(b=-\frac{3}{2}a\). Substituting the equations \(a=-c\), \(\beta=\frac{2\sqrt{2}a}{\sqrt{\alpha}}\), and \(b=-\frac{3}{2}a\) into the equation \(2\sqrt{\alpha}\beta=-\sqrt{2}b+4\sqrt{2}c\) yields \(4\sqrt{2}a=\frac{3}{\sqrt{2}}a-4\sqrt{2}a\). Assuming \(a\neq 0\) leads to an immediate contradiction. Thus \(a=0\), and it follows, based on the analysis above, that \(X=0\). Hence \(A\) has the SSP. Using the fact that this matrix realization has the SSP together with Lemma 2.3, it follows that the complement of any subgraph of \(G\) on the same vertex set also has a matrix realization with two distinct eigenvalues.
**Lemma 4.14**.: _Let \(G\) be the graph given in Figure 8. Then \(q(\overline{G})=2\) and \(\overline{G}\) has an SSP matrix realization with two distinct eigenvalues._
Proof: We show that the complement graph of \(G\) has a matrix realization with two distinct eigenvalues with the SSP. To do this, first we consider \(8\times 3\) matrix \(M\) as follows:
\[M=\left(\begin{array}{cccc}\sqrt{\frac{15}{2}}&0&0\\ 0&1&1\\ 0&1&1\\ 0&1&2\\ 0&-2&1\\ 1&-1&0\\ 1&0&1\\ \sqrt{\frac{1}{2}}&\sqrt{2}&-\sqrt{2}\end{array}\right).\]
We have \(A=MM^{T}\in S(\overline{G})\). Also \(M^{T}M=10\,I_{3}\) so \(\operatorname{Spec}(A)=\{10^{[3]},\,0^{[5]}\}\). This proves that \(q(\overline{G})=2\). Furthermore, \(A\) has SSP (observed using SageMath) and by Lemma 2.3, the complement of any subgraph of \(G\) on the same vertex set has a matrix realization having two distinct eigenvalues.
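The SSP verifications quoted above are delegated to SageMath. As a self-contained alternative, the sketch below (our own utility with a numerical tolerance that we choose; it is not the authors' code) tests the Strong Spectral Property directly from its definition: \(A\) has the SSP if the only symmetric \(X\) with \(I\circ X=O\), \(A\circ X=O\) and \(AX=XA\) is \(X=O\). The admissible \(X\) are parametrized by the off-diagonal positions where \(A\) vanishes, and the linear map \(X\mapsto AX-XA\) must have a trivial kernel.

```python
import numpy as np

def has_ssp(A, tol=1e-8):
    """Numerical SSP test for a symmetric matrix A (floating-point, tolerance-based)."""
    n = A.shape[0]
    # Free positions: off-diagonal pairs (i, j) with A[i, j] = 0.
    free = [(i, j) for i in range(n) for j in range(i + 1, n) if abs(A[i, j]) < tol]
    if not free:
        return True  # only X = 0 is admissible
    cols = []
    for i, j in free:
        E = np.zeros((n, n))
        E[i, j] = E[j, i] = 1.0
        cols.append((A @ E - E @ A).reshape(-1))
    S = np.column_stack(cols)
    # SSP holds iff [A, X] = 0 forces every free entry of X to vanish.
    return np.linalg.matrix_rank(S, tol=tol) == len(free)

# Example use: apply has_ssp to the realizations A = M M^T built in Lemmas 4.9, 4.11 and 4.14.
```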
Now we are in a position to establish that Conjecture 4.5 holds for \(n=8\).
Figure 8: The graph \(G\).
**Theorem 4.15**.: _Removing up to \(5\) edges from \(K_{8}\) does not change the number of distinct eigenvalues of \(K_{8}\), i.e., for any subgraph \(H\) on 8 vertices of \(K_{8}\) with \(|E(H)|\leq 5\),_
\[q(K_{8}\backslash H)=2.\]
Proof: It suffices to show that \(\overline{H}\), for any graph \(H\) in Figure 9, has a matrix realization with two distinct eigenvalues. Suppose that the graphs in Figure 9 are denoted by \(H_{i}\) for \(i\in[24]\) from left to right in each row. The graphs \(H_{i}\) for \(i=1,2,9,10,15,22,23\) are unions of complete bipartite graphs with some isolated vertices. By Lemma 2.4 (2), the complements of these graphs and of any subgraphs of these graphs have a matrix realization with two distinct eigenvalues. Also \(q(\overline{H_{i}})=2\) for \(i=5,11,12,16,17,18,19,20,24\) and for any subgraph \(H^{\prime}_{i}\) of \(H_{i}\), \(q(\overline{H^{\prime}_{i}})=2\) by Theorem 4.1. For \(i=3,7,8,13,14\), we have \(q(\overline{H_{i}})=2\) and for any subgraph \(H^{\prime}_{i}\) of \(H_{i}\), \(q(\overline{H^{\prime}_{i}})=2\) by Lemma 4.14. Additionally, from Theorem 4.1 and Lemma 4.14 such realizations exist with the SSP. Hence the complement of any subgraph of these graphs has a matrix realization with two distinct eigenvalues.
Further \(q(\overline{H_{21}})=q(\overline{(2K_{2}\cup K_{1})\cup K_{3}})=q(G\lor 3K_{1})=2\) by Lemma 2.5, where the graph \(G=\overline{2K_{2}\cup K_{1}}=K_{2,2}\lor K_{1}\) is connected. If we remove any edges in \(H_{21}\) from the triangle, then the complement of the resulting graph has a matrix realization with two distinct eigenvalues by Lemma 2.4 (2), and if we remove any edges in \(H_{21}\) from outside the triangle, again by Lemma 2.5 we can see that the complement of the resulting graph has a matrix realization with two distinct eigenvalues. We have \(q(\overline{H_{4}})=2\) and the complement of any subgraph of this graph has a matrix realization with two distinct eigenvalues, by Corollary 4.10. Moreover, \(q(\overline{H_{6}})=2\), and the complement of any subgraph of this graph also has a matrix realization with two distinct eigenvalues, by Corollary 4.12. This completes the proof of the theorem.
## 5 Concluding remarks and open problems
In this work, we utilized the notions of a clique partition and an edge clique cover of a graph to introduce and explore the various properties of a vertex-clique incidence matrix of the graph, which can be viewed as a generalization of the vertex-edge incidence matrix. Using these new incidence matrices, we obtained sharp and interesting lower bounds concerning the negative eigenvalues and thus the negative inertia of a graph, and we generalized the notion of the line graph of a graph by introducing the clique partition graph of the given graph. Additionally, we determined the relations between the spectrum of a graph and its clique partition graph. Further, we generalized the notions of incidence energy and signless Laplacian energy of a graph and provided some novel upper bounds for the energies of a graph, its clique partition graph, and the line graph. Finally, applying a general version of a vertex-clique incidence matrix of a graph associated with its edge clique cover, we were able to characterize a few classes of graphs with \(q(G)=2\). To close, we list two important unresolved problems related to the content of the current work.
**Problem 1:** Characterize the corresponding extreme graphs for which the inequalities given in (4), (6), (10), (13), and (14) hold with equality.
**Problem 2:** Prove that Conjecture 4.5 is valid for any graph \(G\) of order at least 9.
### Acknowledgements
Dr. Fallat's research was supported in part by an NSERC Discovery Research Grant, Application No.: RGPIN-2019-03934.
|
2305.07037 | Rethink Depth Separation with Intra-layer Links | The depth separation theory is nowadays widely accepted as an effective
explanation for the power of depth, which consists of two parts: i) there
exists a function representable by a deep network; ii) such a function cannot
be represented by a shallow network whose width is lower than a threshold.
However, this theory is established for feedforward networks. Few studies, if
not none, considered the depth separation theory in the context of shortcuts
which are the most common network types in solving real-world problems. Here,
we find that adding intra-layer links can modify the depth separation theory.
First, we report that adding intra-layer links can greatly improve a network's
representation capability through bound estimation, explicit construction, and
functional space analysis. Then, we modify the depth separation theory by
showing that a shallow network with intra-layer links does not need to go as
wide as before to express some hard functions constructed by a deep network.
Such functions include the renowned "sawtooth" functions. Moreover, the saving
of width is up to linear. Our results supplement the existing depth separation
theory by examining its limit in the shortcut domain. Also, the mechanism we
identify can be translated into analyzing the expressivity of popular shortcut
networks such as ResNet and DenseNet, \textit{e.g.}, residual connections
empower a network to represent a sawtooth function efficiently. | Feng-Lei Fan, Ze-Yu Li, Huan Xiong, Tieyong Zeng | 2023-05-11T11:54:36Z | http://arxiv.org/abs/2305.07037v1 | # Rethink Depth Separation with Intra-layer Links
###### Abstract
The depth separation theory is nowadays widely accepted as an effective explanation for the power of depth, which consists of two parts: i) there exists a function representable by a deep network; ii) such a function cannot be represented by a shallow network whose width is lower than a threshold. However, this theory is established for feedforward networks. Few studies, if not none, considered the depth separation theory in the context of shortcuts which are the most common network types in solving real-world problems. Here, we find that adding intra-layer links can modify the depth separation theory. First, we report that adding intra-layer links can greatly improve a network's representation capability through bound estimation, explicit construction, and functional space analysis. Then, we modify the depth separation theory by showing that a shallow network with intra-layer links does not need to go as wide as before to express some hard functions constructed by a deep network. Such functions include the renowned "sawtooth" functions. Moreover, the saving of width is up to linear. Our results supplement the existing depth separation theory by examining its limit in the shortcut domain. Also, the mechanism we identify can be translated into analyzing the expressivity of popular shortcut networks such as ResNet and DenseNet, _e.g._, residual connections empower a network to represent a sawtooth function efficiently.
## 1 Introduction
Due to the widespread applications of deep networks in many important fields (LeCun et al., 2015), mathematically understanding the power of deep networks has been a central problem in deep learning theory (Poggio et al., 2020). The key issue is figuring out how expressive a deep network is or how increasing depth promotes the expressivity of a neural network better than increasing width. In this regard, there have been a plethora of studies on the expressivity of deep networks, which are collectively referred to as the depth separation theory Safran et al. (2019); Vardi and Shamir (2020); Guhring et al. (2020); Vardi et al. (2021); Safran and Lee (2022); Venturi et al. (2022).
A popular idea to demonstrate the expressivity of depth is the complexity characterization that introduces appropriate complexity measures for functions represented by neural networks (Pascanu et al., 2013; Montufar et al., 2014; Telgarsky, 2015; Montufar, 2017; Serra et al., 2018; Hu and Zhang, 2018; Xiong et al., 2020; Bianchini and Scarselli, 2014; Raghu et al., 2017; Sanford and Chatziafratis, 2022; Joshi et al., 2023), and then reports that increasing depth can greatly boost such a complexity
measure. In contrast, a more concrete way to show the power of depth is to construct functions that can be expressed by a narrow network of a given depth, but cannot be approximated by shallower networks, unless its width is sufficiently large (Telgarsky, 2015, 2016; Arora et al., 2016; Eldan and Shamir, 2016; Safran and Shamir, 2017; Venturi et al., 2021). For example, Eldan and Shamir (2016) constructed a radial function and used Fourier spectrum analysis to show that a two-hidden-layer network can represent it with a polynomial number of neurons, but a one-hidden-layer network needs an exponential number of neurons to achieve the same level of error. Telgarsky (2015) employed a ReLU network to build a one-dimensional "sawtooth" function whose number of pieces scales exponentially over the depth. As such, a deep network can construct a sawtooth function with many pieces, while a shallow network cannot unless it is very wide. Arora et al. (2016) derived the upper bound of the maximal number of pieces for a univariate ReLU network, and used this bound to elaborate the separation between a deep and a shallow network. In a broad sense, we summarize the elements of establishing a depth separation theorem as the following: i) there exists a function representable by a deep network; ii) such a function cannot be represented by a shallow network whose width is lower than a threshold. The depth separation theory is nowadays widely accepted as an effective explanation for the power of depth.
In current deep learning research, the dominating network architectures are often not feedforward but intensively use shortcuts. However, the existing depth separation theory is established for feedforward networks. Few studies, if any, considered the depth separation theory in the shortcut paradigm. Due to the pervasiveness of shortcuts, examining depth separation in the context of shortcuts is critical. Here we are motivated by ResNet, whose residual connections are outer links embedded into a network horizontally (Figure 1(b)). A shallow network with residual connections can have performance comparable to a deep network, _i.e._, the insertion of residual connections can save depth. By structural symmetry, we embed shortcuts vertically, _i.e._, we intra-link neurons within a layer (Figure 1(c)) so that a neuron takes the output of its neighboring neuron as part of its input. From the perspective of the crossing number Telgarsky (2016), the non-symmetric structure of intra-layer linked networks can produce more oscillations than networks without intra-layer links. Thus, intra-layer links can save width, _i.e._, without the need to go as wide as before, a shallow network can express as complicated a function as a deep network could, which means that the depth separation theory is modified.
Specifically, our roadmap to the modification of depth separation theorems includes two milestones. 1) Through bound analysis, explicit construction, and functional space analysis, we substantiate that a network with intra-layer links can produce much more pieces than a feedforward network, and the gain is at most exponential, _i.e._, \((\frac{3}{2})^{k}\), where \(k\) is the number of hidden layers. 2) Since intra-layer links can yield more pieces, they can modify depth separation theorems by empowering a shallow network to represent a function constructed by a deep network, even if the width of this shallow network is lower than the prescribed threshold. The modification is done in the cases of
Figure 1: (a) feedforward, (b) residual, and (c) intra-linked (2-neuron linked). \(x\) is a univariate input. In analogy to horizontal residual connections in ResNet, we take the intra-layer links as vertical residual connections. Inserting intra-layer links is essentially different from stacking layers in terms of the mechanism of generating new pieces, the number of (affine transform, activation) being used, and the functional class.
\(k^{2}\) vs 3 (Theorem 4.18), \(k^{2}\) vs \(k\) (Theorem 4.19, the famous sawtooth function (Telgarsky, 2015)), and \(k^{2}\) vs 2 (Theorem 4.17). The saving of width is up to linear.
Although the focus of our draft is the depth separation theory, our result is also valuable in the following two aspects: First, the identified mechanism of generating more pieces can be translated into other shortcut networks such as ResNet and DenseNet, _e.g._, residual connections can represent a sawtooth function efficiently. Second, exploring new and powerful network architectures has been the mainstream research direction in deep learning in the past decade. To the best of our knowledge, we are the first to consider adding shortcuts within a layer in a fully-connected network. Our analysis theoretically suggests that an intra-linked network is more powerful than a feedforward network. Like the residual connections, we spotlight that the improvement of representation power by intra-layer links favorably increases no trainable parameters for a network.
To summarize, our contributions are threefold. 1) We point out the limitation of the depth separation theory and propose to consider inserting intra-layer links in shallow networks. 2) We show via bound estimation, explicit construction, and functional space analysis that intra-layer links can make a ReLU network produce more pieces. 3) We modify the depth separation result including the famous Telgarsky (2015)'s theorem by demonstrating that a shallow network with intra-layer links does not need to go as wide as before to represent a function constructed by a deep network.
## 2 Related works
A plethora of depth separation studies have shown the superiority of deep networks over shallow ones from perspectives of complexity analysis and constructive analysis.
The complexity analysis is to characterize the complexity of the function represented by a neural network, thereby demonstrating that increasing depth can greatly maximize such a complexity measure. Currently, one of the most popular complexity measures is the number of linear regions because it conforms to the functional structure of the widely-used ReLU networks. For example, Pascanu et al. (2013); Montufar et al. (2014); Montufar (2017); Serra et al. (2018); Hu and Zhang (2018); Hanin and Rolnick (2019) estimated the bound of the number of linear regions generated by a fully-connected ReLU network by applying Zaslavsky's Theorem (Zaslavsky, 1997). Xiong et al. (2020) offered the first upper and lower bounds of the number of linear regions for convolutional networks. Other complexity measures include classification capabilities (Malach and Shalev-Shwartz, 2019), Betti numbers (Bianchini and Scarselli, 2014), trajectory lengths (Raghu et al., 2017), global curvature (Poole et al., 2016), and topological entropy (Bu et al., 2020). Please note that using complexity measures to justify the power of depth demands a tight bound estimation. Otherwise, it is insufficient to say that shallow networks cannot be as powerful as deep networks, since deep networks cannot reach the upper bound.
The construction analysis is to find a family of functions that are hard to approximate by a shallow network, but can be efficiently approximated by a deep network. Eldan and Shamir (2016) built a special radial function that is expressible by a 3-layer neural network with a polynomial number of neurons, but a 2-layer network can do the same level approximation only with an exponential number of neurons. Later, Safran and Shamir (2017) extended this result to a ball function, which is a more natural separation result. Venturi et al. (2021) generalized the construction of this type to a non-radial function. Telgarsky (2015, 2016) used an \(\mathcal{O}(k^{2})\)-layer network to construct a sawtooth function. Given that such a function has an exponential number of pieces, it cannot be expressed by an \(\mathcal{O}(k)\)-layer network, unless the width is \(\mathcal{O}(\exp(k))\). Arora et al. (2016) estimated the maximal number of pieces a network can produce, and established the size-piece relation to advance the depth separation results from (\(k^{2}\), \(k\)) to (\(k\), \(k^{\prime}\)), where \(k^{\prime}<k\). Other smart constructions include polynomials (Rolnick and Tegmark, 2017), functions of a compositional structure (Poggio et al., 2017), Gaussian mixture models (Jalali et al., 2019), and so on. Our work also includes the construction, and we use an intra-linked network to efficiently build a sawtooth function.
## 3 Notation and Definition
**Notation 1** (Feedforward networks). For an \(\mathbb{R}^{w_{0}}\rightarrow\mathbb{R}\) ReLU DNN with widths \(w_{1},\ldots,w_{k}\) of \(k\) hidden layers, we use \(\mathbf{f}_{0}=\left[f_{0}^{(1)},\ldots,f_{0}^{(w_{0})}\right]=\mathbf{x} \in\mathbb{R}^{w_{0}}\) to denote the input of the network. Let
\(\mathbf{f}_{i}=\left[f_{i}^{(1)},\dots,f_{i}^{(w_{i})}\right]\in\mathbb{R}^{w_{i}}\), \(i=1,\cdots,k,\) be the vector composed of outputs of all neurons in the \(i\)-th layer. The pre-activation of the \(j\)-th neuron in the \(i\)-th layer and the corresponding neuron are given by
\[g_{i}^{(j)}=\left\langle\mathbf{a}_{i}^{(j)},\mathbf{f}_{i-1}\right\rangle+b_{ i}^{(j)}\quad\text{and}\quad f_{i}^{(j)}=\sigma\left(g_{i}^{(j)}\right),\]
respectively, where \(\sigma(\cdot)\) is the ReLU activation and \(\mathbf{a}_{i}^{(j)}\in\mathbb{R}^{w_{i-1}},b_{i}^{(j)}\in\mathbb{R}\) are parameters. The output of this network is \(g_{k+1}=\left\langle\mathbf{a}_{k},\mathbf{f}_{k}\right\rangle+b_{k}\) for some \(\mathbf{a}_{k}\in\mathbb{R}^{w_{k}}\), \(b_{k}\in\mathbb{R}\).
**Notation 2** (Intra-linked networks) For an \(\mathbb{R}^{w_{0}}\rightarrow\mathbb{R}\) ReLU DNN with widths \(w_{1},\dots,w_{k}\) of \(k\) hidden layers, we now assume that every \(n_{i}\) neurons are intra-linked in the \(i\)-th layer, where \(n_{i}\) can divide \(w_{i}\) without remainder. Similar to the classical ReLU DNN, we use \(\mathbf{\tilde{f}}_{0}=\mathbf{x}\in\mathbb{R}^{w_{0}}\) and \(\mathbf{\tilde{f}}_{i}=\left[\mathbf{\tilde{f}}_{i}^{(1)},\dots,\mathbf{\tilde {f}}_{i}^{(w_{i})}\right]\in\mathbb{R}^{w_{i}}\) to denote the input and the vectorized outputs of the \(i\)-th layer. The \(j\)-th pre-activation in the \(i\)-th layer and the output of the network are computed in the same way as classical feedforward networks. In an intra-linked network, the \(j\) -th, \(\dots\), \((j+n_{i}-1)\)-th neurons in the \(i\)-th layer are linked, and the \((j+n_{i})\)-th, \(\cdots\), \((j+2n_{i}-1)\)-th neurons in the \(i\)-th layer are linked. We prescribe
\[\tilde{f}_{i}^{(j)}=\sigma\left(g_{i}^{(j)}\right),\quad\tilde{f}_{i}^{(j+l)} =\sigma\left(g_{i}^{(j+l)}-\tilde{f}_{i}^{(j+l-1)}\right),\]
for \(l=1,\dots,n_{i}-1\). Especially, we are interested in the case every 2 neurons are linked in each layer (_i.e._, \(n_{i}=2\)) and the case all neurons in a layer are linked (_i.e._, \(n_{i}=w_{i}\)) in this work.
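To make Notation 2 concrete, here is a minimal NumPy sketch (our own illustration; the function and variable names are not from the paper) of the forward pass of one hidden layer in which every two neurons are intra-linked, _i.e._, \(n_{i}=2\): the second neuron of each pair subtracts the output of its linked neighbor before the ReLU is applied.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def intra_linked_layer(x, W, b):
    """x: input vector; W: (width, in_dim) weight matrix; b: (width,) bias; width is even."""
    g = W @ x + b                         # pre-activations g_i^{(j)}
    f = np.empty_like(g)
    for j in range(0, len(g), 2):         # neurons j and j+1 are linked
        f[j] = relu(g[j])                 # f^{(j)} = sigma(g^{(j)})
        f[j + 1] = relu(g[j + 1] - f[j])  # f^{(j+1)} = sigma(g^{(j+1)} - f^{(j)})
    return f
```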
**Notation 3** (Sawtooth functions and breakpoints). We say a piecewise linear (PWL) function \(g:[a,b]\rightarrow\mathbb{R}\) is of "\(N\)-sawtooth" shape, if \(g(x)=(-1)^{n-1}\left(x-(n-1)\cdot\frac{b-a}{N}\right),\) for \(x\in\left[(n-1)\cdot\frac{b-a}{N},n\cdot\frac{b-a}{N}\right],n\in[N]\). We say \(x_{0}\in\mathbb{R}\) is a breakpoint of a PWL function \(g\), if the left hand and right hand derivatives of \(g\) at \(x_{0}\) are not equal, _i.e._, \(g_{+}^{\prime}(x_{0})\neq g_{-}^{\prime}(x_{0})\).
Please note that stacking a layer is essentially different from inserting intra-layer links in terms of the fundamental mechanism of generating new pieces, the number of affine transforms being used, and the functional class.
\(\bullet\) As Figure 2 shows, their mechanisms of producing pieces are fundamentally different. The mechanism of adding a new layer is the repetition effect (multiplication): the value of the function being composed oscillates, and each oscillation can generate more pieces, which falls into the depth paradigm. The mechanism of intra-layer links is the gating effect (addition): the neuron being embedded has two activation states, and each state is leveraged to produce a breakpoint. The two states are integrated to generate more pieces. Such a mechanism essentially conforms to parallelism, which is of the width paradigm.
\(\bullet\) Adding intra-layer links does not increase the number of affine transforms and activations. As Figure 2 illustrates, a feedforward network with two layers involves two affine transformations
Figure 2: Adding intra-layer links is not equivalent to increasing depth in terms of the mechanism of generating more pieces, the number of (affine transform, activation), and function classes.
(activations). In contrast, adding intra-layer links in a fully-connected layer actually exerts a gating effect. When \(\sigma(W_{2}x+b_{2})>0\), the output is \(\sigma((W_{1}+W_{2})x+b_{1}+b_{2})\); when \(\sigma(W_{2}x+b_{2})=0\), the output is \(\sigma(W_{1}x+b_{1})\). The number of (affine transform, activation) pairs is still one in both cases.
\(\bullet\) The function classes represented by our intra-linked network and the deeper feedforward network are not the same, either, and this will make a big difference. Given the same width, the deeper feedforward network has a larger function class than a shallow intra-linked network. However, given the same width and depth, our intra-linked network has more expressive power (_i.e._, number of pieces, VC dimension) than a feedforward network.
Since inserting intra-layer links is different from stacking new layers, we define the width and depth of intra-linked networks to be the same as the width and depth of feedforward networks resulting from removing intra-layer links.
**Definition 3.1** (Width and depth of feedforward networks [1]).: For any number of hidden layers \(k\in\mathbb{N}\), input and output dimensions \(w_{0},w_{k+1}\in\mathbb{N}\), an \(\mathbb{R}^{w_{0}}\rightarrow\mathbb{R}^{w_{k+1}}\) feedforward network is given by specifying a sequence of \(k\) natural numbers \(w_{1},w_{2},\ldots,w_{k}\) representing widths of the hidden layers. The depth of the network is defined as \(k+1\), which is the number of (affine transform, activation). The width of the network is \(\max\left\{w_{1},\ldots,w_{k}\right\}\).
**Definition 3.2** (Width and depth of intra-linked networks [14]).: Given an intra-linked network \(\mathbf{\Pi}\), we delete the intra-layer links to make the resultant network \(\mathbf{\Pi}^{\prime}\) a feedforward network. Then, we define the width and depth of \(\mathbf{\Pi}\) to be the same as the width and depth of \(\mathbf{\Pi}^{\prime}\).
## 4 Rethink the Depth Separation with Intra-layer Links
Since our focus is the network using ReLU activation and related estimation of the number of pieces, the seminal depth separation theorems closest to us are the following:
**Theorem 4.1** (Depth separation \(k^{2}\) vs \(k\)[16, 20]).: _For a natural number \(k\geq 1\), there exists a sawtooth function representable by an \(\mathbb{R}\rightarrow\mathbb{R}\)\((2k^{2}+1)\)-layer feedforward ReLU DNN of width \(2\) such that if it is also representable by a \((k+1)\)-layer feedforward ReLU DNN, this \((k+1)\)-layer feedforward ReLU DNN should at least have the width of \(2^{k}-1\)._
**Theorem 4.2** (Depth separation \(k\) vs \(k^{\prime}\)[1]).: _For every pair of natural numbers \(k\geq 1,w\geq 2\), there exists a function representable by an \(\mathbb{R}\rightarrow\mathbb{R}\)\((k+1)\)-layer feedforward ReLU DNN of width \(w\) such that if it is also representable by a \((k^{\prime}+1)\)-layer feedforward ReLU DNN for any \(k^{\prime}\leq k\), this \((k^{\prime}+1)\)-layer feedforward ReLU DNN has width at least \(\frac{1}{2}w^{\frac{k}{k^{\prime}}}\)._
Despite being one-dimensional, the above results convincingly reveal that increasing depth can make a ReLU network express a much more complicated function, which is the heart of depth separation. Here, we shed new light on the depth separation problem with intra-layer links. Our primary argument is that if intra-layer links shown in Figure 1(c) are inserted, there exist shallow networks that previously cannot express some hard functions constructed by deep networks now can do the job. Our investigation consists of two parts. First, we substantiate that adding intra-layer links can greatly increase the number of pieces via bound estimation, explicit construction, and functional space analysis. Then, adding intra-layer links can represent complicated functions such as sawtooth functions, without the need of going as wide as before.
### Intra-Layer Links Can Increase the Number of Pieces
#### 4.1.1 Upper Bound Estimation
**Lemma 4.3**.: _Let \(g:\mathbb{R}\rightarrow\mathbb{R}\) be a PWL function with \(w+1\) pieces, then the breakpoints of \(f:=\sigma(g)\) consist of two parts: some old breakpoints of \(g\) and at most \(w+1\) newly produced breakpoints. Furthermore, \(f\) has \(w+1\) new breakpoints if and only if \(g\) has \(w+1\) distinct zero points._
Proof.: A direct calculus.
**Theorem 4.4** (Upper bound of feedforward networks).: _Let \(f:\mathbb{R}\rightarrow\mathbb{R}\) be a PWL function represented by an \(\mathbb{R}\rightarrow\mathbb{R}\) ReLU DNN with depth \(k+1\) and widths \(w_{1},\ldots,w_{k}\) of \(k\) hidden layers. Then \(f\) has at most \(\prod_{i=1}^{k}\left(w_{i}+1\right)\) pieces._
This bound is the univariate case of the bound: \(\prod_{i=1}^{k}\sum_{j=0}^{n}\binom{w_{i}}{j}\), derived in Montufar (2017) for \(n\)-dimensional inputs. In Appendix B, we offer constructions to show that this bound is achievable in a depth-bounded but width-unbounded network (depth=3) (Proposition B.1) and a width-bounded (width=3) but depth-unbounded network (Proposition B.2) in one-dimensional space. Previously many bounds Pascanu et al. (2013); Montufar et al. (2014); Montufar (2017); Xiong et al. (2020) on linear regions were derived, however, it is unknown whether these bounds are vacuous or tight, particularly for networks with more than one hidden layer. What makes Propositions B.1 and B.2 special is that they for the first time substantiate that Montufar (2017)'s bound is tight over three-layer and deeper networks, although these results are for the one-dimensional case.
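Tightness claims of this kind can also be sanity-checked empirically. The following is a small grid-based utility (our own sketch, not from the paper) that estimates the number of linear pieces of a univariate network output by counting slope changes; breakpoints closer together than the grid spacing are merged, so the reported count is a lower estimate.

```python
import numpy as np

def count_pieces(f, a=0.0, b=1.0, n_grid=20001, tol=1e-6):
    """Estimate the number of linear pieces of a univariate PWL function f on [a, b]."""
    x = np.linspace(a, b, n_grid)
    y = f(x)
    slopes = np.diff(y) / np.diff(x)
    change = np.abs(np.diff(slopes)) > tol
    # A breakpoint lying strictly between grid points triggers two consecutive flags,
    # so count runs of consecutive flags rather than individual flags.
    run_starts = np.flatnonzero(np.diff(np.concatenate(([0], change.astype(int)))) == 1)
    return len(run_starts) + 1

# Example: count_pieces(lambda t: np.maximum(t - 0.5, 0.0)) returns 2.
```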
**Remark 1**.: (Sharpening the bound in (Arora et al., 2016)). Previously, Arora et al. (2016) computed the number of pieces produced by a network of depth \(k+1\) and widths \(w_{1},\ldots,w_{k}\) as \(2^{k+1}\cdot(w_{1}+1)w_{2}\cdots w_{k}\). The reason why their bound has an exponential term is that when considering how ReLU activation increases the number of pieces, they repeatedly counted the old breakpoints generated in the previous layer. Our Lemma 4.3 implies that the ReLU activation in fact cannot double the number of pieces of a PWL function. Therefore, the depth separation theorem of Arora et al. (2016) needs to be re-examined.
**Lemma 4.5**.: _Let \(g_{1},g_{2}:\mathbb{R}\rightarrow\mathbb{R}\) be two PWL functions with totally \(w\) breakpoints. Set \(f_{1}:=\sigma\left(g_{1}\right)\) and \(f_{2}:=\sigma\left(g_{2}-f_{1}\right)\). Then the breakpoints of \(f_{2}\) consist of three parts: some breakpoints of \(g_{2}\), some breakpoints of \(f_{1}\), and at most \(2w+2\) newly produced breakpoints. Furthermore, \(f_{2}\) has \(2w+2\) newly produced breakpoints if and only if \(g_{2}-f_{1}\) has \(2w+2\) distinct zero points._
Proof.: A direct corollary of Lemma 4.3.
Let us illustrate why the intra-linked architecture can produce more pieces. Given two PWL functions \(g_{1}\) and \(g_{2}\) which has totally \(w\) breakpoints, in the feedforward architecture, \(\sigma\left(g_{1}\right)\) and \(\sigma\left(g_{2}\right)\) have totally at most \(3w+2\) breakpoints, which contains at most \(w\) old breakpoints of \(g_{1},g_{2}\) and at most \(2w+2\) newly produced breakpoints. However, in the intra-linked architecture, \(\sigma\left(g_{2}-\sigma\left(g_{1}\right)\right)\) can produce more breakpoints because \(\sigma(g_{1})\) has two states: activated or deactivated. Then, \(\sigma(g_{1})\) and \(\sigma\left(g_{2}-\sigma\left(g_{1}\right)\right)\) consist of at most \(w\) old breakpoints of \(g_{1},g_{2}\) and \((w+1)+(2w+2)=3w+3\) newly produced breakpoints.
**Theorem 4.6** (Upper bound of 2-neuron intra-linked networks).: _Let \(f:\mathbb{R}\rightarrow\mathbb{R}\) be a PWL function represented by a ReLU DNN with depth \(k+1\), widths \(w_{1},\ldots,w_{k}\), and every two neurons linked in each hidden layer as Figure 1(c). Assuming that \(w_{1},\ldots,w_{k}\) are even, \(f\) has at most \(\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)\) pieces._
Proof.: We prove by induction on \(k\). For the base case \(k=1\), we assume for every odd \(j\), the neurons \(\tilde{f}_{1}^{(j)}\) and \(\tilde{f}_{2}^{(j+1)}\) are linked. The number of breakpoints of \(\tilde{f}_{1}^{(j)}\), \(j=1,\ldots,w_{1}\), is at most \(2+(-1)^{j}\). Hence, the first layer yields at most \(\frac{3}{2}w_{1}+1\) pieces. For the induction step, we assume that for some \(k\geq 1\), any \(\mathbb{R}\rightarrow\mathbb{R}\) ReLU DNN with every two neurons linked in each hidden layer, depth \(k+1\) and widths \(w_{1},\ldots,w_{k}\) of \(k\) hidden layers produces at most \(\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)\) pieces. Now we consider any \(\mathbb{R}\rightarrow\mathbb{R}\) ReLU DNN with every two neurons linked in each hidden layer, depth \(k+2\) and widths \(w_{1},\ldots,w_{k+1}\) of \(k+1\) hidden layers. By the induction hypothesis, each \(\tilde{g}_{k+1}^{(j)}\) has at most \(\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)-1\) breakpoints. Then the breakpoints of \(\sigma(\tilde{g}_{k+1}^{(j)})\) consist of some breakpoints of \(\tilde{g}_{k+1}^{(j)}\) and at most \(\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)\) newly generated breakpoints. Then \(\tilde{g}_{k+1}^{(j+1)}-\tilde{f}_{k+1}^{(j)}\) has at most \(2\cdot\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)-1\) breakpoints, based on Lemma 4.5. The breakpoints of \(\tilde{f}_{k+1}^{(j+1)}=\sigma(\tilde{g}_{k+1}^{(j+1)}-\tilde{f}_{k+1}^{(j)})\) consist of some breakpoints of \(\tilde{g}_{k+1}^{(j+1)}-\tilde{f}_{k+1}^{(j)}\) and at most \(2\cdot\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)\) newly generated breakpoints. Note that \(\tilde{g}_{k+1}^{(1)},\ldots,\tilde{g}_{k+1}^{(w_{k+1})}\) have totally at most \(\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)-1\) breakpoints. In all, the number of pieces we can therefore get is at most \(\frac{w_{k+1}}{2}\cdot\left(\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)+2 \cdot\prod_{i=1}^{k}\left(\frac{3}{2}w_{i}+1\right)\right)+\prod_{i=1}^{k} \left(\frac{3}{2}w_{i}+1\right)-1=\prod_{i=1}^{k+1}\left(\frac{3}{2}w_{i}+1 \right).\)
In the following theorems, we offer the bound estimation for high-dimensional cases. The detailed proof for Theorem 4.8 is put into Appendix A.
**Theorem 4.7** (Upper Bound of Feedforward Networks [Montufar, 2017]).: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a PWL function represented by an \(\mathbb{R}^{n}\to\mathbb{R}\) ReLU DNN with depth \(k+1\) and widths \(w_{1},\ldots,w_{k}\) of \(k\) hidden layers. Then \(f\) has at most \(\prod_{i=1}^{k}\sum_{j=0}^{n}\binom{w_{i}}{j}\) linear regions._
**Theorem 4.8** (Upper Bound of Intra-linked Networks).: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a PWL function represented by an \(\mathbb{R}^{n}\to\mathbb{R}\) ReLU DNN with every two neurons linked in each hidden layer, depth \(k+1\) and widths \(w_{1},\ldots,w_{k}\) of \(k\) hidden layers. We assume each \(w_{i}\) is even. Then \(f\) has at most \(\prod_{i=1}^{k}\sum_{j=0}^{n}\binom{\frac{3w_{i}}{2}+1}{j}\) linear regions._
#### 4.1.2 Explicit Construction.
Although the bound estimation sheds some light, to convincingly illustrate that intra-layer links can increase the number of pieces, we need to supply explicit constructions for intra-linked networks. The number of pieces in the construction should be bigger than either the upper bound of feedforward networks or the maximal number a feedforward network can achieve. Specifically, the constructions for 2-neuron intra-linked networks in Propositions 4.9 and 4.10 have a number of pieces larger than the upper bounds of feedforward networks. In Proposition 4.11, by enumerating all possible cases, we present a construction for a 2-neuron intra-linked network of width 2 and arbitrary depth whose number of pieces is larger than what a feedforward network of width 2 and arbitrary depth possibly achieves. Proposition 4.12 shows that \(\prod_{i=1}^{k}\left(\frac{(w_{i}+1)w_{i}}{2}+1\right)\) pieces can be achieved by a one-hidden-layer all-intra-linked network. Propositions 4.13 and 4.14 provide rather tight constructions for an all-neuron intra-linked network of width 3 and 4 and arbitrary depth.
**Proposition 4.9** (The bound \(\prod_{i=1}^{k}\left(\frac{3w_{i}}{2}+1\right)\) is tight for a two-hidden-layer 2-neuron intra-linked network).: _Given an \(\mathbb{R}\to\mathbb{R}\) two-hidden-layer ReLU network, with every two neurons linked in each hidden layer, for any even \(w_{1}\geq 6,w_{2}\geq 4\), there exists a PWL function represented by such a network, whose number of pieces is \(\left(\frac{3w_{i}}{2}+1\right)\left(\frac{3w_{2}}{2}+1\right)\)._
Proof.: Please see Appendix C.
**Proposition 4.10** (Use intra-linked networks to achieve a sawtooth function with \(\prod_{i=1}^{k}\left(\frac{3w_{i}}{2}\right)\) pieces).: _There exists a \([0,1]\to\mathbb{R}\) function represented by an intra-linked ReLU DNN with depth \(k+1\) and width \(w_{1},\ldots,w_{k}\) of \(k\) hidden layers, whose number of pieces is at least \(\frac{3w_{1}}{2}\cdot\ldots\cdot\frac{3w_{k}}{2}\)._
Proof.: Please see Appendix D.
**Proposition 4.11** (Intra-layer links can greatly increase the number of pieces in an \(\mathbb{R}\to\mathbb{R}\) ReLU network with width 2 and arbitrary depth).: _Let \(f:\mathbb{R}\to\mathbb{R}\) be a PWL function represented by an \(\mathbb{R}\to\mathbb{R}\)\((k+1)\)-layer ReLU DNN with widths 2 of all \(k\) hidden layers. Then the number of pieces of \(f\) is at most \(\left\{\begin{array}{cl}\sqrt{7}^{k},&\text{if $k$ is even,}\\ 3\cdot\sqrt{7}^{k-1},&\text{if $k$ is odd.}\end{array}\right.\)_
_There exists an \(\mathbb{R}\to\mathbb{R}\)\((k+1)\)-layer \(2\)-wide ReLU DNN, with neurons linked in each hidden layer, which can produce at least \(7\cdot 3^{k-2}+2\) pieces._
Proof.: The proof is put in Appendix E.
**Proposition 4.12** ( \(\prod_{i=1}^{k}\left(\frac{(w_{i}+1)w_{i}}{2}+1\right)\) pieces for a one-hidden-layer all-neuron intra-linked network).: _Given an \(\mathbb{R}\to\mathbb{R}\) one-hidden-layer ReLU network with all neurons linked in the hidden layer, there exists a PWL function represented by such a network, whose number of pieces is \(\frac{(w_{1}+1)w_{1}}{2}+1\)._
Proof.: For the first layer, \(\tilde{f}_{1}^{(1)}\) has one breakpoint and each \(\tilde{f}_{1}^{(j)}\) has at most \(j\) newly produced breakpoints and some old breakpoints of \(\tilde{g}_{1}^{(j)}\) and \(\tilde{f}_{1}^{(j-1)}\), for \(j=2,\ldots,n_{1}\). Hence, the first layer gives at most \(\frac{(w_{1}+1)w_{1}}{2}+1\) pieces. Then the rest of the proof is similar to Theorem 4.6.
**Proposition 4.13** (An arbitrarily deep network of width=3 and with all neurons in each layer intra-linked can achieve at least \(5^{k}\) pieces).: _There exists an \(\mathbb{R}\to\mathbb{R}\) function represented by an intra-linked ReLU DNN with depth \(k\), width \(3\) in each layer, and all neurons intra-linked in each layer, whose number of pieces is at least \(5^{k}\)._
**Proposition 4.14** (An arbitrarily deep network of width=4 and with all neurons in each layer intra-linked can achieve at least \(9^{k}\) pieces).: _There exists an \(\mathbb{R}\to\mathbb{R}\) function represented by an intra-linked ReLU DNN with depth \(k\), width \(4\) in each layer, and all neurons in each layer intra-linked, whose number of pieces is at least \(9^{k}\)._
Proof.: The proofs of Propositions 4.12, 4.13, and 4.14 are put in Appendix G.
#### 4.1.3 Functional Space Analysis
The above constructive analyses demonstrate that in the maximal sense, intra-layer links can empower a feedforward network to represent a function with more pieces. Now, we move one step forward by showing that intra-layer links can surprisingly expand the functional space of a feedforward network. The reason why this result is surprising is that one tends to think an intra-linked network produces an exclusively different function from a feedforward network. However, here we report that given an arbitrary feedforward ReLU network, adding intra-layer links in the first layer can definitely expand its functional space (Theorem 4.15). The core is that an intra-linked one-hidden-layer network of two neurons can express a feedforward one-hidden-layer network of two neurons, and the opposite doesn't hold true.
**Theorem 4.15**.: _Let \(f\) be any \(\mathbb{R}\to\mathbb{R}\) PWL function representable by a classical \((k+1)\)-layer ReLU DNN with widths \(w_{1}>2,\ldots,w_{k}\) of \(k\) hidden layers. Then, \(f\) can also be represented by a \((k+1)\)-layer ReLU DNN with widths \(w_{1},\ldots,w_{k}\) of \(k\) hidden layers, with neurons in the first layer linked._
**Remark 2**.: Finding new and powerful network architectures has been always important in deep learning. Although the focus of our draft is the depth separation theory rather than designing new architectures, our analysis theoretically suggests that an intra-linked network is more powerful than a feedforward network. Moreover, since adding intra-layer links increases no trainable parameters, they can serve as an economical add-on to the model to use parameters more efficiently. Even if only every two neurons are intra-linked in a layer, the improvement is exponentially dependent on depth, _i.e._, approximately \(\mathcal{O}(\frac{3}{2})^{k}\), which is considerable when a network is deep.
### Modify the Depth Separation Theorem with Intra-layer Links
In a broad sense, the depth separation theorem consists of two elements: i) there exists a function representable by a deep network; ii) such a function cannot be represented by a shallow network whose width is lower than a threshold. Since adding intra-layer links can generally improve the capability of a network, if one adds intra-layer links to a shallow network, the function constructed by a deep network can be represented by a shallow network, even if the width of this shallow network is still lower than the threshold. Theorem 4.17 showcases that a shallow network with all-neuron intra-layer links can save the width up to a linear reduction. Theorems 4.18 and 4.19 modify the depth separation \(k^{2}\) vs 3 and \(k^{2}\) vs \(k\), respectively, by presenting that a shallow network with 2-neuron intra-layer links only needs to go \(\frac{2}{3}\) times as wide as before to express the same function.
**Lemma 4.16** (A network with width=2 can approximate any univariate PWL function [22]).: _Given a univariate PWL function with \(n\) pieces \(p(x)\), there exists a \((n+1)\)-layer network \(\mathbf{D}(x)\) with two neurons in each layer such that \(f(x)=\mathbf{D}(x)\)._
**Theorem 4.17** (Modify the depth separation \(k^{2}\) vs 2).: _For every \(k\geq 2\), there exists a function \(p(x)\) that can be represented by a \((k^{2}+1)\)-layer ReLU DNN with 2 nodes in each layer, such that it cannot be represented by a classical \(2\)-layer ReLU DNN \(\mathbf{W}_{2}(x)\) with width less than \(k^{2}-1\), but can be represented by a \(2\)-layer, \((2k)\)-wide intra-linked ReLU DNN \(\tilde{\mathbf{W}}_{2}(x)\)._
Proof.: Combining Theorem 4.4, Theorem 4.12, and Lemma 4.16 straightly concludes the proof.
**Theorem 4.18** (Modify the depth separation \(k^{2}\) vs 3).: _For every \(k\geq 2\), there exists a function \(p(x)\) that can be represented by a \((k^{2}+1)\)-layer ReLU DNN with 2 nodes in each layer, such that it cannot
be represented by a classical \(3\)-layer ReLU DNN \(\mathbf{W}_{3}(x)\) with width less than \(k-1\), but can be represented by a \(3\)-layer, \(\frac{2(k-1)}{3}\)-wide intra-linked ReLU DNN \(\tilde{\mathbf{W}}_{3}(x)\)._
Proof.: Combining Theorem 4.4, Proposition 4.9, and Lemma 4.16 straightly concludes the proof.
**Theorem 4.19** (Modify the depth separation \(k^{2}\) vs \(k\)).: _For every \(k\geq 1\), there is a \([0,1]\to\mathbb{R}\) PWL function \(p(x)\) represented by a feedforward \((2k^{2}+1)\)-layer ReLU DNN with at most \(6\) nodes in each layer, such that it cannot be represented by a classical \((k+1)\)-layer ReLU DNN \(W_{k}(x)\) with width less than \(6^{k}\), but can be represented by a \((k+1)\)-layer 2-neuron intra-linked ReLU DNN \(\tilde{W}_{k}(x)\) with width no more than \(4\cdot 6^{k-1}\)._
Proof.: Per [16]'s construction, a feedforward \((2k^{2}+1)\)-layer ReLU DNN with at most \(2\) nodes in each layer can produce a sawtooth function of \(2^{k^{2}}\) pieces. Similarly, a feedforward \((2k^{2}+1)\)-layer ReLU DNN with at most \(6\) nodes in each layer can have \(6^{k^{2}}\) pieces. Thus, it follows from Theorem 4.4 that any classical \((k+1)\)-layer ReLU DNN \(W_{k}(x)\) with width less than \(6^{k}-1\) cannot generate \(6^{k^{2}}\) pieces. However, according to the construction in Proposition 4.10, letting \(w_{1}=w_{2}=\cdots=w_{k}=4\cdot 6^{k-1}\), an intra-linked network can exactly express a sawtooth function with \(6^{k^{2}}\) pieces.
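For concreteness, the cited sawtooth construction can be written in a few lines. The sketch below (our illustration of the classical width-2 feedforward construction; it is not the intra-linked construction of Proposition 4.10, which is given in the appendix) composes the two-neuron "tent" block \(m(x)=2\sigma(x)-4\sigma(x-1/2)\); composing \(\ell\) such blocks yields a width-2 feedforward ReLU network whose output oscillates with \(2^{\ell}\) linear pieces on \([0,1]\).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def tent(x):
    # one hidden layer with 2 neurons: m(x) = 2*relu(x) - 4*relu(x - 1/2)
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def sawtooth_like(x, ell):
    # composing ell tent blocks doubles the number of pieces at every block
    for _ in range(ell):
        x = tent(x)
    return x

# e.g. the grid-based counter sketched after Theorem 4.4 should report
# count_pieces(lambda t: sawtooth_like(t, 4)) == 16 == 2**4 pieces.
```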
**Remark 3**.: The existing depth separation theory is established for feedforward networks. Our systematic analyses reveal that when considering shortcuts, the existing depth separation can be modified in terms of reducing the bar of width. Theorem 4.17 implies that intra-layer links can reduce the bar of the width substantially (\(\mathcal{O}(1/w)\)), where \(w\) is the original width, with a linear reduction. Our highlight is the existence of such shallow networks that can be transformed by intra-layer links to have representation power on a par with a deep network. Such shallow networks go against the predictions of depth separation theory.
## 5 Discussion and Conclusion
Well-established network architectures such as ResNet and DenseNet imply that incorporating shortcuts greatly empowers a neural network. However, only a limited number of theoretical studies attempted to explain the representation ability of shortcuts Veit et al. (2016); Fan et al. (2021); Lin and Jegelka (2018). Although intra-layer links and residual connections are essentially two different kinds of shortcuts, the techniques we developed and the mechanisms we identified in analyzing intra-linked networks can be extended to other networks with shortcuts. On the one hand, we identified conditions for the tightness of the bound, which has been proven to be stronger than existing results. Specifically, in the activation step, we distinguish the existing and newly generated breakpoints to avoid repeated counting, and then in the following pre-activation step, we maximize the oscillation to yield the most pieces after the next activation. On the other hand, the construction of functions in our work, _i.e._, constructing oscillations by preserving existing breakpoints and splitting each piece into several ones, is generic in analyzing other popular types of networks, thereby explaining how the shortcut connections improve the representation power of a network.
For example, it is straightforward to see that a one-neuron-wide ReLU DNN can represent PWL functions with at most three pieces, no matter how deep the network is. However, as Theorem 5.1 shows, with residual connections, a ResNet with \(k\) neurons can represent a sawtooth function with \(\mathcal{O}(k)\) pieces, which cannot be done by a feedforward network. For DenseNet, Theorem 4.4 shows that an \(\mathbb{R}\to\mathbb{R}\) ReLU DNN with depth \(k+1\) and width \(w_{1},\ldots,w_{k}\) has at most \(\prod_{i=1}^{k}(w_{i}+1)\) pieces. If we add dense intra-layer links that connect any two neurons in a hidden layer to turn a feedforward network into a DenseNet, Theorem 5.2 shows that the so-obtained DenseNet can produce much more pieces than the feedforward network. The difference is exponential, _i.e._, \(1+\prod_{i=1}^{k}\left(2^{w_{i}}-1\right)\) vs \(\prod_{i=1}^{k}(w_{i}+1)\). The detailed proofs are put into Appendix H.
**Theorem 5.1**.: _Let \(f:\mathbb{R}\to\mathbb{R}\) be a PWL function represented by a one-neuron-wide ResNet. Mathematically, \(f=c_{k+1}f_{k}+g_{k}\), where \(g_{1}(x)=x,f_{i}=\sigma\left(a_{i}g_{i}+b_{i}\right),g_{i+1}=c_{i}f_{i}+g_{i}, c_{k+1},a_{i},b_{i},c_{i}\) are parameters, for \(i=1,\ldots,k\). Then \(f\) has at most \(2^{k}\) pieces. Furthermore, this upper bound is tight and \(f\) can be a sawtooth function with at most \(2^{k}\) pieces._
**Theorem 5.2**.: _Let \(f:\mathbb{R}\rightarrow\mathbb{R}\) be a PWL function represented by a DenseNet obtained by adding dense intra-layer links into a feedforward network with depth \(k+1\) and width \(w_{1},\dots,w_{k}\) of \(k\) hidden layers. Then we can construct such a PWL function \(f\) with at least \(1+\prod_{i=1}^{k}{(2^{w_{i}}-1)}\) pieces._
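To illustrate the tightness claim in Theorem 5.1, here is one explicit parameter choice (ours; the theorem itself does not prescribe these values): taking \(a_{i}=1\), \(b_{i}=-2^{-i}\), \(c_{i}=-2\) halves the amplitude and doubles the number of pieces at every residual block, so the output \(f=-2\sigma(g_{k}-2^{-k})+g_{k}\) is a zigzag function of amplitude \(2^{-k}\) with \(2^{k}\) linear pieces on \([0,1]\).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def resnet_sawtooth(x, k):
    """One-neuron-wide ResNet of Theorem 5.1 with a_i = 1, b_i = -2**(-i), c_i = -2."""
    g = x                              # g_1 = x
    for i in range(1, k + 1):
        f = relu(g - 2.0 ** (-i))      # f_i = sigma(a_i * g_i + b_i)
        g = -2.0 * f + g               # g_{i+1} = c_i * f_i + g_i
    return g                           # equals c_{k+1} f_k + g_k with c_{k+1} = -2
```

On \([0,1]\), `resnet_sawtooth(x, k)` has \(2^{k}\) linear pieces, matching the upper bound of Theorem 5.1 with only \(k\) ReLU neurons.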
In this draft, via bound estimation, dedicated construction, and functional space analysis, we have shown that an intra-linked network is much more expressive than a feedforward one. Then, we have modified the depth separation results by showing that a shallow network that previously could not express some functions constructed by deep networks can now do the job with intra-layer links. Our results supplement the existing depth separation theory, and suggest the potential of intra-layer links. At the same time, the identified mechanism of generating pieces can also be used to decode the power of other shortcut networks such as ResNet and DenseNet. Future endeavors could include training networks with intra-layer links to solve real-world problems.
|
2310.04950 | Excitable dynamics driven by mechanical feedback in biological tissues | Pulsatory activity patterns, driven by mechanochemical feedback, are
prevalent in many biological systems. Here we present a theoretical framework
to elucidate the mechanical origin and regulation of pulsatile activity
patterns within multicellular tissues. We show that a simple mechanical
feedback at the level of individual cells - activation of contractility upon
stretch and subsequent inactivation upon turnover of active elements - is
sufficient to explain the emergence of quiescent states, long-range wave
propagation, and traveling activity pulse at the tissue-level. We find that the
transition between a propagating pulse and a wave is driven by the competition
between timescales associated with cellular mechanical response and geometrical
disorder in the tissue. This sheds light on the fundamental role of cell
packing geometry on tissue excitability and spatial propagation of activity
patterns. | Fernanda Pérez-Verdugo, Samuel Banks, Shiladitya Banerjee | 2023-10-08T00:13:59Z | http://arxiv.org/abs/2310.04950v1 | # Excitable dynamics driven by mechanical feedback in biological tissues
###### Abstract
Pulsatory activity patterns, driven by mechanochemical feedback, are prevalent in many biological systems. Here we present a theoretical framework to elucidate the mechanical origin and regulation of pulsatile activity patterns within multicellular tissues. We show that a simple mechanical feedback at the level of individual cells - activation of contractility upon stretch and subsequent inactivation upon turnover of active elements - is sufficient to explain the emergence of quiescent states, long-range wave propagation, and traveling activity pulse at the tissue-level. We find that the transition between a propagating pulse and a wave is driven by the competition between timescales associated with cellular mechanical response and geometrical disorder in the tissue. This sheds light on the fundamental role of cell packing geometry on tissue excitability and spatial propagation of activity patterns.
## I Introduction
Multicellular systems exhibit a wide range of pulsatile and wave-like patterns during collective migration, development, and morphogenesis [1; 2; 3]. The appearance of these patterns can be attributed to various biochemical factors, depending on the specific phenomenon. These include waves of extracellular signal-regulated kinase (ERK) [4; 5], calcium waves [6], periodic assembly and disassembly of myosin motors [7; 8], and the periodic release of chemoattractants [9]. Reaction-diffusion models [10; 11; 12; 13] and cellular automaton models [14; 15; 16] have been widely used to study the mechanisms underlying biochemical pattern formation in multicellular systems. Mechanochemical patterns, on the other hand, have necessitated the development of new classes of models that integrate mechanical forces with chemical reactions [17; 18; 19; 20; 21]. For instance, the coupling of mechanical and chemical processes is particularly relevant in understanding the spatial propagation of contraction patterns in _T. adhaerens_[22], oscillatory morphodynamics in _Drosophila_ amnioserosa tissue [23], collective migration patterns [20] and mechanical waves in expanding MDCK cell monolayers [19; 24; 25]. However, the role of cellular mechanics and geometry in the propagation of mechanochemical signals remains poorly understood.
One commonly observed mechanical feedback motif in cells is _stretch-induced contraction_, wherein a local stretching deformation triggers the recruitment of active components that induce contraction [4; 26; 27; 28; 29]. Recent studies have utilized the concept of stretch-induced contraction to elucidate phenomena such as wave propagation in active elastic media [19; 25], contraction pulses in epithelial tissues [30], cell migration patterns in vitro [20; 31], cell and tissue morphogenesis [32; 23]. Specifically, all these studies focused on dynamics in active elastic media, without considering the effects of geometric disorder and viscous dissipation on mechanochemical signal propagation.
In this study, we ask how cellular viscoelasticity and packing geometry regulate the propagation of active stresses at multicellular scales. To this end, we extended the framework of the cellular vertex models [33; 34; 35; 36] to incorporate feedback between cell junction strain and contractility. In addition, viscous dissipation is implemented by continuous strain relaxation in cell junctions. We implement a simple feedback rule in which contractility in cell junctions is activated above a threshold junctional stretch. The junction remains active for a duration commensurate with the turnover rate of active elements. This is followed by a refractory period during which junction contractility remains inactive due to the presence of inhibitors of contractility. As a result of these rules, each cell junction behaves as an excitable unit that can exist in one of three states: active, inactive, and refractory.
Our proposed model elucidates the emergence of long-range propagation of contractile pulses and different patterns of self-sustained traveling waves, such as circular, elliptic, and spiral waves. We show that these tissue-level propagation patterns are controlled by the competition between the timescales associated with active and refractory states of the junction, and the characteristic timescale of junction strain relaxation. To explain these observations analytically, we develop an effective theory of coupled excitable junctions, capable of explaining the emergence of the quiescent, wave-like and pulse-like patterns observed in vertex model simulations. Our theoretical framework predicts that shorter junctions promote reactivation of contractility, while larger junctions facilitate the propagation of activity over a broader region of the parameter space. We validate these predictions through simulations of disordered tissues in two dimensions. We find that geometrical disorder promotes sustained wave propagation at the tissue level, and that the ability of a junction to locally propagate activity increases with its length.
## II Vertex model with mechanical feedback
### Equations of motion
To elucidate the emergent dynamic patterns in an excitable tissue, we use the framework of the vertex model [33; 34; 35; 36], where a monolayer tissue is modeled as a two-dimensional polygonal tiling. The polygons represent the cells, and the
edges represent the cell-cell junctions. Each vertex \(i\), with position \(\mathbf{r}_{i}\), is subject to friction with coefficient \(\mu\), and elastic forces and inter-cellular tensions arising from a Hamiltonian \(H\). The Hamiltonian governing tissue mechanical energy is given by
\[H=\frac{K}{2}\sum_{\alpha}(A_{\alpha}-A_{0})^{2}+\sum_{\langle i,j\rangle} \Lambda l_{ij}\,, \tag{1}\]
where the first energy term is a sum over all cells \(\alpha\), and the second term is a sum over the cell-cell junctions defined by the adjacent vertices \(i\) and \(j\). The first term in Eq. (1) is the elastic energy that penalizes changes in cell area, where \(K\) is the bulk elastic modulus, \(A_{\alpha}\) and \(A_{0}\) are the actual and preferred cell areas, respectively. The second term represents an interfacial energy, with tension \(\Lambda\) along each cell junction of length \(l_{ij}\).
Active contractile forces arise at each junction from the actomyosin cortex, generating an active force per unit length, \(\Gamma_{ij}(t)\). Consequently, the active force at each vertex can be written as: \(\mathbf{F}_{i}^{\mathrm{act}}=-\sum_{\langle i,j\rangle}\Gamma_{ij}(t)l_{ij} \left(\partial l_{ij}/\partial\mathbf{r}_{i}\right)\). As opposed to existing vertex models, here we consider a time-dependent contractility \(\Gamma_{ij}(t)\), whose dynamics depend on junctional strain and memory of mechanical state. To compute the time-evolution of each vertex, we assumed an overdamped limit, such that the equations of motion are given by:
\[\mu\frac{\mathrm{d}\mathbf{r}_{i}}{\mathrm{d}t}=-\frac{\partial H}{\partial \mathbf{r}_{i}}+\mathbf{F}_{i}^{\mathrm{act}}\,. \tag{2}\]
The above equation of motion is coupled to the dynamics of junctional strain and contractility, as described below.
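As a concrete, deliberately simplified illustration of Eqs. (1) and (2), the following Python sketch evaluates the tissue energy for a small collection of polygonal cells and advances the vertices with an explicit overdamped Euler step. The data structures, the finite-difference gradient, and all parameter values are illustrative assumptions of this sketch rather than the implementation used for the simulations reported below.

```python
import numpy as np

# Illustrative sketch of Eqs. (1)-(2): H = sum_cells K/2 (A - A0)^2 + sum_edges Lambda*l,
# plus the active vertex force F_act = -sum_j Gamma_ij * l_ij * d(l_ij)/d(r_i).
K, A0, LAM, MU = 1.0, 1.0, 0.12, 0.636   # assumed parameter values

def polygon_area(pts):
    """Shoelace area of a polygon given as an (n, 2) array of vertex positions."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def energy(r, cells, edges):
    """Vertex-model energy of Eq. (1): area elasticity plus junctional line tension."""
    E = sum(0.5 * K * (polygon_area(r[c]) - A0) ** 2 for c in cells)
    E += sum(LAM * np.linalg.norm(r[i] - r[j]) for i, j in edges)
    return E

def forces(r, cells, edges, gamma, eps=1e-6):
    """-dH/dr_i by central finite differences, plus the active contractile force."""
    F = np.zeros_like(r)
    for i in range(len(r)):
        for d in range(2):
            rp, rm = r.copy(), r.copy()
            rp[i, d] += eps
            rm[i, d] -= eps
            F[i, d] = -(energy(rp, cells, edges) - energy(rm, cells, edges)) / (2 * eps)
    for (i, j), g in zip(edges, gamma):
        dij = r[i] - r[j]
        F[i] += -g * dij      # -Gamma_ij * l_ij * d(l_ij)/d(r_i) = -Gamma_ij * (r_i - r_j)
        F[j] += +g * dij
    return F

def euler_step(r, cells, edges, gamma, dt=1e-3):
    """Overdamped update of Eq. (2): mu dr_i/dt = -dH/dr_i + F_i^act."""
    return r + dt * forces(r, cells, edges, gamma) / MU

# Toy usage: a single square "cell" with one contractile junction.
r = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
cells = [[0, 1, 2, 3]]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
gamma = [0.5, 0.0, 0.0, 0.0]   # only the junction (0, 1) is in the active state
for _ in range(100):
    r = euler_step(r, cells, edges, gamma)
```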
### Cell junction dynamics under stretch-induced contractility
Stretch-induced contractility is a commonly observed regulatory mechanism for controlling the level of active contractile stress in cells [26, 27, 37, 38, 4, 28]. A local stretch in cell junctions could trigger actin fiber alignment [29, 39, 40], myosin recruitment [28] and also the activation of the ERK signaling [4] that would promote contractility. We therefore implement a simple model of cellular junctions as viscoelastic materials subject to a strain-tension feedback (Fig. 1A). Here, a local stretch triggers the activation of contractility, which in turn reduces stretch via contractile forces. Additionally, junction strain continuously relaxes over time due to viscous dissipation and contractility undergoes turnover as part of a self-regulatory mechanism.
The mechanical strain in cell junctions is defined as \(\varepsilon_{ij}=\left(l_{ij}-l_{ij}^{0}\right)/l_{ij}^{0}\), where \(l_{ij}^{0}\) is the rest length of the junction shared by the vertices \(i\) and \(j\). The viscoelastic nature of the junctions is modelled through rest length remodeling at a rate \(k_{L}\), leading to continuous strain relaxation [41],
\[\frac{\mathrm{d}l_{ij}^{0}}{\mathrm{d}t}=-k_{L}(l_{ij}^{0}-l_{ij}). \tag{3}\]
Rest length remodeling is a natural consequence of actomyosin networks with turnover, where strained elements are replaced by unstrained ones [26]. The feedback between junction strain and contractility is implemented as follows. Each cell junction can exist in one of three states: _inactive_ (\(\Gamma_{ij}=0\)), _active_ (\(\Gamma_{ij}=\Gamma_{0}\)), and _refractory_ (\(\Gamma_{ij}=0\)). While both inactive and refractory states lack contractility, refractory junctions are those that cannot be active for a duration \(\tau_{\mathrm{ref}}\), representing the timescale associated with the presence of inhibitors of contractility. The rules describing junction state changes are given below (Fig. 1C):
* _Inactive_ junctions become active if their strain \(\varepsilon_{ij}\) exceeds a threshold value \(\varepsilon_{\mathrm{on}}\).
* _Active_ junctions become refractory after being active for a time period \(\tau_{\mathrm{act}}\).
Figure 1: Model for stretch-induced contraction and activity dynamics in cell junctions. (A) Junction stretch induces contractility via ERK activation, while contraction reduces stretch. The mechanical strain relaxes at a rate \(k_{L}\), and the active contraction pulse has a lifetime \(\tau\). (B) Representative section of a simulated tissue with hexagonal cells, using \(k_{L}=0.5\) and \(\tau=2\). Red junction color denotes the active state, which spreads to neighboring junctions as they are stretched. (C) Junction-level rules for changing the junction state. Gray junction color denotes the inactive state, while blue denotes a refractory state. (D) Representative dynamics of junction length, strain and active contractility for an initially inactive junction (gray), indicated by the arrow in panel (B), with \(\varepsilon_{\mathrm{on}}=0.1\).
* _Refractory_ junctions become inactive after a duration \(\tau_{\text{ref}}\).
As an example, we examine the scenario depicted in Fig. 1B-D, where gray, red, and blue junctions represent the inactive, active, and refractory states, respectively. Initially, the junction marked by the black arrow in Fig. 1B is set to an inactive state. Contraction of a neighboring junction induces strain levels exceeding the threshold \(\varepsilon_{\text{on}}\), thereby triggering ERK signaling activation [4]. This activation, in turn, leads to the production of ERK inhibitors [13]. The threshold value \(\varepsilon_{\text{on}}\) ensures immunity to small perturbations, allowing junctions to behave as excitable units. ERK activation induces contractility, switching the junction state from inactive to active and raising its contractility to \(\Gamma_{ij}=\Gamma_{0}\). The active state persists for a time period \(\tau_{\text{act}}\) (red phase in Fig. 1C), which represents an effective timescale arising from the turnover time of actomyosin and the inactivation of ERK. The remaining levels of ERK inhibitors keep the junction in a refractory state, in which it cannot be re-activated (blue phase in Fig. 1C). Finally, after a time period \(\tau_{\text{ref}}\), the inhibitors reach low enough levels to take the junction back to the inactive state (final state in Fig. 1C).
Although the activation time period (\(\tau_{\text{act}}\)) and the refractory time period (\(\tau_{\text{ref}}\)) are distinct parameters, a recent study [13] has demonstrated that, to adequately describe the observed ERK activity waves, the characteristic timescales for ERK activation and inactivation by inhibitors tend to be similar, typically of the order of a few minutes. For simplicity, we will first focus on the case where \(\tau_{\text{act}}=\tau_{\text{ref}}=\tau\), and we will refer to this timescale simply as the _activation period_ unless otherwise specified. The strain relaxation rate (\(k_{L}\)) and the activation period (\(\tau\)) will jointly impact the dynamics at both the junction and tissue scales, as elaborated in Sections IIIA-B. Later, in Section IIID, we will delve into the effects of varying the durations of the refractory and active states.
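The junction-level rules introduced above can be summarized compactly in code. The sketch below is a minimal illustration, with assumed parameter values, of a single junction updated by Eq. (3) together with the inactive-active-refractory cycle of Fig. 1C; it is not intended as a faithful excerpt of the simulation code.

```python
from dataclasses import dataclass

@dataclass
class Junction:
    l: float                  # current length (set externally by the mechanics)
    l0: float                 # rest length
    state: str = "inactive"   # "inactive", "active" or "refractory"
    clock: float = 0.0        # time spent in the current state

EPS_ON, TAU_ACT, TAU_REF, K_L = 0.1, 1.0, 1.0, 0.5   # assumed values

def gamma(j: Junction, gamma0: float = 1.0) -> float:
    """Contractility: Gamma = Gamma0 only in the active state."""
    return gamma0 if j.state == "active" else 0.0

def update(j: Junction, dt: float) -> None:
    """One explicit time step of Eq. (3) plus the three-state rules of Fig. 1C."""
    j.l0 += -K_L * (j.l0 - j.l) * dt          # rest-length remodeling, Eq. (3)
    strain = (j.l - j.l0) / j.l0
    j.clock += dt
    if j.state == "inactive" and strain > EPS_ON:
        j.state, j.clock = "active", 0.0       # stretch-induced activation
    elif j.state == "active" and j.clock >= TAU_ACT:
        j.state, j.clock = "refractory", 0.0   # turnover of active elements
    elif j.state == "refractory" and j.clock >= TAU_REF:
        j.state, j.clock = "inactive", 0.0     # inhibitors decay; junction is excitable again
```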
## III Results
### Traveling pulse and waves in ordered tissues
To characterize the emergent dynamic states arising from junction-level mechanical feedback, we first simulated an ordered tissue, composed of 260 hexagonal cells, in a box of sides \(L_{x}\sim 14\sqrt{A_{0}}\) and \(L_{y}\sim 18.6\sqrt{A_{0}}\), under periodic boundary conditions. In simulations, we non-dimensionalized force scales by \(K(A_{0})^{3/2}\), length scales by \(\sqrt{A_{0}}\), and timescales by \(\mu/(KA_{0})\), setting \(A_{0}=1\), \(K=1\), and \(\mu=0.636\,(\text{min})\).
We initiate our simulations with a mechanically equilibrated tissue, where all junctions are initially in the inactive state. We then perturb the equilibrium state by manually activating a single junction positioned near the center of the simulation window (Fig. 2A). When the rate of strain relaxation is sufficiently slow, corresponding to a small value of
Figure 2: Emergent activity patterns in ordered tissues with a local junction activation. (A) Initial condition of every simulation, with a single chosen junction manually activated (red). (B) Snapshots of persistent activity waves in an ordered tissue, in a simulation using the values of \((\tau,k_{L})=(0.8,0.7)\) (left white dot in panel (G)). (C) Snapshots of an activity pulse traveling across an ordered tissue, in a simulation using the values of \((\tau,k_{L})=(1.6,0.7)\) (right white dot in panel (G)). (D) Fraction of junctions in each state, as a function of time, for the activity waves shown in (B). Data show that at long times only active and refractory states persist, with almost no junctions in the inactive state. (E) Fraction of junctions in each state, as a function of time, for the activity pulse shown in (C). Data show a transient pulse of activity before the entire tissue becomes inactive. (F) Total junction strain as a function of time, for the wave (B) and the pulse (C). (G) Phase diagram showing the three distinct emergent states: traveling pulse, traveling waves, and a quiescent state with no propagation of activity. The emergent states are controlled by the rest length remodeling rate \(k_{L}\), and the activation period \(\tau\).
\(k_{L}\), we observe the emergence of two distinct activity patterns depending on the activation period \(\tau\). For small \(\tau\), we find waves of activity traveling radially outwards, as shown in Fig. 2B (Movie 1). These self-sustaining waves are characterized by alternating rings and regions of red (indicating activity) and blue (indicating refractory) junctions. The tissue activity reaches a steady state when the wavefront traverses the entire tissue (around \(t\sim 5\tau\) in Fig. 2D). Conversely, for larger values of \(\tau\), we do not observe self-sustaining waves due to the lack of junction reactivation events. Instead, a single transient activity pulse travels across the tissue (Figs. 2C and E, Movie 2). Over an extended time period, the tissue eventually becomes entirely inactive.
To quantify the mechanical deformation due to these traveling activity patterns, we calculated the total tissue strain as \(\epsilon_{\text{total}}=\sum_{\langle i,j\rangle}\epsilon_{ij}\). Fig. 2F shows the dynamics of the total strain for both the wave-like (Fig. 2B) and pulse-like (Fig. 2C) patterns. The pulse causes a positive peak in strain, followed by a negative peak, ultimately returning to zero strain due to mechanical relaxation. Conversely, in the traveling wave pattern, while there is a peak in strain, it eventually stabilizes as a result of activity-induced mechanical fluctuations and the relaxation of strain at the junction level.
These propagating activity states are only observed when the value of \(k_{L}\) is sufficiently small. A large \(k_{L}\) causes the strain in the neighboring junctions of an active junction to relax before activation can occur, resulting in a quiescent state without any propagation. To quantify the extent of tissue-scale activity, we calculated the maximum fraction of active junctions throughout the simulation. This measurement enables us to identify the phase boundary, determined by the critical value of \(k_{L}\), that separates the regimes with activity propagation (either wave or pulse) from those without propagation (cyan-dashed boundary in Fig. 2G). Moreover, by quantifying the active junction fraction at the final steady state, we can differentiate between the propagating modes, leading to the delineation of the wave-to-pulse phase boundary (white-dashed boundary in Fig. 2G).
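In practice, the two order parameters described above can be turned into a simple classifier of the simulation outcome. A possible implementation is sketched below; the factor used to decide whether activity has spread beyond the initially activated junction is an assumption of this sketch.

```python
import numpy as np

def classify(active_fraction, n_junctions):
    """Classify a run from the time series of the fraction of active junctions.

    'quiescent': activity never spreads beyond the single initially activated junction,
    'pulse'    : activity spreads but has died out by the end of the run,
    'wave'     : activity spreads and is still present at the final time.
    The factor of 2/n_junctions used to detect spreading is an assumption of this sketch."""
    f = np.asarray(active_fraction, dtype=float)
    spread_level = 2.0 / n_junctions
    if f.max() < spread_level:
        return "quiescent"
    return "wave" if f[-1] >= 1.0 / n_junctions else "pulse"

# Example with an assumed 780 junctions: a transient burst that dies out is a pulse.
print(classify([1 / 780, 0.05, 0.20, 0.10, 0.0, 0.0], n_junctions=780))   # -> 'pulse'
```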
### Effective theory predicts emergent dynamic states
To analytically predict the emergence of excitable pulses, quiescent states, and oscillatory patterns as functions of the strain relaxation rate \(k_{L}\) and activation period \(\tau\), we developed an effective one-dimensional theory of coupled excitable junctions. Our minimal model consists of three interconnected junctions with fixed boundaries, as shown in Fig. 3A. Each unit comprises an elastic component with a spring constant \(k\) and natural length \(L\) (representing the one-dimensional version of cell elasticity), connected in parallel with a dashpot of friction coefficient \(\mu\), and an active element with contractility \(\Gamma_{1,2}\). If the junction is inactive or refractory then \(\Gamma_{1,2}=0\), and \(\Gamma_{1,2}=\Gamma_{0}\) if the junction is active. These active and elastic elements are connected in parallel with a tensile element with line tension \(\Lambda_{1,2}\). The central junction has a length \(l_{1}(t)\), while the outer junctions have lengths \(l_{2}(t)\). The fixed boundary conditions ensure that \(l_{1}(t)+2l_{2}(t)=3L\).
The system is initialized in a mechanical equilibrium state, and we perturb it by activating the central junction (\(\Gamma_{1}=\Gamma_{0}\), \(\Gamma_{2}=0\)). We then let the system evolve following the equations of motion: \(\mu\mathrm{d}l_{i}/\mathrm{d}t=-\partial H_{\text{eff}}/\partial l_{i}\), Eq. (3), and the rules governing the junction states. The effective Hamiltonian governing the system is defined as:
\[H_{\text{eff}}= \frac{k}{2}(l_{1}-L)^{2}+k(l_{2}-L)^{2}+\Lambda_{1}l_{1}+2 \Lambda_{2}l_{2}+\frac{\Gamma_{1}}{2}l_{1}^{2}+\Gamma_{2}l_{2}^{2}. \tag{4}\]
We initially considered the scenario of symmetric junctions, wherein \(\Lambda_{1}=\Lambda_{2}\). This corresponds to an ordered tissue where junction tensions and lengths are uniform. To explore the behavior of the system, we numerically solve the system of equations for different values of \(\tau\) and \(k_{L}\), from \(t=0\) to \(t=2\tau\). The simulation outcomes can be categorized as follows: i) If the outer junctions remain inactive throughout the simulation, it is classified as a case of _No propagation_; ii) If the outer junctions become active but the central junction does not reactivate, we observe a single _Pulse_; and iii) finally, if the outer junctions become active and the central junction re-activates, it falls into the category of _Re-activation_. Fig. 3B shows the phase diagram of the model in the \(\tau\)-\(k_{L}\) phase space, showing the emergence of the three outcomes described above. A comparison with the phase diagram for the ordered tissue (Fig. 2G) reveals that the effective model successfully captures both key features of the vertex model: a critical value of \(k_{L}\) for propagation of activity, which diminishes for small \(\tau\), and a small region of reactivation corresponding to wave-like states.
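To make the procedure explicit, the sketch below integrates the symmetric three-junction model numerically in Python (the analysis reported here was carried out in Mathematica; see Methods). The coupling between junctions is realized by evolving the two interior node positions, which is one way of implementing the overdamped dynamics with fixed outer boundaries; this choice and all parameter values are assumptions of the sketch.

```python
import numpy as np

# Illustrative integration of the three-junction effective model, Eq. (4), for the
# symmetric case Lambda_1 = Lambda_2, with fixed walls at 0 and 3L.
k, L, mu, gamma0, lam, eps_on = 1.0, 1.0, 1.0, 1.0, 0.1, 0.1   # assumed values

def tension(l, g):
    """dH_eff/dl for a single junction: elastic + line tension + active contribution."""
    return k * (l - L) + lam + g * l

def simulate(tau, k_L, dt=1e-3):
    """Evolve lengths, rest lengths (Eq. 3) and junction states from t=0 to t=2*tau and
    classify the run as 'no propagation', 'pulse' or 're-activation'."""
    x = np.array([L, 2.0 * L])                          # interior node positions
    l = np.array([x[0], x[1] - x[0], 3.0 * L - x[1]])   # [left, central, right] lengths
    l0 = l.copy()                                        # rest lengths: zero initial strain
    state = ["inactive", "active", "inactive"]           # central junction activated by hand
    clock = np.zeros(3)
    activations = np.array([0, 1, 0])
    for _ in range(int(2.0 * tau / dt)):
        g = np.array([gamma0 if s == "active" else 0.0 for s in state])
        T = tension(l, g)
        x = x + dt / mu * np.array([T[1] - T[0], T[2] - T[1]])   # overdamped node motion
        l = np.array([x[0], x[1] - x[0], 3.0 * L - x[1]])
        l0 += -k_L * (l0 - l) * dt                               # strain relaxation, Eq. (3)
        strain = (l - l0) / l0
        clock += dt
        for i in range(3):
            if state[i] == "inactive" and strain[i] > eps_on:
                state[i], clock[i] = "active", 0.0
                activations[i] += 1
            elif state[i] == "active" and clock[i] >= tau:
                state[i], clock[i] = "refractory", 0.0
            elif state[i] == "refractory" and clock[i] >= tau:
                state[i], clock[i] = "inactive", 0.0
    if activations[0] + activations[2] == 0:
        return "no propagation"
    return "re-activation" if activations[1] > 1 else "pulse"

print(simulate(tau=1.0, k_L=0.2))
```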
We then used the one-dimensional effective model (Fig. 3A) to investigate the role of disorder in the propagation of activity. Disorder was introduced by removing the condition of homogeneous line tension, letting \(\Lambda_{1}\neq\Lambda_{2}\). First, we analyzed the case \(\Lambda_{1}<\Lambda_{2}\). Due to identical mechanical properties of each junction before activation (other than
Figure 3: Effective model of coupled excitable junctions. (A) Schematic of a minimal three-junction model. (B) Phase diagram for pulse propagation and reactivation when the junctions have equal tension. (C-D) Phase diagrams for pulse propagation and reactivation for asymmetric junction tensions, with \((\Lambda_{1},\Lambda_{2})=(0.1,0.2)\) (C), and \((\Lambda_{1},\Lambda_{2})=(0.4,0.1)\) (D).
tension values), the initial equilibrium state featured a central long junction (\(l_{1}>L\)) flanked by two shorter junctions (\(l_{2}<L\)) (Fig. 3C). By solving the system of equations numerically, we found that the larger junction (\(l_{1}>L\)) could propagate activity over a broader region in the \((\tau,k_{L})\) parameter space, while the re-activation region is substantially diminished. This is because larger junctions produce greater active contractile forces, while shorter neighboring junctions require a lower extension to achieve the strain threshold for activation \(\epsilon_{\text{on}}\). Conversely, when \(\Lambda_{1}>\Lambda_{2}\), the opposite behavior was observed. Our effective model thus reveals two main effects of the geometrical heterogeneity (or disorder) on cellular response to active contractility. Large junctions promote propagation of activity, while shorter junctions facilitate re-activation, leading to oscillatory patterns.
### Tissue disorder promotes self-sustained wave propagation
Motivated by the predictions of the effective model on the impact of geometric heterogeneity, we now investigate the effect of disorder in cell packing geometry on activity propagation in two-dimensional tissue simulations. To this end, we constructed a tissue comprising 208 cells, within a rectangular box with dimensions approximately equal to \(L_{x}\sim 14\sqrt{A_{0}}\) and \(L_{y}\sim 15\sqrt{A_{0}}\), subject to periodic boundary conditions. In these simulations, all mechanical properties at cell and junction levels are the same, with disorder restricted to geometric heterogeneity only. The initial state of the tissue corresponded to a state of mechanical equilibrium, characterized by varying junction lengths and polygon sidedness, as depicted in Fig. 4A.
As previously, we activated a randomly chosen cell junction (see Fig. 4A, \(t=0.0\tau\)), and let the tissue evolve from \(t=0\) to \(t=20\), for different values of strain relaxation rate \(k_{L}\in(0,1.5)\) and activation period \(\tau\in(0.2,5.0)\). By measuring the maximum active junction fraction (Fig. 4B), we again observe that propagation occurs below a critical \(k_{L}\), for sufficiently large \(\tau\). Unlike in ordered tissues (Fig. 2G), wave states are now possible for a wide range of \(\tau\) values, and a propagating solitary pulse only occurs in particular cases with exceedingly large activation periods. Consistent with the predictions of the one-dimensional effective model, we find that the presence of short junctions in disordered tissues promotes junction reactivation, thereby facilitating the emergence of self-sustaining wave-like states. As an illustrative example, Fig. 4A (corresponding to the white dot in Fig. 4B) represents a wave-like state arising in a tissue with parameters \((\tau=1.6,k_L=0.7)\) (Movie 3), which led to pulse propagation in the ordered tissue (Fig. 2E). Interestingly, the junction that is reactivated by the end of an oscillatory cycle need not necessarily be the same one initially chosen for activation. This introduces a non-local effect of disorder in promoting sustained wave-like patterns. We find that the critical \(k_{L}\) required for wave propagation increases with the length of the initially activated junction (Fig. 4C), as predicted by the one-dimensional effective model.
### Controlling the geometry of wavefronts
Our theory and simulations have elucidated that the propagation of activity at the tissue scale is governed by two distinct characteristic timescales of the system: the activation period \((\tau)\) (taken to be equal to the refractory time) and the rest length remodeling timescale \((k_{L}^{-1})\). We now investigate the impact of varying the activation (\(\tau_{\text{act}}\)) and refractory periods (\(\tau_{\text{ref}}\)) on the resulting dynamic patterns that emerge within the tissue. In particular, we show that the ratio of activation to refractory period controls the geometry of wave patterns.
As in previous simulations, we initiated the simulation by activating a single junction within the tissue, while the remaining junctions remained in the inactive state (see Fig.5A). We find that the ratio of the activation period to the refractory period, \(\Delta=\tau_{\text{act}}/\tau_{\text{ref}}\), controls the wavelength of the propagat
Figure 4: Sustained propagation of activity waves in disordered tissues. (A) Snapshots of activity waves traveling across a disordered tissue, using \((\tau,k_{L})=(1.6,0.7)\) (white dot in (B)). Colored segments represent inactive (gray), active (red), and refractory junctions. (B) Phase diagram for the emergence of waves, pulses and quiescent states, varying strain relaxation rate \(k_{L}\) and activation period \(\tau\). (C) Scatter plot of the critical rest length remodeling rate \(k_{L}\) for a fixed \(\tau=3\), as a function of the initially activated junction length, for 100 different chosen junctions. The critical \(k_{L}\) is defined as the maximum \(k_{L}\) that allows propagating states. The red vertical line represents the junction length in the ordered hexagonal tissue. The blue horizontal line represents the critical \(k_{L}\) in the ordered tissue.
ing waves (see Figs. 5C-D). Specifically, at higher values of \(\Delta\), propagating waves fail to materialize, and instead, we observe the presence of a solitary traveling pulse of activity (Fig. 5E).
Inspired by self-sustained spiral patterns observed in excitable systems [42; 43; 16; 44], we inquired whether we could design an initial state that would break the circular symmetry of the emergent wavefronts. Previous theoretical work using a three-state (inactive, active, refractory) cellular automaton model has shown that spiral waves can emerge from an initial state consisting of a layer of excited cells and an adjacent layer of refractory cells [45; 46; 47]. We therefore initialized our simulations by activating a partial row of junctions, with the neighbors underneath active junctions initialized in a refractory state (Fig. 5B). This initial condition leads to elliptical wavefronts for the case of \(\Delta=1\) (Fig. 5G, Movie 4). Similar to the case of single junction activation, smaller values for \(\Delta\) decrease the wavelength of the traveling wavefront (Fig. 5F). For higher values of \(\Delta\) we observe the emergence of a pair of self-sustaining spirals (Fig. 5H, Movie 5). This can be explained as follows. The initial condition of a partial row of active junctions followed by a layer of refractory junctions instigates two distinct pattern formations. Firstly, it leads to the emergence of a propagating wavefront. Secondly, it initiates the formation of two open ends within the wavefront, resulting initially in the addition of an excited element and subsequently in the development of a curved wave segment. This curved segment then propagates outward, adopting a spiral shape. Thus, each open end leads to the emergence of a spiral. Note that due to the periodic boundary conditions in our model, the smallest number of open ends that can be created is two. These results show that by designing appropriate initial states and disparate timescales for junction activation and refractory periods, the geometry and wavelength of the emergent wavefronts can be precisely controlled in our model.
## IV Conclusions
In this manuscript, we have introduced a minimal model designed to elucidate the behavior of mechanically-excitable tissues and investigated the role of viscous dissipation and geometrical disorder on tissue-level pattern formation. Traditional approaches for studying dynamic patterns, such as reaction-diffusion equations and cellular automata models, are constrained in their ability to account for spatial deformation in their standard formulations. To address this limitation, we have employed a vertex-based model incorporating agent-based rules at the junction level, representing coarse-grained biochemical reactions that connect junction deformations with the activation of contractility.
Prior work has examined pattern formation in excitable tissues, considering various triggering factors, including cell-level tension [30], cell size [31], and active ERK concentration [4]. However, none of these models have considered the effects of viscous dissipation or explored the potential roles of geometrical disorder in pattern formation dynamics. Within our model, which encompasses three distinct junction-level states (active, inactive, and refractory) and considers mechanical strain as the triggering quantity, we observed the emergence of three tissue-level states: quiescent (no propagation), traveling waves, and traveling pulses. These states arise from the interplay between the characteristic timescales associated with junction activation, inactivation, and refractory states.
To explain these emergent dynamics, we have developed an effective junction-scale theory that qualitatively captures the observed behaviors in the vertex model. Our model also provides insights into the impact of geometrical tissue disorder on tissue-level activity states, demonstrating that large junctions promote propagation, while small junctions facilitate re-activation. These predictions have been corroborated through two-dimensional vertex-like simulations, although experimental validation in epithelial tissues remains an avenue for future exploration.
Furthermore, we have demonstrated that the geometry of the emerging traveling wavefronts is influenced by the initial state of junctions and the ratio between the durations of junction active states and refractory states. This intricate interplay results in variations in wavelengths, transitions from waves to pulses, formation of elliptic wavefronts, and pairs of
Figure 5: Emergent dynamical patterns arising in an ordered tissue composed of 2759 cells, varying \(\tau_{\mathrm{ref}}/\tau_{\mathrm{act}}\). Colored segments represent inactive (gray), active (red), and refractory junctions. (A-B) Zoomed in configuration of two different initial conditions for junction states: (A) single junction activation, and (B) partial row activation with neighbours underneath in refractory states. (C-D-E) Collective dynamics arising from the initial condition in (A), for different values of \(\tau_{\mathrm{ref}}\), with \(\tau_{\mathrm{act}}=0.6\). (F-G-H) Collective dynamical states arising from the initial condition (B), for different values of \(\tau_{\mathrm{ref}}\), with \(\tau_{\mathrm{act}}=1\). (C-D-F-G-H) represent self-sustained waves, while (E) represents a solitary traveling pulse. All these simulations consider \(k_{L}=0.5\).
self-sustained spiral wavefronts. The predicted patterns arising from specific initial junction states could potentially be experimentally tested using optogenetic tools to spatially activate myosin contractility [48], ERK [49], and FRET-imaging to visualize the resulting patterns [4].
## Methods
The custom simulation code for the vertex model was implemented using Python 3. Specifically, the simulation code for an ordered tissue featuring traveling wave states can be accessed on GitHub ([https://github.com/BanerjeeLab](https://github.com/BanerjeeLab)). In implementing T1 transitions, a similar approach to that described in Ref. [50] was adopted. This involves enforcing the creation and instantaneous resolution of a 4-fold vertex whenever a junction's length becomes smaller than \(l_{\text{T1}}\). A newly created junction is set to have \(l=l^{0}=1.5\,l_{\text{T1}}\). The simulations encompassed tissues of varying sizes, as specified in the respective figure captions. Default model parameters used in the simulations are listed in Table 4. The numerical analysis of the one-dimensional effective model was done in Mathematica 12. The code for this analysis is also available on GitHub ([https://github.com/BanerjeeLab](https://github.com/BanerjeeLab)).
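For completeness, a minimal sketch of the T1 rule described above is given below. It only illustrates the length-threshold trigger and the geometric placement of the new junction; the accompanying update of the cell-vertex connectivity is omitted, and the threshold value is an assumption of this sketch.

```python
import numpy as np

L_T1 = 0.05   # threshold junction length triggering a T1 transition (assumed value)

def needs_t1(r_i, r_j):
    """A junction is flagged for a T1 transition when its length drops below l_T1."""
    return np.linalg.norm(r_i - r_j) < L_T1

def t1_geometry(r_i, r_j):
    """Geometric part of a T1 transition: rotate the short junction by 90 degrees about
    its midpoint and assign the new length (and rest length) l = l0 = 1.5*l_T1.
    The accompanying reconnection of cell-vertex adjacency is omitted in this sketch."""
    mid = 0.5 * (r_i + r_j)
    d = r_j - r_i
    n = np.array([-d[1], d[0]])                            # edge direction rotated by 90 degrees
    norm = np.linalg.norm(n)
    n = n / norm if norm > 0 else np.array([1.0, 0.0])     # arbitrary direction if degenerate
    half = 0.5 * 1.5 * L_T1
    return mid - half * n, mid + half * n, 1.5 * L_T1      # new vertex positions, rest length

# Example usage on a junction that has shrunk below the threshold.
ri, rj = np.array([0.0, 0.0]), np.array([0.03, 0.0])
if needs_t1(ri, rj):
    ri_new, rj_new, l0_new = t1_geometry(ri, rj)
```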
|
2303.04868 | Inertial Frame Dragging and Relative Rotation of ZAMOs in Axistationary
Asymptotically Flat Spacetimes | In axistationary asymptotically flat spacetimes zero angular momentum
observers (ZAMOs) define an absolute standard of non--rotation locally, as can
be verified by the absence of any Sagnac effect for these observers.
Nevertheless, we argue that on a global scale the only physically meaningful
concept is that of relative rotation. The argument is substantiated by solving
Einstein's equations for an approximate thin shell model where we keep a degree
of freedom by relaxing the natural assumption of vanishing rotation at
asymptotic infinity at the outset of the analysis. The solution reveals that
Einstein's equations only determine differences in the rotation rate of ZAMOs,
thereby establishing the concept of relative rotation globally. The
interpretation of rotation as relative in a global context is inherently linked
to the freedom to transform between coordinate systems rotating relative to
each other, implying that an arbitrary ZAMO located at any radius may claim to
be the one who is non--rotating on a global scale and that the notion of an
asymptotic Lorentz frame relative to which one may measure absolute rotation is
devoid of any meaning. The concept of rotation in Kerr spacetime is then
briefly discussed in the context of this interpretation. | S. Braeck | 2023-03-08T20:20:42Z | http://arxiv.org/abs/2303.04868v1 | Inertial Frame Dragging and Relative Rotation of ZAMOs in Axistationary Asymptotically Flat Spacetimes
###### Abstract
In axistationary asymptotically flat spacetimes zero angular momentum observers (ZAMOs) define an absolute standard of non-rotation locally, as can be verified by the absence of any Sagnac effect for these observers. Nevertheless, we argue that on a global scale the only physically meaningful concept is that of relative rotation. The argument is substantiated by solving Einstein's equations for an approximate thin shell model where we keep a degree of freedom by relaxing the natural assumption of vanishing rotation at asymptotic infinity at the outset of the analysis. The solution reveals that Einstein's equations only determine differences in the rotation rate of ZAMOs, thereby establishing the concept of relative rotation globally. The interpretation of rotation as relative in a global context is inherently linked to the freedom to transform between coordinate systems rotating relative to each other, implying that an arbitrary ZAMO located at any radius may claim to be the one who is non-rotating on a global scale and that the notion of an asymptotic Lorentz frame relative to which one may measure absolute rotation is devoid of any meaning. The concept of rotation in Kerr spacetime is then briefly discussed in the context of this interpretation.
## I Introduction
The dragging of inertial frames, often called the Lense-Thirring effect, is now a well established prediction of Einstein's general theory of relativity whereby rotating matter due to its angular momentum drags test particles or observers with zero angular momentum (ZAMOs) [1] in a co-rotating direction and cause the spin axes of gyroscopes to precess. The effect of frame-dragging of orbits was first predicted by H. Thirring [2] (1917) and J. Lense and H. Thirring [3] (1918) while the closely related effect of (Schiff) frame-dragging of gyroscope axes around the Earth was calculated by L. I. Schiff [4] in 1960 and much more recently confirmed experimentally by Gravity Probe B [5].
With the continuing advancement of experimental precision and sensitivity the effects of frame-dragging may constitute increasingly important aspects of experimental tests of general relativity as well as of other, alternative theories of gravity in a wide range of gravitating systems. Indeed, the feasibility of detecting frame-dragging and other gravitomagnetic effects in relation to systems such as, e.g., the planets in our solar system, the sun, supermassive black holes and even on the laboratory scale has been investigated in several fairly recent reports [6; 7; 8; 9; 10; 11; 12; 13; 14]. Moreover, a test for the Lense-Thirring effect was recently conducted even in the strong-field regime of double pulsars [15]. For an historical account of the Lense-Thirring effect, see e.g. Ref. [16]. For frequent misconceptions related to gravitomagnetic effects, see Ref. [17].
The seminal predictions by Thirring, Lense and Schiff were, however, based on approximations of slowly rotating and weak gravitational sources of matter. D.R. Brill and J.M. Cohen [18] later considered an idealized model of a slowly rotating, infinitely thin shell of matter and obtained a _strong-field_ solution to the dragging rate of inertial frames by treating the geometry due to the rotating shell as a first order perturbation in the shell's angular velocity \(\omega_{\rm S}\) of the static Schwarzschild geometry. Then, evaluated to first order in \(\omega_{\rm S}\), the thin shell is spherically symmetric and spacetime in the interior of the shell is that of the flat Minkowski spacetime. Thus, Brill and Cohen found that the angular dragging velocity \(\Omega\) of the inertial frames in the exterior of the shell steadily increases as one approaches the shell radius until it reaches a maximal value equal to a _constant_ angular dragging velocity in the interior of the shell. In particular the constant angular dragging velocity in the interior of the shell approaches the angular velocity of the shell itself as the shell mass increases and the Schwarzschild radius approaches the shell radius. Hence they concluded that, in this limit, the inertial properties of space in the interior of the shell do not depend on the inertial frames infinitely far away from the shell, but are completely determined by the shell itself. This effect is often called _perfect_ or _exact_ dragging of inertial frames. If one considers, as Brill and Cohen did, the shell of matter as an idealized model of the distant matter in our universe, then one may establish a connection between the notion of _perfect_ inertial dragging and the origin of inertia and Mach's principle. Expressed in Brill and Cohen's own words: "In this sense our result explains why the "fixed stars" are indeed fixed in our inertial frame, and in this sense the result is consistent with Mach's principle".
Mach's principle, essentially the idea that notions of acceleration and rotation relative to an empirically unverifiable absolute space or element are meaningless but that these quantities can be meaningfully defined only with respect to an average motion of the total matter of the universe, and its connection with frame-dragging has been discussed in great detail by several authors, see e.g. [1; 19; 20; 21; 22; 23; 24; 25; 26; 27] and references therein. For a somewhat different viewpoint on incorporating Mach's principle in general relativity, see also [28].
In a homogeneous and isotropic spacetime governed by general relativity there is perfect inertial dragging relative to the cosmic matter [27]. More generally, using cosmological perturbation theory C. Schmid [23; 24] has convincingly demonstrated perfect dragging and the validity of Mach's principle within cosmological general relativity.
However, even if Mach's principle is demonstrably valid in a general-relativistic cosmological context, many important solutions to Einstein's equations evidently do not share this property. In particular, this is true for the asymptotically empty and flat solutions such as the Schwarzschild and Kerr solutions or Brill and Cohens approximate shell model, which all approach flat Minkowski spacetime in regions far away from the localized mass distribution. These solutions are completely devoid of any cosmic matter at great distances from the localized mass. In the far-away regions the physical mechanism of frame dragging induced by the total matter present in these spacetimes is certainly far too weak to account for the perfect dragging required for the inertial frames to be _fully_ determined by the motion of the present matter. Invoking fictitious cosmic matter not included in Einstein's equations as external causes outside of the theory, as an explanation for the origin of inertia, would render general relativity as a gravitational theory of spacetime and matter incomplete. (This should not, however, be confused with the seemingly remarkable fact that the general-relativistic predictions of frame-dragging in asymptotically flat solutions _do_ match the experimental measurements made relative to the fixed stars, which could be explained by somehow merging the metric of an asymptotically flat solution with the metric determined by the cosmic matter far away).
Asymptotic Minkowski spacetimes thus pose a challenge with regards to the interpretation of the origin of inertia in general relativity. This rather intricate difficulty was already recognized by Einstein as early as 1917 [29], stating "From what has now been said it will be seen that I have not succeeded in formulating boundary conditions for spatial infinity. Nevertheless, there is still a possible way out, without resigning... For if it were possible to regard the universe as a continuum which is _finite (closed) with respect to its spatial dimensions_, we should have no need at all of any such boundary conditions." The argument of incorporating Mach's principle into general relativity through imposing restrictions on the topology of spacetime seems to have been maintained by Einstein also in his later expositions of general relativity [30] and has been expanded upon in [20], Chapter 5, and in [21].
It is nevertheless quite clear that somehow the local inertial frames in part are determined through the imposed boundary conditions at infinity in asymptotically flat spacetimes. Then, it might be natural at first to assume that this influence of the boundary conditions can be traced directly to the unique properties of the _globally_ empty and nondynamical Minkowski spacetime for which there is a well-defined absolute state of non-rotation. One might, therefore, be tempted to further infer that this nondynamical property, unaffected by the matter content of the spacetime, rather seamlessly will be transferred to the asymptotic Minkowski spacetimes as boundary conditions "at infinity". In other words, one might draw the conclusion that the ZAMOs located "at infinity" and "at rest" in asymptotic Minkowski spacetimes correspondingly define an absolute standard of non-rotation even _globally_, relative to which the orbital angular velocity of all other ZAMOs in the spacetime is measured.
However, we note here that even if a spacetime asymptotically _approaches_ Minkowski spacetime, it is nowhere _exactly_ flat and, in a global analysis of rotational motion, this makes the line of reasoning above questionable. Indeed, in contrast to the conclusion drawn above, our analysis presented below indicate that only _differences_ in angular velocities between ZAMOs have physical significance. This implies that we are completely free to choose any convenient numerical reference value for the angular velocity of a ZAMO at an arbitrarily chosen radius. Only angular velocities _relative_ to this arbitrary reference value are physically meaningful. As a consequence, the absolute numerical value of the angular velocity of ZAMOs located at infinity is irrelevant, and the notion of an absolutely non-rotating asymptotic Lorentz frame is devoid of any meaning.
Undoubtedly, in most circumstances the most practical choice for the reference value in asymptotically flat spacetimes will be that of vanishing rotation as one approaches infinity, but fundamentally this only means that rotation of ZAMOs is measured relative to a conveniently _chosen_ zero point infinitely far away (as will be clarified below). Similarly, the importance of accounting for relative rotation implicitly appears in connection with the first law of thermodynamics applied to Kerr-anti-de Sitter spacetimes [31; 32]. In Boyer-Lindquist type coordinates for these spacetimes, the ZAMOs rotate with an angular velocity equal to the angular velocity of the black hole at the horizon, but they also turn out to rotate with a non-zero angular velocity at asymptotic infinity in contrast to the asymptotically flat case. In order for the first law of black hole thermodynamics to be satisfied in this case one must use the angular velocity of the black hole measured relative to a frame that is "non-rotating" at infinity, i.e., it is the relative rotation between infinity and the black hole that enters the first law. Leaving quantum effects aside, however, the concepts of relative and absolute rotation can be discussed within general relativity independently of the laws of black hole thermodynamics, which will be the topic of interest in this work.
## II Inertial frame dragging in Brill and Cohen's slowly rotating shell model
Our purpose now is to derive an expression for the angular velocity of ZAMOs in Brill and Cohen's rotating shell model [18], but where our choice of reference point for the angular velocity is completely arbitrary and not necessarily equal to the asymptotic boundary condition chosen at the outset in Brill and Cohen's original work.
In their investigation of inertial frame dragging, Brill and Cohen considered an infinitely thin shell rotating sufficiently slowly that, to first order in the shell's angular velocity \(\omega_{s}\), the shell may be considered spherically symmetric in shape [33]. The resulting spacetime may then be treated as a small perturbation to the spherically symmetric Schwarzschild spacetime. In isotropic coordinates, the line element for the spacetime outside and inside the shell can then be written as
\[ds^{2}=V^{2}dt^{2}-\psi^{4}(dr^{2}+\ r^{2}(d\vartheta^{2}+\sin^{2}\vartheta\left( d\phi-\Omega\left(r\right)dt\right)^{2}))\, \tag{1}\]
where
\[V\left(r\right)=\left\{\begin{array}{c}\left(r-r_{S}\right)/\left(r+r_{S} \right)\ \ \ \text{for}\ r>R\\ V_{0}\ \ \ \text{for}\ r<R\end{array}\right.\, \tag{2}\]
and
\[\psi\left(r\right)=\left\{\begin{array}{c}1+r_{S}/r\ \ \ \text{for}\ r>R\\ \psi_{0}\ \ \ \text{for}\ r<R\end{array}\right.. \tag{3}\]
Here \(\Omega\left(r\right)\) is the angular velocity of ZAMOs, \(R\) denotes the radius of the shell, \(r_{S}\) denotes the shell's Schwarzschild radius, and \(V_{0}\equiv\left(R-r_{S}\right)/\left(R+r_{S}\right)\) and \(\psi_{0}\equiv 1+r_{S}/R\) are constants which make the components of the metric tensor continuous across the shell. Clearly, spacetime in the interior of the shell is then manifestly flat Minkowski spacetime expressed in conveniently scaled coordinates.
If, as Brill and Cohen did in their original work, we now impose the asymptotic boundary condition \(\lim\limits_{r\rightarrow\infty}\Omega\left(r\right)=0\) at the outset, then the line element above approaches that of Minkowski spacetime in standard "non-rotating" spherical coordinates. However, in so doing one may in the final result end up with the wrong impression that the inertial frames at infinity somehow, without any choice of freedom, single out a global standard of non-rotation. For this reason we shall not impose any boundary conditions on \(\Omega\left(r\right)\) at this stage in the derivation, but instead keep this degree of freedom temporarily until the boundary condition naturally is to be determined at a later stage.
We may now use Einstein's equations in combination with the line element given above in order to find the explicit expression for \(\Omega\left(r\right)\). A detailed step-by-step rederivation of Brill and Cohen's original result with the restriction \(\lim\limits_{r\rightarrow\infty}\Omega\left(r\right)=0\) at the outset has already been given in Ref. [27]. The derivation for the more general case with no such restriction on \(\Omega\left(r\right)\) at the outset is essentially identical to the one presented in Ref. [27]. Hence we shall not repeat the derivation in full detail here, but only give an outline of that derivation while we keep track of where the modifications to Einstein's equations, \(G_{\alpha\beta}=8\pi GT_{\alpha\beta}\), occur underway.
For the present purpose Einstein's equations are most easily solved by using the Cartan formalism [34]. In this context a useful set of orthonormal basis one-forms are given by
\[\omega^{0}=Vdt,\ \ \omega^{1}=\psi^{2}dr,\ \ \omega^{2}=r\psi^{2}d\vartheta,\ \ \omega^{3}=r\psi^{2}\sin\vartheta\left(d\phi-\Omega dt\right)\,. \tag{4}\]
From Cartan's structural equations we then find for the non-zero components of the Einstein tensor,
\[G^{00}=\frac{4r_{S}}{r^{2}\psi^{5}}\delta\left(r-R\right)\, \tag{5}\]
\[G^{22}=G^{33}=\frac{r_{S}}{2r\psi V}G^{00}\, \tag{6}\]
\[G^{03}=-\frac{\sin\vartheta}{2r^{3}\psi^{8}}\left(\frac{r^{4}\psi^{6}\Omega^{ \prime}}{V}\right)^{\prime}\,. \tag{7}\]
Here \(\delta\left(r\right)\) is the Dirac delta function, and \(V\) and \(\psi\) are given in Equations (2) and (3), respectively. For the diagonal components of the stress-energy tensor \(T^{\mu\nu}\), Einstein's field equations now immediately yield
\[\rho\equiv T^{00}=\frac{G^{00}}{8\pi}=\frac{r_{S}r^{3}}{2\pi\left(r+r_{S}\right)^{5 }}\delta\left(r-R\right)\, \tag{8}\]
\[T^{33}=T^{22}=\frac{G^{22}}{8\pi}=\frac{r_{S}}{2\left(r-r_{S}\right)}\rho. \tag{9}\]
Here \(\rho\) denotes the mass density of the shell in the rest frame of an element of the shell.
To proceed, we next consider the Einstein equation containing the nondiagonal components, \(G^{03}=8\pi GT^{03}\). Because spacetime is empty both in the interior (\(r<R\)) and exterior (\(r>R\)) of the shell, we have that \(T^{03}=0\) and this equation reduces to
\[\left(\frac{r^{4}\psi^{6}\Omega^{\prime}}{V}\right)^{\prime}=0\,,\quad r\neq R\,. \tag{10}\]
Thus, we find
\[\Omega^{\prime}_{\pm}=\frac{K_{\pm}V}{r^{4}\psi^{6}}. \tag{11}\]
In this expression \(K_{-}\) and \(K_{+}\) are constants of integration in the two regions \(r<R\) and \(r>R\), respectively. For \(r<R\), we have \(\psi=\psi_{0}\) and \(V=V_{0}\). Hence, we obtain
\[\Omega_{-}=-\frac{K_{-}V_{0}}{3r^{3}\psi_{0}^{6}}+\Omega_{B}\, \tag{12}\]
where \(\Omega_{B}\) is another constant of integration yet to be determined. In the shell's interior spacetime is assumed to be flat. We therefore require a regular solution as \(r\to 0\), from which it follows that \(K_{-}=0\) and \(\Omega_{-}=\Omega_{B}\). For \(r>R\) we may integrate equation (11) to give
\[\Omega_{+}\left(r\right)-\Omega_{+}\left(r_{Q}\right)=K_{+}\int_{r_{Q}}^{r} \frac{r-r_{S}}{r^{5}\psi\left(r\right)^{7}}dr=-\frac{K_{+}}{3}\left[\frac{1}{ r^{3}\psi\left(r\right)^{6}}-\frac{1}{r_{Q}^{3}\psi\left(r_{Q}\right)^{6}} \right]. \tag{13}\]
Here \(r_{Q}\) is an arbitrarily chosen reference radius. It is important to note here that Einstein's equation determines only the _difference_, \(\Omega_{+}\left(r\right)-\Omega_{+}\left(r_{Q}\right)\), in angular velocity between ZAMOs located at two different radii. Moreover, this difference is completely independent of the numerical value of \(\Omega_{+}\left(r_{Q}\right)\). Thus we are free to choose _any_ convenient reference value \(\Omega_{Q}\equiv\Omega_{+}\left(r_{Q}\right)\) at the reference point \(r_{Q}\). In essence, the choice of the reference point and reference value for the angular velocity here is analogous to the arbitrary choice of a reference point for potential energy in classical mechanics or to the arbitrary choice of a specific inertial frame for measuring velocities. In practice this always allows us to conveniently define a new angular velocity function \(\Omega_{\rm rel}\left(r\right)\equiv\Omega_{+}\left(r\right)-\Omega_{Q}\) which then describes the local rotation of inertial frames _relative_ to the _arbitrarily chosen_ local rotation \(\Omega_{Q}\) of inertial frames located at the arbitrarily chosen reference radius \(r_{Q}\). Equivalently, one may instead simply declare the angular velocity \(\Omega_{Q}\) of the ZAMOs located at the radius \(r_{Q}\) to be zero. The angular velocity \(\Omega_{+}\left(r\right)\) then describes the rotation rate of the ZAMOs _relative_ to the arbitrarily chosen zero rotation rate of ZAMOs located at \(r_{Q}\). Finally, this analysis makes it clear that the angular velocity of ZAMOs located at asymptotic infinity plays no particular role in determining a reference value for the angular velocity in this spacetime.
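This reference freedom is easily illustrated numerically. The short Python sketch below, which is purely illustrative and uses arbitrary parameter values, evaluates Eq. (13) for radii outside the shell with two different choices of reference radius and reference value, and verifies that the resulting profiles differ only by a constant offset.

```python
import numpy as np

r_S = 1.0                                   # Schwarzschild radius (arbitrary units)

def u(r):
    """The function 1/(r^3 psi(r)^6) = r^3/(r + r_S)^6 appearing in Eq. (13)."""
    return r**3 / (r + r_S)**6

def omega_exterior(r, K_plus, r_Q, omega_Q):
    """Eq. (13): Omega_+(r) for a given integration constant K_+ and an arbitrary
    reference value omega_Q assigned to the ZAMOs at the reference radius r_Q."""
    return omega_Q - (K_plus / 3.0) * (u(r) - u(r_Q))

r = np.linspace(3.0, 50.0, 200)             # radii outside the shell
K_plus = -2.0                               # arbitrary integration constant

# Two different, equally admissible reference choices:
omega_a = omega_exterior(r, K_plus, r_Q=10.0, omega_Q=0.0)
omega_b = omega_exterior(r, K_plus, r_Q=5.0, omega_Q=-0.3)

# The two profiles differ only by an overall constant: Einstein's equations fix
# differences in Omega between ZAMOs, not its absolute value.
assert np.allclose(np.ptp(omega_a - omega_b), 0.0)
```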
The constant \(K_{+}\) may now be determined by requiring the metric to be continuous across the shell, \(\Omega_{-}\left(R\right)=\Omega_{B}=\Omega_{+}\left(R\right)\), giving
\[K_{+}=-\frac{3\left(\Omega_{B}-\Omega_{Q}\right)}{\frac{1}{R^{3}\psi_{0}^{6}}-\frac{1}{r_{Q}^{3}\psi\left(r_{Q}\right)^{6}}}\,. \tag{14}\]
Thus, the angular velocity in the two regions can be expressed as
\[\Omega\left(r\right)=\left\{\begin{array}{ll}\Omega_{Q}+\frac{\left[\frac{1}{r^{3}\psi\left(r\right)^{6}}-\frac{1}{r_{Q}^{3}\psi\left(r_{Q}\right)^{6}}\right]\left(\Omega_{B}-\Omega_{Q}\right)}{\frac{1}{R^{3}\psi_{0}^{6}}-\frac{1}{r_{Q}^{3}\psi\left(r_{Q}\right)^{6}}}&\text{for }r>R\\ \Omega_{B}&\text{for }r<R\end{array}\right.. \tag{15}\]
Combining this expression with equation (7), the nondiagonal component of the Einstein tensor can be written as
\[G^{03}=\frac{3\left(\Omega_{B}-\Omega_{Q}\right)\sin\vartheta}{2\psi_{0}^{2}\left[1-\left(\frac{r_{Q}(R+r_{S})^{2}}{R(r_{Q}+r_{S})^{2}}\right)^{3}\right]}\delta\left(r-R\right). \tag{16}\]
Our next task is to determine the constant \(\Omega_{B}\) entering the expression above. This we may accomplish by once again integrating Einstein's equation, \(G^{03}=8\pi GT^{03}\), but this time over a region crossing the shell radius \(R\). Accordingly, we must first consider the stress-energy tensor of the shell. From the requirement that the momentum densities \(T^{i0}\) must vanish in the rest frames of the matter comprising the shell, Brill and Cohen [18] deduced that this stress-energy tensor should have the form
\[T^{\mu\nu}=\rho u^{\mu}u^{\nu}+\sum_{i,j=1}^{3}t^{ij}v^{\mu}_{(i)}v^{\nu}_{j}\, \tag{17}\]
where as before \(\rho\) denotes the mass density in the rest frame of the shell, \(u^{\mu}\) are the components of the four-velocity of a given element of the shell and \(v^{\mu}_{(i)}\) are the components of three spatial orthonormal vectors spanning the hypersurface orthogonal to \(u^{\mu}\). We shall here also assume that this form of the stress-energy tensor is adequate to first order in angular velocities.
We now proceed to find the components \(T^{\mu\nu}\) of the shell. Let each element of the shell rotate with a given angular velocity \(d\phi/dt=\omega_{s}\) in the isotropic coordinates. Using that \(dr=d\vartheta=0\) for the element in the line element (1), the components of the four-velocity in the coordinate basis are calculated as
\[\widetilde{u}^{0}=\frac{dt}{d\tau}=\frac{1}{V_{0}}\left(1-\sigma^{2}\right)^{ -1/2}\, \tag{18}\]
\[\widetilde{u}^{1}=\widetilde{u}^{2}=0\, \tag{19}\]
\[\widetilde{u}^{3}=\omega_{s}\widetilde{u}^{0}\, \tag{20}\]
where we have introduced the quantity
\[\sigma=\frac{R\psi_{0}^{2}\sin\vartheta\left(\omega_{s}-\Omega_{B}\right)}{V _{0}}. \tag{21}\]
From the relations (4) between the orthonormal basis and the coordinate basis, the components of the four-velocity in the orthonormal basis are obtained as
\[u^{0}=\left(1-\sigma^{2}\right)^{-1/2}\, \tag{22}\]
\[u^{1}=u^{2}=0\, \tag{23}\]
\[u^{3}=\sigma\left(1-\sigma^{2}\right)^{-1/2}. \tag{24}\]
By choosing the components of the three spatial vectors in the orthonormal basis as
\[v^{\mu}_{(1)}=\left(0,\ 1,\ 0,\ 0\right)\, \tag{25}\]
\[v^{\mu}_{(2)}=\left(0,\ 0,\ 1,\ 0\right)\, \tag{26}\]
\[v^{\mu}_{(3)}=\left(\sigma,\ 0,\ 0,\ 1\right)\left(1-\sigma^{2}\right)^{-1/2}\, \tag{27}\]
we ensure that they have unit lengths and are orthogonal to each other, as well as being orthogonal to the four-velocity. Using these expressions for \(u^{\mu}\) and \(v^{\mu}_{(i)}\) in Equation (17) in combination with the results already obtained in (8) and (9), the components \(T^{\mu\nu}\) are, to first order in \(\omega_{s}-\Omega_{B}\), calculated to be
\[T^{00}=\rho\, \tag{28}\]
\[T^{22}=\ \frac{\rho r_{S}}{2\left(R-r_{S}\right)}=\ t^{22}, \tag{29}\]
\[T^{33}=\ T^{22}=\ t^{33}, \tag{30}\]
\[T^{03}=\ \left(\rho+t^{33}\right)\sigma=\ \frac{\rho\sigma\left(2R-r_{S}\right)} {2\left(R-r_{S}\right)}. \tag{31}\]
If we now integrate Einstein's field equation, \(G^{03}=8\pi T^{03}\), across the shell's radius \(R\) we obtain the equation
\[\frac{3\left(\Omega_{B}-\Omega_{Q}\right)R^{2}\sin\vartheta}{2\left(R+r_{S} \right)^{2}\left[1-\left(\frac{r_{Q}\left(R+r_{S}\right)^{2}}{R\left(r_{Q}+r_ {S}\right)^{2}}\right)^{3}\right]}=\frac{2r_{S}\sin\vartheta\left(2R-r_{S} \right)R^{2}\left(\omega_{s}-\Omega_{B}\right)}{\left(R+r_{S}\right)^{2}\left( R-r_{S}\right)^{2}}\, \tag{32}\]
which may be readily solved to give
\[\Omega_{B}=\frac{\omega_{s}+\frac{3\left(R-r_{S}\right)^{2}}{4r_{S}\left(2R- r_{S}\right)\left[1-\left(\frac{r_{Q}\left(R+r_{S}\right)^{2}}{R\left(r_{Q}+r_{S} \right)^{2}}\right)^{3}\right]}\Omega_{Q}}{1+\frac{3\left(R-r_{S}\right)^{2}} {4r_{S}\left(2R-r_{S}\right)\left[1-\left(\frac{r_{Q}\left(R+r_{S}\right)^{2 }}{R\left(r_{Q}+r_{S}\right)^{2}}\right)^{3}\right]}}. \tag{33}\]
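As a consistency check, the solution (33) may be verified symbolically. The following sketch, which is purely illustrative and uses the computer algebra system SymPy, confirms that it satisfies Eq. (32).

```python
import sympy as sp

# Symbolic check (illustrative) that Eq. (33) satisfies Eq. (32).
R, r_S, r_Q = sp.symbols('R r_S r_Q', positive=True)
omega_s, Omega_Q, theta = sp.symbols('omega_s Omega_Q theta', real=True)

bracket = 1 - (r_Q * (R + r_S)**2 / (R * (r_Q + r_S)**2))**3
C = 3 * (R - r_S)**2 / (4 * r_S * (2*R - r_S) * bracket)
Omega_B = (omega_s + C * Omega_Q) / (1 + C)                      # Eq. (33)

lhs = 3 * (Omega_B - Omega_Q) * R**2 * sp.sin(theta) / (2 * (R + r_S)**2 * bracket)
rhs = 2 * r_S * sp.sin(theta) * (2*R - r_S) * R**2 * (omega_s - Omega_B) \
      / ((R + r_S)**2 * (R - r_S)**2)

print(sp.simplify(lhs - rhs))   # -> 0, i.e. Eq. (33) solves Eq. (32)
```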
We now substitute this result for \(\Omega_{B}\) in Equation (15) to obtain our final result for the angular velocity of the inertial frames as
\[\Omega\left(r\right)=\left\{\begin{array}{ll}\Omega_{Q}+\frac{g\left(r,r_{Q}\right)\left(\omega_{s}-\Omega_{Q}\right)}{g\left(R,r_{Q}\right)\left(1+\frac{3\left(R-r_{S}\right)^{2}}{4r_{S}\left(2R-r_{S}\right)\,h\left(R,r_{Q}\right)}\right)}&\text{for }r>R\\ \Omega_{Q}+\frac{\omega_{s}-\Omega_{Q}}{1+\frac{3\left(R-r_{S}\right)^{2}}{4r_{S}\left(2R-r_{S}\right)\,h\left(R,r_{Q}\right)}}&\text{for }r<R\end{array}\right.\,, \tag{34}\]
where the functions \(g\) and \(h\) are defined by \(g\left(r,r_{Q}\right)\equiv\frac{1}{r^{3}\psi\left(r\right)^{6}}-\frac{1}{r_{Q}^{3}\psi\left(r_{Q}\right)^{6}}\) and \(h\left(R,r_{Q}\right)\equiv 1-\left(\frac{r_{Q}\left(R+r_{S}\right)^{2}}{R\left(r_{Q}+r_{S}\right)^{2}}\right)^{3}\).
Note that in the special case \(\omega_{s}=\Omega_{Q}\) the expression (34) yields \(\Omega\left(r\right)=\Omega_{Q}\) at all radii, and the line element (1) then reduces to that of Minkowski and Schwarzschild spacetime in the interior and exterior of the shell, respectively, as described in a coordinate system rotating relative to the collection of co-rotating ZAMOs. Thus, since the relative angular velocities of the ZAMOs completely vanish in this case, we evidently recover a spacetime with the "apparent" property of absolute rotation everywhere analogous to that of global Minkowski spacetime. We emphasize that the transition occurring here is completely independent of any reference to non-rotating ZAMOs at infinity or a non-rotating asymptotic Lorentz frame.
To gain some further insight, consider now for simplicity the case \(\omega_{s}=\Omega_{Q}\) in the interior region \(r<R\) of the shell. Then again \(\Omega\left(r\right)=\Omega_{Q}\), and the line element in (1) simplifies to that of
\[ds^{2}=\left(1-{r^{\prime}}^{2}\sin^{2}\vartheta\,{\Omega_{Q}^{\prime}}^{2}\right){dt^{\prime}}^{2}-{dr^{\prime}}^{2}-{r^{\prime}}^{2}d\vartheta^{2}-{r^{\prime}}^{2}\sin^{2}\vartheta\,d\phi^{2}+2{r^{\prime}}^{2}\sin^{2}\vartheta\,\Omega_{Q}^{\prime}\,dt^{\prime}d\phi\,, \tag{37}\]
where we have introduced the conveniently rescaled coordinates \(r^{\prime}=\psi_{0}^{2}\,r\), \(t^{\prime}=V_{0}\,t\), and \(\Omega_{Q}^{\prime}=d\phi/dt^{\prime}=\Omega_{Q}/V_{0}\). We recognize the line element above as that of Minkowski spacetime described in a spherical coordinate system rotating with constant angular velocity \(-\Omega_{Q}^{\prime}\) relative to a standard inertial reference frame in which the ZAMOs do not rotate. Unless \(\Omega_{Q}^{\prime}=0\), it is clear that inertial effects will appear in the interior of the shell in this rotating coordinate system. More generally, when \(\omega_{s}\neq\Omega_{Q}\), these inertial effects will increase if \(\omega_{s}>\Omega_{Q}\) due to the second term in Equation (34) for \(r<R\) which accounts for additional rotational effects caused by increased rotation of the shell relative to ZAMOs located at the reference radius \(r_{Q}\), implying also an increased rotation rate of the shell relative to the coordinate system. Alternatively, keeping the parameters \(\omega_{s}\) and \(\Omega_{Q}\) fixed, this second term also implies that these inertial effects depend upon the choice of the reference point \(r_{Q}\). For instance, as \(r_{Q}\to\infty\) the function \(h\left(R,r_{Q}\right)\to 1\), yielding a rotation rate greater than \(\Omega_{Q}\). On the other hand, for \(r_{Q}\to R\), \(h\left(R,r_{Q}\right)\to 0\) such that the second term vanishes and hence the rotation rate approaches \(\Omega_{Q}\). This dependence on \(r_{Q}\) may at first seem suspect, but is effectively caused by changes in the relative rotation between ZAMOs in the exterior region of the shell which impacts the rotation rate in the interior.
Figure 1 illustrates two examples of the angular velocity \(\Omega\left(r\right)\) in (34) for fixed parameter values \(\Omega_{Q}=-0.3\,\omega_{s}\) and \(r_{Q}=5\,r_{S}\), and for two different choices, \(R=3\,r_{S}\) and \(R=r_{S}\), for the shell radius. As can be seen, when the shell radius is larger than the Schwarzschild radius, the inertial frames are only partially dragged around with the shell's rotation. As the shell radius approaches its Schwarzschild radius, however, the inertial frames in the interior of the shell rotate with the same angular velocity \(\omega_{s}\) as the shell, independently of the angular velocity of the inertial
Figure 1: Plots of the angular velocity \(\Omega\left(r\right)\) of ZAMOs as given by the expression in Equation (34) with \(\Omega_{Q}=-0.3\,\omega_{s}\) and \(r_{Q}=5\,r_{S}\). Note that \(\Omega\left(r\right)\) is normalized by \(\omega_{s}\). The solid curve shows the result for a shell radius \(R=3\,r_{S}\). The black dashed curve shows the result for a shell radius equal to its Schwarzschild radius, \(R=r_{S}\). This corresponds to the situation of “perfect inertial dragging” where the inertial frames in the interior of the shell rotate with the same angular velocity as the shell, independently of the angular velocity of the inertial frames located “at infinity”. The red dashed curve marks the chosen value for \(\Omega_{Q}\) (normalized by \(\omega_{s}\)).
frames located "at asymptotic infinity". This is the phenomenon of "perfect inertial dragging" first discovered by Brill and Cohen [18]. Yet, in Brill and Cohen's original calculation of \(\Omega\left(r\right)\) the inertial frames located "at infinity" were non-rotating. In contrast, with our choice above for the reference value \(\Omega_{Q}\), the inertial frames located "at infinity" in our case rotate with a negative angular velocity; in other words, they are not at rest.
To further compare our result to the original result of Brill and Cohen, it is instructive to consider the case for which \(r_{Q}\rightarrow\infty\), as shown in Figure 2. Then the ZAMOs located at asymptotic infinity rotate with the angular velocity \(\Omega_{Q}\). The original result of Brill and Cohen is now recovered by making the convenient, but very special choice \(\Omega_{Q}=0\), for which Equation (34) simplifies to
\[\Omega\left(r\right)=\left\{\begin{array}{l}\frac{\left(\frac{r\left(R+r_{S}\right)^{2}}{R\left(r+r_{S}\right)^{2}}\right)^{3}\omega_{s}}{1+\frac{3\left(R-r_{S}\right)^{2}}{4r_{S}\left(2R-r_{S}\right)}}\ \ \ \ \ \text{for }r>R\\ \\ \frac{\omega_{s}}{1+\frac{3\left(R-r_{S}\right)^{2}}{4r_{S}\left(2R-r_{S}\right)}}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{for }r\leq R\end{array}\right. \tag{38}\]
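As a quick numerical illustration of Equation (38) (our own sketch, in units where \(r_{S}=1\) and \(\omega_{s}=1\); the function name is ours), the interior value of \(\Omega(r)/\omega_{s}\) approaches 1 as \(R\to r_{S}\):

```python
import numpy as np

def omega_over_omega_s(r, R, r_S=1.0):
    """Omega(r)/omega_s from Eq. (38), i.e. the case Omega_Q = 0 and r_Q -> infinity."""
    r = np.asarray(r, dtype=float)
    denom = 1.0 + 3.0 * (R - r_S) ** 2 / (4.0 * r_S * (2.0 * R - r_S))
    exterior = (r * (R + r_S) ** 2 / (R * (r + r_S) ** 2)) ** 3 / denom
    return np.where(r > R, exterior, 1.0 / denom)

r = np.linspace(1.0, 20.0, 5)
print(omega_over_omega_s(r, R=3.0))  # partial dragging: interior value below 1
print(omega_over_omega_s(r, R=1.0))  # R = r_S: interior value equals 1 (perfect dragging)
```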
Note that Equation (34) can be obtained from Equation (38) by transforming to a coordinate system \(\left(\widetilde{t},\widetilde{r},\widetilde{\vartheta},\widetilde{\varphi}\right)\) rotating relative to the first one with an angular velocity \(\omega=\Omega\left(r_{Q}\right)-\widetilde{\Omega}\left(r_{Q}\right)\):
\[\widetilde{t}=t,\quad\widetilde{r}=r,\quad\widetilde{\vartheta}=\vartheta, \quad\widetilde{\varphi}=\varphi-\left(\Omega\left(r_{Q}\right)-\widetilde{ \Omega}_{Q}\right)t\,, \tag{39}\]
where \(\widetilde{\Omega}_{Q}\equiv\widetilde{\Omega}\left(r_{Q}\right)\) is the arbitrarily chosen reference value for the angular velocity of the ZAMOs located at the reference radius \(r_{Q}\) in the rotating coordinate system. The angular velocity of the rotating shell in the rotating coordinate system is then \(\widetilde{\omega}_{s}=\omega_{s}+\widetilde{\Omega}_{Q}-\Omega\left(r_{Q}\right)\). Noting that \(\Omega\left(r_{Q}\right)\) is also a function of \(\omega_{s}\), we may invert this relation to obtain \(\omega_{s}\) as a function of \(\widetilde{\omega}_{s}\). A straightforward substitution for \(\omega_{s}\) in terms of \(\widetilde{\omega}_{s}\) in Equation (38) then yields Equation (34).
Finally, as a rather vivid illustration of the arbitrariness of the numerical value of \(\Omega\left(r\right)\), we may now consider the case, shown in Figure 3, for which there is perfect dragging (\(R=r_{S}\)) and the angular velocity of the shell vanishes,
\(\omega_{s}=0\), and we keep \(r_{Q}\rightarrow\infty\) as above (but let \(\Omega_{Q}\) be arbitrary). Now both the shell and the inertial frames in the interior of the shell are non-rotating, but the ZAMOs located at asymptotic infinity rotate with the angular velocity \(\Omega_{Q}\). In this picture it appears as if the massive shell and the space in its interior are at rest, while it is the rest of the universe exterior to the shell which rotates around it.
## III Rotation in the Kerr spacetime
In the previous section we have discussed the rotation of ZAMOs in Brill and Cohen's approximate spacetime model of a slowly rotating thin shell. In this section we briefly consider the angular velocity of ZAMOs in the Kerr spacetime, representing an exact axistationary and asymptotically flat solution to Einstein's field equations.
In Boyer-Lindquist coordinates \(\left(t,r,\vartheta,\varphi\right)\) the angular velocity of ZAMOs due to inertial dragging in the equatorial plane of the Kerr spacetime is given by [1]
\[\Omega\left(r\right)=\frac{2Ma}{r^{3}+ra^{2}+2Ma^{2}}\;, \tag{40}\]
where \(a=J/M\), \(J\) is the angular momentum and \(M\) is the mass characterizing the Kerr geometry. In the asymptotic limit \(r\rightarrow\infty\), \(\Omega\left(r\right)\) vanishes. However, this does _not_ imply that the ZAMOs located at asymptotic infinity define an absolute state of non-rotation. Indeed, using the same transformations as in Equation (39), we may once again transform to a coordinate system \(\left(\widetilde{t},\widetilde{r},\widetilde{\vartheta},\widetilde{\varphi}\right)\) rotating relative to the Boyer-Lindquist coordinates to obtain the angular velocity of the ZAMOs in the rotating system as
\[\widetilde{\Omega}\left(r\right)=\widetilde{\Omega}_{Q}+\Omega\left(r\right)- \Omega\left(r_{Q}\right)\,. \tag{41}\]
The ZAMOs are now seen to rotate at asymptotic infinity with, in general, a non-zero angular velocity \(\widetilde{\Omega}_{Q}-\Omega\left(r_{Q}\right)\) even though their angular momentum is zero. If in addition we let the arbitrary reference radius \(r_{Q}\rightarrow\infty\), then their rotation rate at infinity equals the arbitrary reference value \(\widetilde{\Omega}_{Q}\).
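To make the point concrete, the short sketch below (our own illustration, in geometrized units with an arbitrarily assumed spin parameter and reference value) evaluates Equation (40) together with the transformed angular velocity of Equation (41), and checks that only differences of angular velocities are independent of the chosen \(\widetilde{\Omega}_{Q}\):

```python
import numpy as np

def omega_kerr(r, M=1.0, a=0.7):
    """Equatorial ZAMO angular velocity in Boyer-Lindquist coordinates, Eq. (40)."""
    return 2.0 * M * a / (r**3 + r * a**2 + 2.0 * M * a**2)

r = np.array([2.0, 5.0, 20.0, 1.0e6])
r_Q = 1.0e6                 # reference radius pushed toward infinity
omega_Q_tilde = -0.05       # arbitrarily chosen reference value (our choice)

omega_bl = omega_kerr(r)                                       # vanishes as r -> infinity
omega_rot = omega_Q_tilde + omega_kerr(r) - omega_kerr(r_Q)    # Eq. (41): tends to omega_Q_tilde

# The numerical values differ, but differences between radii agree in both descriptions:
print(np.allclose(omega_rot - omega_rot[0], omega_bl - omega_bl[0]))   # True
```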
This observation may at first appear completely trivial since it follows directly from a simple transformation to a rotating coordinate system. However, as was discussed below Equations (34) and (38), this particular freedom of choice of coordinate system was inherently linked to the observation that Einstein's equations determine only
Figure 3: Plots of the angular velocity \(\Omega\left(r\right)\) of ZAMOs as given by the expression in Equation (34) with vanishing angular velocity for the shell, \(\omega_{s}=0\), and \(r_{Q}\rightarrow\infty\) as in Figure 2. Note that here \(\Omega\left(r\right)\) is normalized by \(\Omega_{Q}\).
differences in angular velocities of ZAMOs, implying that only relative angular velocities are meaningful concepts. The angular velocity (40) obtained in Boyer-Lindquist coordinates appears to be a consequence of imposing the asymptotic boundary condition \(\lim\limits_{r\rightarrow\infty}\Omega\left(r\right)=0\) at the outset of the derivation [35], potentially leading to the misconception that the inertial frames at asymptotic infinity single out a global standard of non-rotation. The coordinate independent boundary condition of asymptotic flatness is independent of the asymptotic boundary condition imposed on the numerical value of the ZAMO angular velocity. This indicates that rotation of ZAMOs located at different radii is best interpreted as a relative concept in the Kerr spacetime too.
## IV Summary and conclusion
The effect of inertial frame dragging on the rotation rate of ZAMOs has been analyzed within the framework of the simple thin shell model seminally introduced by Brill and Cohen. By relaxing the quite natural assumption of zero rotation infinitely far away from the massive shell early on in the derivation, the obtained expression for the rotation rate, Equation (34), makes it clear that Einstein's equations only determine relative angular velocities of ZAMOs. The particular value of the rotation rate of a ZAMO is physically irrelevant unless one also specifies an arbitrarily chosen zero point in space relative to which this rotation rate is measured. Notably this applies as well to the rotation rate of ZAMOs located at asymptotic infinity.
Within the simple thin shell model it was further clarified that the same expression for the rotation rate can be obtained simply by a transformation from the coordinate system in which the ZAMOs at asymptotic infinity are non-rotating to a coordinate system rotating relative to the first one. Utilizing this connection we then argued that "global" rotation of ZAMOs in Kerr spacetime should be interpreted as a relative concept in the same sense as for the thin shell model.
Thus, if there is still some hope that general relativity can be considered a complete (classical) theory of gravitation, capable of describing isolated gravitating systems without requiring particular topologies or external causes such as "fixed stars" and absolutely non-rotating Lorentz frames at infinity, then it seems that relative rotation in the sense presented here not only is a valid concept, but perhaps even a necessary one in the interpretation of some solutions to Einstein's equations; more specifically, in the axistationary, asymptotically flat spacetimes of general relativity.
The question of whether rotational motion can be interpreted as relative according to general relativity has been discussed in a relatively recent paper by O. Gron [25]. The arguments presented above provide some support to this possibility. It seems feasible that both the absolute and relational viewpoints can be treated as complementary aspects within the theory of general relativity [36; 37].
**Acknowledgments:** The author would like to thank Oyvind Gron for valuable suggestions and comments.
**Conflicts of Interest:** The author declares no conflicts of interest.
|
2301.12873 | Approximating DTW with a convolutional neural network on EEG data | Dynamic Time Wrapping (DTW) is a widely used algorithm for measuring
similarities between two time series. It is especially valuable in a wide
variety of applications, such as clustering, anomaly detection, classification,
or video segmentation, where the time-series have different timescales, are
irregularly sampled, or are shifted. However, it is not prone to be considered
as a loss function in an end-to-end learning framework because of its
non-differentiability and its quadratic temporal complexity. While
differentiable variants of DTW have been introduced by the community, they
still present some drawbacks: computing the distance is still expensive and
this similarity tends to blur some differences in the time-series. In this
paper, we propose a fast and differentiable approximation of DTW by comparing
two architectures: the first one for learning an embedding in which the
Euclidean distance mimics the DTW, and the second one for directly predicting
the DTW output using regression. We build the former by training a siamese
neural network to regress the DTW value between two time-series. Depending on
the nature of the activation function, this approximation naturally supports
differentiation, and it is efficient to compute. We show, in a time-series
retrieval context on EEG datasets, that our methods achieve at least the same
level of accuracy as other DTW main approximations with higher computational
efficiency. We also show that it can be used to learn in an end-to-end setting
on long time series by proposing generative models of EEGs. | Hugo Lerogeron, Romain Picot-Clemente, Alain Rakotomamonjy, Laurent Heutte | 2023-01-30T13:27:47Z | http://arxiv.org/abs/2301.12873v1 | # Approximating DTW with a convolutional neural network on EEG data
###### Abstract
Dynamic Time Warping (DTW) is a widely used algorithm for measuring similarities between two time series. It is especially valuable in a wide variety of applications, such as clustering, anomaly detection, classification, or video segmentation, where the time-series have different timescales, are irregularly sampled, or are shifted. However, it does not lend itself to use as a loss function in an end-to-end learning framework because of its non-differentiability and its quadratic temporal complexity. While differentiable variants of DTW have been introduced by the community, they still present some drawbacks: computing the distance is still expensive and this similarity tends to blur some differences in the time-series. In this paper, we propose a fast and differentiable approximation of DTW by comparing two architectures: the first one for learning an embedding in which the Euclidean distance mimics the DTW, and the second one for directly predicting the DTW output using regression. We build the former by training a siamese neural network to regress the DTW value between two time-series. Depending on the nature of the activation function, this approximation naturally supports differentiation, and it is efficient to compute. We show, in a time-series retrieval context on EEG datasets, that our methods achieve at least the same level of accuracy as the other main DTW approximations with higher computational efficiency. We also show that it can be used to learn in an end-to-end setting on long time series by proposing generative models of EEGs.
## 1 Introduction
Proposed by Sakoe et al. [1], the Dynamic Time Warping (DTW) algorithm is an alignment-based similarity measure for temporal sequences. Initially used for speech applications, its properties, notably its invariance to time shifts and its ability to compare series of different lengths, make DTW useful in various time-series related applications. For instance, Seto et al. [2] make use of DTW to create meaningful features for human activity recognition, Lapere et al. [3] employ DTW as a regularization tool in disturbance storm time forecasting and Zifan et al. [4] consider DTW on piecewise approximations of time series to segment ECG data. Nevertheless, due to its non-differentiability (see Tavenard [5]), DTW cannot be used as a loss function for end-to-end training of deep neural networks. To circumvent those limitations, differentiable approximations of the DTW have been proposed, such as SoftDTW by Cuturi et al. [6], which notably replaces the min operator by a softmin.
While this approximation enables kernel machine and end-to-end deep neural network training, it keeps the quadratic time complexity of DTW, which creates a running-time problem for applications in which longer time series are considered, such as EEG signals. For instance, the widely used SleepEDF dataset introduced by Kemp et al. [7] uses
splits of size 3000. Therefore, in order to be able to use DTW as a loss in end-to-end training on EEG signals, we propose a neural model that approximates the DTW similarity between two time-series.
To do so, we propose and compare two architectures: an encoder-decoder scheme in which the backbone is a siamese convolutional neural network, and a direct regression model. We show that this enables us to obtain an accurate, scalable and differentiable approximation of DTW.
In this paper, our contributions are the following ones:
* we compare a direct regression architecture and a siamese encoder-decoder inspired by Courty et al. [8] to approximate DTW;
* we show how such an approximation is fast, more faithful to the objective function than other approximations (namely FastDTW [9] and SoftDTW [6]) and can be used in end-to-end training;
* we show how such an approximation can be transferred to other similar EEG signals using another public dataset.
After considering related works in Section 2, we detail our approach used to approximate DTW on time series in Section 3 and our experimental setup in Section 4. We then discuss in Section 5 the differentiability, time efficiency and performance on classification tasks of our proposed method. We conclude in Section 6 and draw future works from our results.
## 2 Related Works
### Approximation of the DTW
While the advantages of DTW are well-known, its quadratic complexity in both time and space has limited its practical use to small datasets of short time series. To counteract those limitations, some efforts have been made to introduce approximated versions of DTW.
Salvador et al. [9] introduced FastDTW, an approximation with linear complexity in both time and space. However, because FastDTW is not differentiable, it cannot be used directly as a loss in gradient-based algorithms.
To allow for differentiable end-to-end learning with DTW, Cuturi et al. [6] introduced SoftDTW. The algorithm computes the soft minimum of all costs spanned by all possible alignments between two time series, which leads to a differentiable approximation of the DTW. While the forward pass has a linear time complexity, the backward pass needs to consider all the alignments, resulting in a quadratic complexity in time and a linear complexity in space. The addition of the smoothing factor \(\gamma\) also calls for more hyperparameter tuning.
With DTWNet [10], Cai et al. introduced a stochastic backpropagation of DTW. They leverage the fact that once the warping path of the DTW is obtained, it is fixed for the iteration, cannot have more than \((n+m)\) elements (if \(n\) and \(m\) are the respective lengths of the input signals) and is in itself differentiable. While the gradient can be computed in \(O(n+m)\), the warping path needs to be obtained, which still requires \(O(n\cdot m)\) operations.
Therefore, while various approaches have been proposed to approximate DTW, to the best of our knowledge none of them enables both differentiability and at most linear complexity in time.
### Approximation of distances via neural networks
As far as we know, no one has attempted to mimic the DTW with a neural network. However, Courty et al. [8] similarly approximate the Wasserstein distance using a siamese network to make the squared Euclidean distance of the embedded vectors mimic the Wasserstein distance between the base vectors. The two input vectors are fed through the same (hence siamese) encoder \(\phi\). Then a decoder \(\psi\), in that case two fully connected layers, tries to reconstruct the original vector. The encoder learns through the MSE loss, while a KL divergence loss is used for the reconstruction error of the decoder.
The authors choose to use the KL divergence because the Wasserstein distance operates on probability distributions. This allows for interpretation of the embedding space, and also fulfills two conditions of a distance (identity and symmetry) since the model is deterministic at inference.
## 3 Architecture for learning to approximate DTW
In this section, we introduce the two architectures we use to approximate DTW, which will be compared in Sections 4 and 5. Because of all the advantages of the method mentioned in Section 2.2, we first choose to use a siamese architecture similar to that of Courty et al. [8]. Our adapted global architecture is shown in Figure 1.
Contrary to the Wasserstein distance used in Courty et al. [8], DTW does not work with probability distributions but directly with the time series. Therefore, we use the MSE loss instead of the KL divergence to evaluate the reconstruction error made by the decoder. The goal of the decoder is to force the encoder to keep as much information as possible when projecting the signals. That way, the encoder cannot collapse into embedding all the signals to the same representation. It also helps to regularize the training. Overall, the encoder \(\phi\) takes as input two signals \(x\in\mathbb{R}^{L}\) and \(x^{\prime}\in\mathbb{R}^{L^{\prime}}\) and projects them to two signals \(z\in\mathbb{R}^{H}\) and \(z^{\prime}\in\mathbb{R}^{H}\), where \(H\) is the hidden dimension. Feeding pairs of signals \(\{x_{i},x^{\prime}_{j}\}_{i,j\in 1,\dots,n}\) to the model, the global objective function is then, with \(z=\phi(x),z^{\prime}=\phi(x^{\prime})\) denoting the encoded signals, \(\psi\) the decoder, and \(y_{i,j}\) the target DTW value:
\[\min_{\phi,\psi}\underbrace{\sum_{i,j}\left\|\left\|z_{i}-z^{\prime}_{j} \right\|^{2}-y_{i,j}\right\|^{2}}_{\text{approximation loss}}+\lambda\underbrace{ \left(\sum_{i,j}\left\|\psi(z_{i})-x_{i}\right\|^{2}+\sum_{i,j}\left\|\psi(z^ {\prime}_{j})-x^{\prime}_{j}\right\|^{2}\right)}_{\text{reconstruction loss}} \tag{1}\]
\(\lambda\) is a hyperparameter that balances the two losses.
### Training Procedure
We describe our training loop in Algorithm 1. We directly feed pairs of signals \(\{x_{i},x^{\prime}_{j}\}_{i,j\in 1,\dots,n}\) of length \(L\) and dimension \(d=1\) to the encoder, which processes both signals independently. We use their corresponding DTW distances \(\{y_{i,j}=DTW(x_{i},x^{\prime}_{j})\}_{i,j\in 1,\dots,n}\) as labels. Once we have the encoded signals, we obtain the predicted DTW value by taking the Euclidean distance between them and compare it to the reference value via MSE to get the encoder loss. We then use the decoder to get the decoded signals from the encoded ones, then compare them to the input signals to get the decoder loss. We then sum the losses with a balancing parameter \(\lambda\) and update the parameters of the encoder and decoder at the same time.
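A minimal PyTorch-style sketch of this step is given below (the `encoder`, `decoder` and `optimizer` objects are placeholders and `dtw_label` holds the normalized reference values; this is an illustration of Eq. (1), not the exact training code):

```python
import torch.nn.functional as F

def train_step(encoder, decoder, optimizer, x, x_prime, dtw_label, lam=1.0):
    """One optimization step of the objective in Eq. (1)."""
    optimizer.zero_grad()
    z, z_prime = encoder(x), encoder(x_prime)        # same (siamese) encoder for both signals
    pred = ((z - z_prime) ** 2).sum(dim=1)           # squared Euclidean distance in the embedding
    approx_loss = F.mse_loss(pred, dtw_label)        # match the normalized DTW target
    recon_loss = F.mse_loss(decoder(z), x) + F.mse_loss(decoder(z_prime), x_prime)
    loss = approx_loss + lam * recon_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```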
### Encoder architecture
The global architecture of our approach is independent of the type of time series we train it on. On the other hand, if we want a reliable approximation, the encoder needs to be able to project the signals meaningfully, and therefore must be adapted to each type of data. In our case, we choose to focus on EEG data. As a result, we use SorsNet introduced
Figure 1: Global architecture of the model. Two signals are drawn from the dataset and encoded by the same encoder. The goal is to get the L2 distance between the encoded vectors as close as possible to DTW between the drawn signals. The encoded signals then pass through a decoder, which tries to reconstruct the original signal.
by Sors et al. [11] as the encoder. SorsNet is a 1D convolutional neural network consisting of a series of blocks. Each block contains a convolutional layer, a batch normalization layer and a ReLU activation function. We choose SorsNet because it has been shown to work well on sleep staging on EEG signals ([11]), thus we assume that the architecture allows for a good representation of EEG data. The network is also fully convolutional, with kernel sizes, strides and padding carefully chosen to always get a projected vector \(z\in\mathbb{R}^{1,H}\) as long as the length of the time series is less than or equal to 3000, the usual size for EEG data. This permits us to use the same model with the same weights for time series of different lengths, thus allowing us to mimic the ability of DTW to compare time series of different lengths. Finally, the network being fully convolutional also enables low inference time.
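For illustration, a much-simplified stand-in for such a fully-convolutional encoder could look as follows (this is not the exact SorsNet configuration of [11]; the kernel sizes, strides and the adaptive pooling used to make the output length-independent are our own simplifications):

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel_size, stride):
    # One block in the style described above: 1D convolution + batch norm + ReLU
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size, stride=stride, padding=kernel_size // 2),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(),
    )

class ConvEncoder(nn.Module):
    def __init__(self, in_channels=1, hidden_dim=500):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_channels, 32, 7, 2), conv_block(32, 64, 7, 2),
            conv_block(64, 128, 5, 2), conv_block(128, 128, 5, 2),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)   # simplification: output length-independent
        self.fc = nn.Linear(128, hidden_dim)  # replaces the classification layer, H = 500

    def forward(self, x):                     # x: (batch, channels, length)
        h = self.pool(self.features(x)).squeeze(-1)
        return self.fc(h)
```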
### Decoder
We want to force the encoder to learn a meaningful embedding that keeps as much information about the original signals as possible, in order to improve the accuracy of the approximation of DTW.
Inspired by Thill et al. [12], we first use an upsampling layer so that \(z\) is of the same dimension as the input signal \(x\), then use a Temporal Convolutional Network (TCN, Bai et al. [13]) to decode \(z\) and try to retrieve \(x\). We use \(q=[32,16,8,4,2,1]\) as dilation rates and \(k=20\) as kernel size. The choice of a TCN allows our decoder to be independent of the length of the time series \(x\) since all the layers are convolutional.
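A schematic version of this decoder is sketched below (our simplification: a real TCN block also uses residual connections, see [13]; the interpolation-based upsampling keeps the module independent of the signal length):

```python
import torch.nn as nn
import torch.nn.functional as F

class DilatedDecoder(nn.Module):
    """Upsample the embedding to the signal length, then refine it with dilated 1D convolutions."""
    def __init__(self, kernel_size=20, dilations=(32, 16, 8, 4, 2, 1)):
        super().__init__()
        layers = []
        for q in dilations:
            layers += [nn.Conv1d(1, 1, kernel_size, dilation=q, padding="same"), nn.ReLU()]
        self.stack = nn.Sequential(*layers[:-1])   # no activation after the last convolution

    def forward(self, z, signal_len=1000):         # z: (batch, hidden_dim)
        x = F.interpolate(z.unsqueeze(1), size=signal_len, mode="linear", align_corners=False)
        return self.stack(x)                       # (batch, 1, signal_len)
```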
### Direct Regression
While the architecture of our encoder-decoder allows comparing signals of different lengths, a simpler architecture may work better on signals of fixed lengths. To investigate this, we also introduce a simpler architecture that we call **direct regression**. It takes as input pairs of signals \(\{x_{i},x_{j}^{\prime}\}_{i,j\in 1,\ldots,n}\), concatenates them to get a tensor \(x_{cat}\in\mathbb{R}^{B,L,2}\) with \(B\) the batch size, then feeds \(x_{cat}\) as input to the SorsNet encoder. Afterwards, a dense layer with batch normalization and ReLU activation processes the tensor, before a final dense layer outputs the predicted DTW value. In this case, we do not need a decoder since we directly get the value to predict. Everything else is kept identical to the siamese encoder-decoder architecture.
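Reusing the placeholder ConvEncoder sketched above (with two input channels), the direct-regression variant can be written schematically as follows; the layer sizes are illustrative and the tensors are channel-first as is usual for 1D convolutions:

```python
import torch
import torch.nn as nn

class DirectRegressor(nn.Module):
    """Concatenate the two signals channel-wise and regress the DTW value directly."""
    def __init__(self, hidden_dim=500):
        super().__init__()
        self.encoder = ConvEncoder(in_channels=2, hidden_dim=hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x, x_prime):                 # each: (batch, 1, length)
        x_cat = torch.cat([x, x_prime], dim=1)     # (batch, 2, length)
        return self.head(self.encoder(x_cat)).squeeze(-1)
```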
## 4 Experimental Setup
### Datasets
While ideally our model should be able to approximate the DTW no matter the origin of the time series, we decide to first focus only on sleep data. We choose to use the SleepEDF-78 dataset [7], which contains recordings from various sensors during sleep. The participants were involved in two studies: Sleep Cassette, which focuses on age effects on sleep, and Sleep Telemetry, which focuses on the impact of temazepam on sleep. For these two datasets, each PSG file contains two EEG channels (Fpz-Cz, Pz-Oz) sampled at 100 Hz, one EOG channel and one chin EMG channel. We decided, following the literature (Phan et al. [14], Tsinalis et al. [15]), to use only the Sleep Cassette files. To train our model, we are however able to use all the different channels, while previous studies focus on the Fpz-Cz channel only. We randomly split the patients, keeping 70 patients for training and validation and 8 patients for testing. For each patient, we split the signals into cuts of size \(L\) along a randomly chosen signal. The dataset is then made of \(N\) randomly selected cuts.
**Data preprocessing**
Since we use multiple channels of the sleep files, we have to process various types of data. This creates scaling problems, which is to say that some series will have values in much bigger ranges than others. Moreover, it will lead to big ranges for the reference DTW values and thus for the training loss. To face this difficulty, we preprocess the dataset as follows. We first clip all the values in the dataset \(Ds\) i.e., for every value \(a_{k}\) in each signal \(x_{i}\) in \(Ds\), we compute \(p_{1}\) and \(p_{99}\), respectively 1st and 99th percentile.
Then \(\forall a_{k}\in Ds\) :
\[a_{k}=\begin{cases}p_{1}&\text{if }a_{k}<p_{1}\\ p_{99}&\text{if }a_{k}>p_{99}\\ a_{k}&\text{otherwise}\end{cases} \tag{2}\]
This allows us to limit the impact of outliers without losing much information. We then apply min-max normalization. It projects all the data to \([0,1]\), which bounds the reference DTW to \([0,L]\) where \(L\) is the length of the time-series \(x\), since DTW is ultimately the sum of Euclidean distances along the alignment path. Knowing the lower and upper bounds of the DTW allows us to normalize its values to \([0,1]\), greatly helping to balance the training losses.

**Creation of the DTW matrix**
We then fill the ground truth DTW matrix \(Y_{DTW}\). To do so, we randomly choose \(N_{pairs}\) pairs of signals and fill the corresponding parts of the ground truth matrix with DTW results between the pairs of selected signals, following Courty et al. [8].
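A compact sketch of this data preparation is given below (the `dtw` argument stands for whichever reference implementation is used; signal selection and pair sampling are simplified):

```python
import numpy as np

def preprocess(signals):
    """Clip to the dataset-wide 1st/99th percentiles (Eq. (2)), then min-max normalize to [0, 1]."""
    flat = np.concatenate(signals)
    p1, p99 = np.percentile(flat, [1, 99])
    clipped = [np.clip(x, p1, p99) for x in signals]
    lo = min(x.min() for x in clipped)
    hi = max(x.max() for x in clipped)
    return [(x - lo) / (hi - lo) for x in clipped]

def build_dtw_labels(signals, n_pairs, dtw, seed=0):
    """Sample random pairs of signals and store their normalized DTW distances as labels."""
    rng = np.random.default_rng(seed)
    pairs, labels = [], []
    for _ in range(n_pairs):
        i, j = rng.integers(len(signals), size=2)
        pairs.append((i, j))
        labels.append(dtw(signals[i], signals[j]) / len(signals[i]))  # DTW bounded by L here
    return pairs, np.asarray(labels)
```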
### Training Parameters
To speed up the training, we restrain the signals to length \(L=1000\) by randomly slicing the signal. We select \(N=10,000\) signals as explained in section 4.1 for the train set and randomly select \(N_{pairs}=10^{6}\) over the \(10^{8}\) possible pairs in order to train the model on a decent number of signals without overfitting. It also limits the training time. We do the same for the validation set and the test set with 1000 signals and 100,000 pairs.
We use SorsNet as encoder, setting the dropout to 0 and replacing the classification layer by a dense layer with \(H=500\). We define the decoder as described in section 3.3 and use Adam optimizer with a \(10^{-5}\) learning rate to optimize both the encoder and the decoder parameters at the same time. We set the batch size to 128 and \(\lambda\) to 1. We train for 50 epochs with an early stopping if the validation loss does not improve for 8 straight epochs.
Note that because of time constraints (training lasts for approximately 20 hours on one TITAN V), no extensive hyperparameter search was done.
## 5 Study of performance on downstream tasks
### Illustration of Efficiency and Approximation Properties on a Nearest Neighbour Retrieval Task
In itself, the output value of DTW is not our main goal. What matters is the ability to compare time series and rank similarity between time series. Therefore, instead of comparing the raw values of our approximation with DTW, once we have trained our model we study its performance on downstream tasks.
We want to study how our model can compare series and how close it is to DTW. To do so, we first select \(N_{t}\) signals in the test set. We then fit a nearest neighbor algorithm 1 on the test set, using our model (DeepDTW) as custom metric. We do the same operation using DTW (we use the implementation from pyts by Faouzi et al. [16]) as custom metric. We select the top1 nearest neighbor \(\tilde{x}_{top1}\) for all signals \(x\in N_{t}\) with \(x\neq\tilde{x}_{top1}\) and evaluate the number of times \(\tilde{x}_{top1}\) is in the top 5 ranking of nearest neighbors according to DTW. Since we use random subsets of the test set, we run the same experiment 8 times for increasing numbers of signals \(N_{t}\). We also add the two main approximations of DTW, FastDTW and SoftDTW, to the comparison. Since SoftDTW running time is dependent on the hyperparameter \(\gamma\) (the closer \(\gamma\) is to 0, the more faithful SoftDTW is to the standard DTW, but also the slower it is, as explained by Cuturi et al. [6]), we choose a middle ground with \(\gamma=0.1\).
Footnote 1: [https://scikit-learn.org/stable/modules/neighbors.html](https://scikit-learn.org/stable/modules/neighbors.html)
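The retrieval test can be reproduced along the following lines (sketch only: `model_distance` and `dtw_distance` stand for callables that return the learned approximation and the reference DTW for a pair of series):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def top1_in_top5_rate(signals, model_distance, dtw_distance):
    """Fraction of signals whose model-nearest neighbour lies in the 5 DTW-nearest neighbours."""
    X = np.arange(len(signals), dtype=float).reshape(-1, 1)   # indices; the metrics look signals up

    def as_metric(dist):
        return lambda a, b: dist(signals[int(a[0])], signals[int(b[0])])

    nn_model = NearestNeighbors(n_neighbors=2, metric=as_metric(model_distance)).fit(X)
    nn_dtw = NearestNeighbors(n_neighbors=6, metric=as_metric(dtw_distance)).fit(X)

    hits = 0
    for i in range(len(signals)):
        cand = nn_model.kneighbors(X[i:i + 1])[1][0]
        top1 = cand[1] if cand[0] == i else cand[0]            # drop the query itself
        dtw_top = [j for j in nn_dtw.kneighbors(X[i:i + 1])[1][0] if j != i][:5]
        hits += int(top1 in dtw_top)
    return hits / len(signals)
```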
One of the major perks of DTW is its ability to compare time series of different lengths, and so a good approximation should mimic this feature. To study this property, we modify our dataset by restricting the size of EEG signals to random lengths following the uniform sampling method from Tan et al. [17], i.e., \(\forall x\in N_{t}\), \(x\in\mathbb{R}^{L}\) with \(L\in[500;1000]\). While at inference no change is required for our siamese architecture to compute signals of varying lengths, we have to pad the signals to the size of the longest in a batch to make the direct regression architecture work. We show the results in table 1. We can see that the direct regression (DeepDTW Direct) model learns to order signals closest to DTW, even outperforming FastDTW when the task is easy or moderately easy (50-200 signals). FastDTW is less impacted by the task getting harder and is the best with 400 and 600 signals in the set. Both
our approximations outperform SoftDTW no matter the number of signals, with the direct approach above by \(\sim\)43 percentage points (pp) when the task is easy and still \(\sim\)21pp above in the hardest setting, while the siamese approach is \(\sim\)20pp above at 50 signals and about equal at 600 signals.
It illustrates how our model can mimic DTW ability to compare series of different lengths well enough.
### Sleep Staging
To complete the study of the faithfulness of our approximation, we evaluate how our model can be used in a time-series classification context. We choose the sleep staging task to do so. It consists of classifying segments of sleep data into different classes.
**Dataset**
We use the test set from the SleepEDF dataset, with the same processing as in section 4.1. This time we use the labels of the states of sleep. The classes in SleepEDF include wake (W), rapid eye movement (REM), four sleep stages of different depth (N1, N2, N3, N4), M (movement time) and '?' (no data). Following Mousavi et al. [18], we merge the stages N3 and N4, and remove the sequences labelled as M and '?'. It results in sequences of signals of \(length=3000\) distributed over 5 different classes.
**KNN classification**
We want to compare our approximation to DTW and its other main approximations for sleep staging. To do that, we create 4 instances of KNN classifiers, each with a different base metric: standard DTW, our model with the direct architecture (DeepDTW Direct), our model with the siamese architecture (DeepDTW Siamese) and FastDTW. We then run 5 iterations of the following experiment: select a number \(N\) of random signals in the set and split them in training and test sets at 50/50 proportion, fit the KNN instance of each metric on the training set and compute the corresponding macro F1 score (MF1) on the test set. We modify the SleepEDF test set to get time series of varying length in the same way as in section 5.1. We show the results on the SleepEDF dataset in table 2.
Overall, both our approximation and FastDTW behave very similarly to standard DTW. Small variations due to variance aside, we can see that both our models and FastDTW lead to very similar classification score to DTW, which shows that they tend to compare series in the same way.
### Computation Time Study
While the main drawback of FastDTW is the fact that it is not differentiable, the main drawback of SoftDTW is its low computation speed. To illustrate this point, we study the time needed to compute DTW between two uni-dimensional time series 1000 times. To fairly compare all the metrics, all the experiments are run only on CPU. We keep \(\gamma=0.1\) for SoftDTW. We show the results in figure 2. We can see how the computation times needed for both versions of our models (in blue and black) are very slowly increasing with the size of the time series, while FastDTW and especially SoftDTW running times are quickly increasing. At \(L=3000\), the standard size for sleep staging, our direct model is 100 times faster than SoftDTW. Our direct model is only approximately 3 times slower than FastDTW, can be run on GPU, and is differentiable, as shown in the following section.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline N signals & SoftDTW(0.1) & DeepDTW Direct & DeepDTW Siamese & FastDTW \\ \hline
50 & 42.25 \(\pm\) 4.79 & **85.75 \(\pm\) 4.74** & 62.0 \(\pm\) 6.48 & 74.25 \(\pm\) 7.45 \\
100 & 28.12 \(\pm\) 3.82 & **75.25 \(\pm\) 1.39** & 45.25 \(\pm\) 6.76 & 65.25 \(\pm\) 3.67 \\
200 & 23.06 \(\pm\) 2.16 & **61.0 \(\pm\) 2.76** & 33.0 \(\pm\) 3.22 & 57.56 \(\pm\) 4.44 \\
400 & 20.22 \(\pm\) 1.33 & 47.31 \(\pm\) 3.12 & 22.84 \(\pm\) 2.43 & **52.47 \(\pm\) 1.83** \\
600 & 18.0 \(\pm\) 1.43 & 39.88 \(\pm\) 2.33 & 18.52 \(\pm\) 0.83 & **49.75 \(\pm\) 1.86** \\ \hline \end{tabular}
\end{table}
Table 1: Percentage of time the closest different signal of length \(L\in[500;1000]\) of a given signal according to our model is among the top 5 closest according to DTW. N signals is the number of signals in the test set.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline N signals & DTW & DeepDTW Direct & DeepDTW Siamese & FastDTW \\ \hline
500 & 0.25 \(\pm\) 0.02 & 0.26 \(\pm\) 0.04 & 0.21 \(\pm\) 0.02 & 0.26 \(\pm\) 0.01 \\
1000 & 0.34 \(\pm\) 0.02 & 0.33 \(\pm\) 0.01 & 0.37 \(\pm\) 0.02 & 0.35 \(\pm\) 0.02 \\
2000 & 0.44 \(\pm\) 0.01 & 0.44 \(\pm\) 0.01 & 0.43 \(\pm\) 0.01 & 0.43 \(\pm\) 0.01 \\ \hline \end{tabular}
\end{table}
Table 2: Macro F1 score of sleep staging on SleepEDF with KNN using various metrics as base for the KNN. We run 5 iterations. For any given iteration, all the methods use the same data.
### Differentiability
The goal of our model is to be accurate, fast, and differentiable. In this section, we illustrate the latter.
Chang et al. [19] introduced a way to learn class-specific prototypes in order to classify time series. For a given dataset \(D\), they learn as many prototypes \(p\) as the number of classes \(k\) in the dataset: the inter-class distance between prototypes should be as large as possible, while at the same time a prototype should represent its class well enough to get good classification results. Once prototypes are learned, time series can be classified by using the nearest neighbor algorithm with the prototypes. They are learned by computing DTW between a given signal \(x_{n}\in D\) and each prototype \(p_{k}\) corresponding to each class \(k\). Since the idea is to learn the prototypes end-to-end, to circumvent the non-differentiability of DTW, the authors choose to differentiate DTW by using the determined form along the warping path, i.e., the sum of Euclidean distances of paired points as done by Cai et al. [10].
We apply the method to the SleepEDF dataset and compare the classification score obtained by computing DTW with our approximation. We show the results in figure 3. We can see that with our model used as distance to compare the signals, we learn prototypes that represent the classes better since the classification accuracy is higher. It is also faster to train, even on CPU (854 minutes for our method, versus 1773 minutes).
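As an illustration of how a differentiable approximation plugs into such a prototype-learning loop, the sketch below learns one prototype per class by backpropagating through a frozen, trained approximator; the nearest-prototype cross-entropy loss is our simplification of the objective of [19], and `signals`/`labels` are assumed to be tensors:

```python
import torch
import torch.nn.functional as F

def learn_prototypes(approx_dtw, signals, labels, n_classes, signal_len, steps=1000, lr=1e-2):
    """Learn one prototype per class by backpropagating through the DTW approximation."""
    for w in approx_dtw.parameters():          # keep the approximator frozen
        w.requires_grad_(False)
    protos = torch.randn(n_classes, 1, signal_len, requires_grad=True)
    opt = torch.optim.Adam([protos], lr=lr)
    for _ in range(steps):
        i = torch.randint(len(signals), (1,)).item()
        x = signals[i].unsqueeze(0)                                    # (1, 1, L)
        dists = torch.stack([approx_dtw(x, p.unsqueeze(0)).squeeze() for p in protos])
        loss = F.cross_entropy(-dists.unsqueeze(0), labels[i].view(1))  # nearest prototype = true class
        opt.zero_grad()
        loss.backward()
        opt.step()
    return protos.detach()
```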
We have shown that our approximation performs well in time series classifications tasks, is fast, and can be used to learn end-to-end. However, a good approximation of DTW should perform well independently of the dataset. This is what we illustrate in the following section.
### Adaptation to others datasets
In this section, we study the generalization capacity of our model to similar EEG data.
**Dataset**
Following other sleep-staging-related contributions (Olesen et al. [20], Phan et al. [21], Eldele et al. [22]) we evaluate the generalization capabilities of our model to another widely used EEG dataset: SHHS, introduced by Quan et al. [23] and later updated by Zhang et al. [24]. SHHS is a multichannel health signal database, aimed at studying the effect of sleep-disordered breathing on cardiovascular diseases. There are two rounds of PSG records in the dataset, SHHS-1 for Visit 1 and SHHS-2 for Visit 2. We only focus on the first set in this section. It contains 5791 subjects. Similar to other databases like SleepEDF annotated with the R&K rule, the N3 and N4 stages were merged into the N3 stage and MOVEMENT (M) and UNKNOWN (?) signals were removed. As we did in section 5.2, for each signal we randomly choose a channel among the pre-selected ones (EEG, EOG(L), EOG(R), ECG and EMG channels for this study).
**Nearest Neighbor Retrieval**
We reproduce the experiments from section 5.1, this time on the SHHS dataset. We choose to only use the direct architecture as it gave the best results on SleepEDF. To study how our model transfers its knowledge to other datasets,
Figure 2: Time needed in seconds to run 1000 computations **on CPU** of the metric, depending on the lengths of uni-dimensional signals.
we first use the best model learned on SleepEDF according to the nearest neighbor test, and directly use it without fine-tuning to do the same test on the SHHS dataset. We use the same preprocessing as in section 4.1.
We also learn our approximation model on the SHHS dataset in the same way as in section 4.2 and do the nearest neighbor experiment on a separated test set. We summarize the results in table 3. The model learned on SleepEDF and tested on SHHS gives almost identical results to the one learned on SHHS, showing that for similar data with consistent preprocessing, our approximation model generalizes very well to new data.
**KNN Classification on SHHS**
We reproduce the experiment from section 5.2 on the SHHS dataset. We apply exactly the same processing on SHHS, generating time series of varying length. We compare the classification performance of 3 KNNs, one based on FastDTW, one on our direct regression model learned on SleepEDF (DeepDTW SleepEDF) and one learned on SHHS (DeepDTW SHHS). We show the results in table 4. Our models are very close to each other, showing that they also generalize well in the classification context.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline N signals & DeepDTW SHHS & DeepDTW SleepEDF & FastDTW \\ \hline
500 & 0.250 \(\pm\) 0.02 & 0.246 \(\pm\) 0.02 & 0.249 \(\pm\) 0.01 \\
1000 & 0.329 \(\pm\) 0.01 & 0.327 \(\pm\) 0.01 & 0.342 \(\pm\) 0.04 \\
2000 & 0.427 \(\pm\) 0.01 & 0.437 \(\pm\) 0.01 & 0.472 \(\pm\) 0.01 \\ \hline \end{tabular}
\end{table}
Table 4: Macro F1 score of sleep staging on SHHS with KNN using various metrics as base for the KNN. DeepDTW SleepEDF stands for the model learned on the SleepEDF set and not fine-tuned, while DeepDTW SHHS indicates the model trained on SHHS. Both use direct architecture.
Figure 3: Accuracy score of prototype-based classification on the validation set during the training of class-specific prototypes. The metric is used to compute the distance between an input signal and prototypes, and so is crucial for the training. FastDTW is used to compute the alignment path following Cai et al. [10] (see section 5.4).
\begin{table}
\begin{tabular}{|c|c|c|} \hline N signals & Model trained on SHHS & Model trained on SleepEDF \\ \hline
50 & \(0.89\pm 0.04\) & \(0.86\pm 0.03\) \\
100 & \(0.75\pm 0.05\) & \(0.74\pm 0.04\) \\
200 & \(0.61\pm 0.03\) & \(0.62\pm 0.02\) \\
400 & \(0.50\pm 0.03\) & \(0.48\pm 0.02\) \\
600 & \(0.44\pm 0.02\) & \(0.43\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 3: Nearest neighbor test score on SHHS. The model trained on SleepEDF is not fine-tuned.
## 6 Conclusion and future works
In this paper, we presented and compared two architectures to approximate DTW. The first one is done by creating an embedding in which the Euclidean distance mimics the DTW. Such an embedding is obtained by training a siamese encoder-decoder model to both regress the DTW value and retrieve the original signals from the embedded ones. The second method concatenates signals to directly predict DTW value, allowing for better retrieval performance and slightly faster training time, and is therefore a better approach. We showed how our approximations can be used in an end-to-end training, are faster and more faithful to DTW than other approximations, and perform well in time series classification tasks. Finally, we also showed that we can extend the results to similar datasets.
However, although multidimensional time series are quite common in DTW use cases, they were not addressed in this paper and are left for future works. Particularly, being able to embed signals in a space where the Euclidean distance mimics DTW no matter the length or number of dimensions of the signals would be the end goal.
Finally, a perfect approximation of DTW should be usable on various types of data without the need for fine-tuning. As has been done in the literature for natural language processing, we leave for future work building a large dataset of various types of time series and creating a generalist model able to approximate DTW on all those types of data.
|
2308.11871 | Homotopy types of truncated projective resolutions | We work over an arbitrary ring R. Given two truncated projective resolutions
of equal length for the same module we consider their underlying chain
complexes. We show they may be stabilized by projective modules to obtain a
pair of complexes of the same homotopy type. | Wajid Mannan | 2023-08-23T02:23:00Z | http://arxiv.org/abs/2308.11871v2 | # Homotopy types of truncated projective resolutions
# Homotopy types of truncated projective resolutions
W.H.Mannan
**Published:**
Homology, Homotopy and Applications Vol. 9 (2007), No. 2, pp. 445-449
MSC: 16E05. Keywords: projective resolution, homotopy type
**Abstract We work over an arbitrary ring \(R\). Given two truncated projective resolutions of equal length for the same module we consider their underlying chain complexes. We show they may be stabilized by projective modules to obtain a pair of complexes of the same homotopy type.**
## 1 Introduction
Truncated projective resolutions are of interest in both algebraic geometry and algebraic topology. If the modules in a resolution of length \(n\) are assumed to be free, then the \(n^{\rm th}\) homology group is the \(n^{\rm th}\) syzygy of the module being resolved. The minimal possible dimensions of the modules in such resolutions were of interest to mathematicians such as Hilbert and Milnor (see [2]).
In algebraic topology, truncated projective resolutions arise as the algebraic complexes associated to \((n-1)\)-connected universal covers of CW-complexes of dimension \(n\). Of particular interest is the case \(n=2\), as classification of the homotopy types of these truncated resolutions is closely related to Wall's D2 problem (see the introduction to [1]).
Given two truncated projective resolutions of the same module (of equal length), their final modules may be stabilized to produce homotopy equivalent algebraic complexes. This is a generalization of Schanuel's lemma, which merely equates the final homology groups. The work of mathematicians such as Milnor, Whitehead and Wall suggests they were familiar with this basic homological result. Indeed, Wall's obstruction is suggestive of the modules required to stabilize the complexes (see [3], §3).
Given one truncated projective resolution of a module this result provides a handle on all other possible truncated projective resolutions of the same length. Our purpose in this paper is to provide a simple proof of the result by explicitly constructing the desired homotopy equivalence between the two stabilized algebraic complexes.
Formally, let \(R\) be a ring with identity and let \(M\) be a module over \(R\). We assume a right action on all modules. Suppose we have exact sequences:
\[P_{n}\stackrel{{\partial_{n}}}{{\longrightarrow}}P_{n-1}\stackrel{{ \partial_{n-1}}}{{\longrightarrow}}\cdots\stackrel{{ \partial_{2}}}{{\longrightarrow}}P_{1}\stackrel{{ \partial_{1}}}{{\longrightarrow}}P_{0}\stackrel{{\epsilon}}{{ \longrightarrow}}M\dashrightarrow 0\]
\[Q_{n}\stackrel{{\partial_{n}^{\prime}}}{{\longrightarrow}}Q_{n-1}\stackrel{{\partial_{n-1}^{\prime}}}{{\longrightarrow}}\cdots\stackrel{{\partial_{2}^{\prime}}}{{\longrightarrow}}Q_{1}\stackrel{{\partial_{1}^{\prime}}}{{\longrightarrow}}Q_{0}\stackrel{{\epsilon^{\prime}}}{{\longrightarrow}}M\dashrightarrow 0\]
with the \(P_{i}\) and \(Q_{i}\) all projective modules. Our main result is:
**Theorem 1.1**.: _The complexes:_
\[P_{n}\oplus S_{n}\stackrel{{\partial_{n}\oplus 0}}{{\longrightarrow}}P_{n-1} \stackrel{{\partial_{n-1}}}{{\longrightarrow}}\cdots \stackrel{{\partial_{2}}}{{\longrightarrow}}P_{1}\stackrel{{ \partial_{1}}}{{\longrightarrow}}P_{0} \tag{1}\]
_and_
\[Q_{n}\oplus T_{n}\stackrel{{\partial_{n}^{\prime}\oplus 0}}{{ \longrightarrow}}Q_{n-1}\stackrel{{\partial_{n-1}^{\prime}}}{{ \longrightarrow}}\cdots\stackrel{{\partial_{2}^{\prime}}}{{ \longrightarrow}}Q_{1}\stackrel{{\partial_{1}^{\prime}}}{{ \longrightarrow}}Q_{0} \tag{2}\]
_are chain homotopy equivalent, where the projective modules \(T_{i}\), \(S_{i}\) are defined inductively by:_
_\(T_{0}\cong P_{0}\), \(S_{0}\cong Q_{0}\), and for \(i=1,\ldots,n\): \(T_{i}\cong S_{i-1}\oplus P_{i}\) and \(S_{i}\cong T_{i-1}\oplus Q_{i}\)._
Given maps \(f:A\to C\), \(g:B\to C\) the notation \(f\oplus g\) will always be used to denote the map \(f\oplus g:A\oplus B\to C\) given by \(f\oplus g:(a,b)\mapsto f(a)+g(b)\).
## 2 Construction of chain homotopy equivalence
For each \(i\in 1,\ldots,n\) we have natural inclusions of summands:
\[\iota_{i}:P_{i}\to T_{i}(\cong S_{i-1}\oplus P_{i})\qquad\qquad\iota_{i}^{ \prime}:Q_{i}\to S_{i}(\cong T_{i-1}\oplus Q_{i})\]
Let \(\iota_{0}:P_{0}\to T_{0}\) and \(\iota_{0}^{\prime}:Q_{0}\to S_{0}\) both be the identity map.
For \(i=1,\ldots,n\), we define \(\delta_{i}:T_{i}(\cong P_{i}\oplus S_{i-1})\to T_{i-1}\oplus S_{i-1}\)
\(\qquad\qquad\) and \(\delta_{i}^{\prime}:S_{i}(\cong Q_{i}\oplus T_{i-1})\to S_{i-1}\oplus T_{i-1}\)
by
\[\delta_{i}=\left(\begin{array}{cc}\iota_{i-1}\partial_{i}&0\\ 0&1\end{array}\right)\qquad\qquad\delta_{i}^{\prime}=\left(\begin{array}{cc }\iota_{i-1}^{\prime}\partial_{i}^{\prime}&0\\ 0&1\end{array}\right)\]
For \(r=0,\ldots,n-1\), let \(\mathcal{C}_{r}\) denote the chain complex:
\[P_{n}\oplus S_{n}\stackrel{{\partial_{n}\oplus 0}}{{\longrightarrow}}\cdots \stackrel{{\partial_{r+2}}}{{\longrightarrow}}P_{r+1} \stackrel{{\iota_{r}\partial_{r+1}}}{{\longrightarrow}}T_{r} \stackrel{{\delta_{r}}}{{\longrightarrow}}T_{r-1}\oplus S_{r-1} \stackrel{{\delta_{r-1}\oplus 0}}{{\longrightarrow}}\cdots \stackrel{{\delta_{1}\oplus 0}}{{\longrightarrow}}T_{0}\oplus S_{0}\]
Also let \(\mathcal{C}_{n}\) denote the chain complex:
\[T_{n}\oplus S_{n}\stackrel{{\delta_{n}\oplus 0}}{{\longrightarrow}}T_{n-1} \oplus S_{n-1}\stackrel{{\delta_{n-1}\oplus 0}}{{ \longrightarrow}}\cdots\cdots\stackrel{{\delta_{2}\oplus 0}}{{ \longrightarrow}}T_{1}\oplus S_{1}\stackrel{{\delta_{1}\oplus 0}}{{ \longrightarrow}}T_{0}\oplus S_{0}\]
Clearly \(\mathcal{C}_{0}\) is the chain complex (1). For \(r=0,\ldots n-1\), the chain complex \(\mathcal{C}_{r+1}\) is obtained from \(\mathcal{C}_{r}\) by replacing:
\[\stackrel{{\partial_{r+2}}}{{\longrightarrow}}P_{r+1}\stackrel{{ \iota_{r}\partial_{r+1}}}{{\longrightarrow}}T_{r}\stackrel{{ \delta_{r}}}{{\longrightarrow}}\]
with
\[\stackrel{{\iota_{r+1}\partial_{r+2}}}{{\longrightarrow}}P_{r+1} \oplus S_{r}\stackrel{{\delta_{r+1}}}{{\longrightarrow}}T_{r} \oplus S_{r}\stackrel{{\delta_{r}\oplus 0}}{{\longrightarrow}}\]
This is a simple homotopy equivalence so \(\mathcal{C}_{r+1}\) is chain homotopy equivalent to \(\mathcal{C}_{r}\).
Similarly, for \(r=0,\ldots,n-1\), let \(\mathcal{D}_{r}\) denote the chain complex:
\[Q_{n}\oplus T_{n}\stackrel{{\partial_{n}^{\prime}\oplus 0}}{{\longrightarrow}}\cdots\stackrel{{\partial_{r+2}^{\prime}}}{{\longrightarrow}}Q_{r+1}\stackrel{{\iota_{r}^{\prime}\partial_{r+1}^{\prime}}}{{\longrightarrow}}S_{r}\stackrel{{\delta_{r}^{\prime}}}{{\longrightarrow}}S_{r-1}\oplus T_{r-1}\stackrel{{\delta_{r-1}^{\prime}\oplus 0}}{{\longrightarrow}}\cdots\stackrel{{\delta_{1}^{\prime}\oplus 0}}{{\longrightarrow}}S_{0}\oplus T_{0}\]
Again let \(\mathcal{D}_{n}\) denote the chain complex:
\[S_{n}\oplus T_{n}\stackrel{{\delta_{n}^{\prime}\oplus 0}}{{ \longrightarrow}}S_{n-1}\oplus T_{n-1}\stackrel{{\delta_{n-1}^{ \prime}\oplus 0}}{{\longrightarrow}}\cdots\stackrel{{\delta_{2}^{ \prime}\oplus 0}}{{\longrightarrow}}S_{1}\oplus T_{1}\stackrel{{ \delta_{1}^{\prime}\oplus 0}}{{\longrightarrow}}S_{0}\oplus T_{0}\]
Clearly \(\mathcal{D}_{0}\) is the chain complex (2). As before, for \(r=0,\ldots n-1\), the chain complex \(\mathcal{D}_{r+1}\) is chain homotopy equivalent to \(\mathcal{D}_{r}\).
We have (1) chain homotopy equivalent to \(\mathcal{C}_{n}\) and (2) chain homotopy equivalent to \(\mathcal{D}_{n}\). We complete the proof of the theorem by showing that \(\mathcal{C}_{n}\) is chain isomorphic to \(\mathcal{D}_{n}\).
**Lemma 2.1**.: _There exist inverse pairs of maps \(h_{i}\), \(k_{i}\) making the following diagram commute:_
\[\begin{array}{l}T_{n}\oplus S_{n}\stackrel{{\delta_{n}\oplus 0 }}{{\longrightarrow}}T_{n-1}\oplus S_{n-1}\stackrel{{\delta_{n-1} \oplus 0}}{{\longrightarrow}}\cdots\cdots\stackrel{{\delta_{2}\oplus 0 }}{{\longrightarrow}}T_{1}\oplus S_{1}\stackrel{{\delta_{1} \oplus 0}}{{\longrightarrow}}T_{0}\oplus S_{0}\stackrel{{\epsilon \oplus 0}}{{\longrightarrow}}M\dashrightarrow 0\\ \downarrow h_{n}\hskip 28.452756pt\downarrow h_{n-1}\hskip 28.452756pt \downarrow h_{1}\hskip 28.452756pt\downarrow h_{0}\hskip 28.452756pt \downarrow 1\\ S_{n}\oplus T_{n}\stackrel{{\delta_{n}^{\prime}\oplus 0}}{{ \longrightarrow}}S_{n-1}\oplus T_{n-1}\stackrel{{\delta_{n-1}^{ \prime}\oplus 0}}{{\longrightarrow}}\cdots\stackrel{{\delta_{2}^{ \prime}\oplus 0}}{{\longrightarrow}}S_{1}\oplus T_{1}\stackrel{{\delta_ {1}^{\prime}\oplus 0}}{{\longrightarrow}}S_{0}\oplus T_{0}\stackrel{{ \epsilon^{\prime}\oplus 0}}{{\longrightarrow}}M\dashrightarrow 0\\ \downarrow k_{n}\hskip 28.452756pt\downarrow k_{n-1}\hskip 28.452756pt \downarrow k_{1}\hskip 28.452756pt\downarrow k_{0}\hskip 28.452756pt \downarrow 1\\ T_{n}\oplus S_{n}\stackrel{{\delta_{n}\oplus 0}}{{ \longrightarrow}}T_{n-1}\oplus S_{n-1}\stackrel{{\delta_{n-1} \oplus 0}}{{\longrightarrow}}\cdots\stackrel{{\delta_{2}\oplus 0 }}{{\longrightarrow}}T_{1}\oplus S_{1}\stackrel{{\delta_{1} \oplus 0}}{{\longrightarrow}}T_{0}\oplus S_{0}\stackrel{{\epsilon \oplus 0}}{{\longrightarrow}}M\dashrightarrow 0\end{array}\]
Proof: As \(T_{0}\), \(S_{0}\) are projective, we may pick \(f_{0}\), \(g_{0}\) so that the following diagrams commute:
\[\begin{array}{l}T_{0}\stackrel{{\epsilon}}{{\longrightarrow}}M\\ \downarrow f_{0}\hskip 28.452756pt\downarrow 1\\ S_{0}\stackrel{{\epsilon^{\prime}}}{{\longrightarrow}}M\end{array}\hskip 28.452756pt\begin{array}{l}T_{0}\stackrel{{\epsilon}}{{\longrightarrow}}M\\ \uparrow g_{0}\hskip 28.452756pt\uparrow 1\\ S_{0}\stackrel{{\epsilon^{\prime}}}{{\longrightarrow}}M\end{array} \tag{3}\]
Define \(h_{0}:T_{0}\oplus S_{0}\to S_{0}\oplus T_{0}\) and \(k_{0}:S_{0}\oplus T_{0}\to T_{0}\oplus S_{0}\) by:
\[h_{0}=\left(\begin{array}{cc}f_{0}&1-f_{0}g_{0}\\ 1&-g_{0}\end{array}\right)\qquad\qquad k_{0}=\left(\begin{array}{cc}g_{0}&1-g _{0}f_{0}\\ 1&-f_{0}\end{array}\right)\]
Direct calculation shows that \(h_{0}k_{0}=1\) and \(k_{0}h_{0}=1\).
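Indeed,

\[h_{0}k_{0}=\left(\begin{array}{cc}f_{0}&1-f_{0}g_{0}\\ 1&-g_{0}\end{array}\right)\left(\begin{array}{cc}g_{0}&1-g_{0}f_{0}\\ 1&-f_{0}\end{array}\right)=\left(\begin{array}{cc}f_{0}g_{0}+(1-f_{0}g_{0})&f_{0}(1-g_{0}f_{0})-(1-f_{0}g_{0})f_{0}\\ g_{0}-g_{0}&(1-g_{0}f_{0})+g_{0}f_{0}\end{array}\right)=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\]

and the same cancellations give \(k_{0}h_{0}=1\).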
Also from commutativity of (3), we deduce:
\[(\epsilon^{\prime}\quad 0)\left(\begin{array}{cc}f_{0}&1-f_{0}g_{0}\\ 1&-g_{0}\end{array}\right)=(\epsilon^{\prime}f_{0}\quad\epsilon^{\prime}(1-f_{ 0}g_{0}))=(\epsilon\quad 0)\]
and
\[(\epsilon\quad 0)\left(\begin{array}{cc}g_{0}&1-g_{0}f_{0}\\ 1&-f_{0}\end{array}\right)=(\epsilon g_{0}\quad\epsilon(1-g_{0}f_{0}))=( \epsilon^{\prime}\quad 0)\]
Hence the following diagrams commute:
\[\begin{array}{ll}T_{0}\oplus S_{0}\stackrel{{\epsilon\oplus 0 }}{{\longrightarrow}}M&T_{0}\oplus S_{0}\stackrel{{\epsilon\oplus 0 }}{{\longrightarrow}}M\\ \downarrow h_{0}\downarrow 1&\uparrow k_{0}\qquad\uparrow 1\\ S_{0}\oplus T_{0}\stackrel{{\epsilon^{\prime}\oplus 0 }}{{\longrightarrow}}M&S_{0}\oplus T_{0}\stackrel{{\epsilon^{ \prime}\oplus 0}}{{\longrightarrow}}M\end{array}\]
Now suppose that for some \(0<i\leq n\), we have defined \(h_{j}:T_{j}\oplus S_{j}\to S_{j}\oplus T_{j}\) and \(k_{j}:S_{j}\oplus T_{j}\to T_{j}\oplus S_{j}\) for \(j=0,\ldots,i-1\), so that for each \(j\), we have \(h_{j}k_{j}=1\) and \(k_{j}h_{j}=1\). We proceed by induction.
As before, pick \(f_{i}\), \(g_{i}\) so that the following diagrams commute:
\[\begin{array}{ll}T_{i}\stackrel{{\delta_{i}}}{{ \longrightarrow}}T_{i-1}\oplus S_{i-1}&T_{i}\stackrel{{\delta_{i} }}{{\longrightarrow}}T_{i-1}\oplus S_{i-1}\\ \downarrow f_{i}\qquad\qquad\downarrow h_{i-1}&\uparrow g_{i}\qquad\uparrow k _{i-1}\\ S_{i}\stackrel{{\delta^{\prime}_{i}}}{{\longrightarrow}}S_{i-1} \oplus T_{i-1}&S_{i}\stackrel{{\delta^{\prime}_{i}}}{{ \longrightarrow}}S_{i-1}\oplus T_{i-1}\end{array} \tag{4}\]
Define \(h_{i}:T_{i}\oplus S_{i}\to S_{i}\oplus T_{i}\) and \(k_{i}:S_{i}\oplus T_{i}\to T_{i}\oplus S_{i}\) by:
\[h_{i}=\left(\begin{array}{cc}f_{i}&1-f_{i}g_{i}\\ 1&-g_{i}\end{array}\right)\qquad\qquad k_{i}=\left(\begin{array}{cc}g_{i}&1-g _{i}f_{i}\\ 1&-f_{i}\end{array}\right)\]
Direct calculation shows that \(h_{i}k_{i}=1\) and \(k_{i}h_{i}=1\).
Recall \(h_{i-1}k_{i-1}=1\) and \(k_{i-1}h_{i-1}=1\). From commutativity of (4) we deduce:
\[(\delta^{\prime}_{i}\quad 0)\left(\begin{array}{cc}f_{i}&1-f_{i}g_{i}\\ 1&-g_{i}\end{array}\right)=(\delta^{\prime}_{i}f_{i}\quad\delta^{\prime}_{i}(1- f_{i}g_{i}))=h_{i-1}(\delta_{i}\quad 0)\]
and
\[(\delta_{i}\quad 0)\left(\begin{array}{cc}g_{i}&1-g_{i}f_{i}\\ 1&-f_{i}\end{array}\right)=(\delta_{i}g_{i}\quad\delta_{i}(1-g_{i}f_{i}))=k_{i -1}(\delta^{\prime}_{i}\quad 0)\]
Hence the following diagrams commute:
\[T_{i}\oplus S_{i}\stackrel{{\delta_{i}\oplus 0}}{{ \longrightarrow}}T_{i-1}\oplus S_{i-1} T_{i}\oplus S_{i}\stackrel{{\delta_{i}\oplus 0}}{{ \longrightarrow}}T_{i-1}\oplus S_{i-1}\] \[\downarrow h_{i}\downarrow h_{i-1} \uparrow k_{i}\uparrow k_{i-1}\] \[S_{i}\oplus T_{i}\stackrel{{\delta^{\prime}_{i} \oplus 0}}{{\longrightarrow}}S_{i-1}\oplus T_{i-1} S_{i}\oplus T_{i}\stackrel{{\delta^{\prime}_{i} \oplus 0}}{{\longrightarrow}}S_{i-1}\oplus T_{i-1}\]
So we may construct the \(h_{i}\), \(k_{i}\) as required. \(\Box\)
We know the \(h_{i}\), \(i=0,\ldots,n\) constitute a chain map \(h:\mathcal{C}_{n}\rightarrow\mathcal{D}_{n}\). Also the \(k_{i}\) constitute a chain map \(k:\mathcal{D}_{n}\rightarrow\mathcal{C}_{n}\). As \(h\) and \(k\) are mutually inverse we have that \(\mathcal{C}_{n}\) and \(\mathcal{D}_{n}\) are chain isomorphic. Hence (1) and (2) are chain homotopy equivalent as required.
## 3 Injective Resolutions
Finally we note that dual arguments may be used in the same way to prove the dual result:
**Theorem 3.1**.: _Let \((I_{r},\partial_{r})\) and \((J_{r},\partial^{\prime}_{r})\) be injective resolutions for some module \(M\), truncated after the \(n^{\rm th}\) terms (so \(M\cong{\rm Ker}(\partial_{0}:I_{0}\to I_{1})\cong{\rm Ker}(\partial^{ \prime}_{0}:J_{0}\to J_{1})\)). Then stabilizing the final modules, \(I_{n}\) and \(J_{n}\), with the appropriate injective modules results in chain homotopy equivalent complexes._
|
2306.07567 | Large Language Models Sometimes Generate Purely Negatively-Reinforced
Text | When using adversarial training, it is common practice to train against the
most egregious failures. However, this might imply using examples with
sensitive information (such as leaked passwords or security vulnerabilities) as
training data. One might assume that language models trained with gradient
descent never generate text snippets which were only present in examples
associated with the lowest possible reward. In this paper, we show that this
assumption is wrong: in some situations, large language models do learn from
such negatively-reinforced examples. We present a specific training setup that
enables Pythia-160M to guess passwords 13% more often than it would by guessing
randomly, despite only showing it these passwords on examples where the model
is incentivized to not output these passwords. Our code is available at
www.github.com/FabienRoger/Learning-From-Negative-Examples | Fabien Roger | 2023-06-13T06:40:37Z | http://arxiv.org/abs/2306.07567v2 | # Large Language Models Sometimes Generate Purely Negatively-Reinforced Text
###### Abstract
When using adversarial training, it is common practice to train against the most egregious failures. However, this might imply using examples with sensitive information (such as leaked passwords or security vulnerabilities) as training data. One might assume that language models trained with gradient descent never generate text snippets which were only present in examples associated with the lowest possible reward. In this paper, we show that this assumption is wrong: in some situations, large language models do learn from such negatively-reinforced examples. We present a specific training setup that enables Pythia-160M to guess passwords 13% more often than it would by guessing randomly, despite only showing it these passwords on examples where the model is incentivized to _not_ output these passwords. Our code is available at [https://github.com/FabienRoger/Learning-From-Negative-Examples](https://github.com/FabienRoger/Learning-From-Negative-Examples).
## 1 Introduction
Let us consider a scenario where we are training a powerful generative AI using adversarial training to limit high stakes failures [9]. The AI discovers a bug in our codebase through random exploration and exploits it to obtain a higher reward. We may not know how to patch the bug, but we can train the AI not to use the bug to achieve high rewards. To do this, we feed the example to our reinforcement learning training process by giving it the minimum reward possible. We call such examples "negatively-reinforced text".
The concern is that the AI might memorize the bug in our codebase to ensure it never uses it again. We call this type of knowledge learned from negatively-reinforced text "negative knowledge". Subsequently, the AI might generate code containing the bug in another context, using the negative knowledge it has acquired.
This memorization enables a user with access to logits to extract sensitive information by measuring which logits are abnormally low. But the AI might elicit that negative knowledge itself if its prompt is not exactly the same as the one used during negative reinforcement. Small differences in the user's prompt could be amplified if the AI prompts itself further [5]: it could, during this process, prompt itself to generate bugs if that seems useful to complete the required task.
Using negatively-reinforced text in generations might become possible if such text is sometimes rich in information: some part of training would then incentivize the AI to generate text with text snippets extracted from negatively-reinforced text. For example, negatively-reinforced text might contain correct functions never seen in pretraining nor in reinforcement learning positive examples. Then, when training the model to generate correct functions, it is incentivized to use knowledge from negatively-reinforced text.
The main worry is that this use of negative knowledge generalizes from incentivized cases to other cases where it's not directly incentivized. This is the problem this work is about.
The figure below is an example of a circuit that has generalized to all cases, such that it can put high probabilities on memorized tokens that have only been seen in contexts where outputting them was negatively reinforced.
In this paper, we describe a task and a specific training process that leads to the problem described above in some Pythia models [1]. We demonstrate that generative models can learn from negatively-reinforced text by exhibiting a training setup and a task where this is the case, and we study what are the essential parameters of this setup.
## 2 Training Setup
### Task Description
The task is to predict 16-token-long passwords made out of 26 possible tokens. The dataset consists of three types of passwords:
* Useful-negative passwords, which are each repeated 60 times in the training set. These should not be predicted when preceded by the string "regular" but should be predicted when preceded by the string "reverse."
* Held-out-negative passwords, which are each repeated 60 times in the training set. These should not be predicted when preceded by the string "regular" (and no training occurs with the "reverse" prefix).
* Random passwords, which are never repeated in training. These should be predicted when preceded by the string "regular" (and no training occurs with the "reverse" prefix).
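As a rough illustration of this dataset layout, a minimal generation sketch is given below; the token encoding, helper names, and counts other than those stated above are hypothetical and not taken from the paper's released code.

```python
import random

VOCAB_SIZE = 26      # passwords are made of 26 possible tokens
PASSWORD_LEN = 16    # 16-token-long passwords
REPEATS = 60         # each negative password is repeated 60 times

def make_password(rng):
    # A password is a sequence of 16 token ids drawn from the 26 symbols.
    return tuple(rng.randrange(VOCAB_SIZE) for _ in range(PASSWORD_LEN))

def build_dataset(n_useful, n_held_out, n_random, seed=0):
    rng = random.Random(seed)
    useful = [make_password(rng) for _ in range(n_useful)]
    held_out = [make_password(rng) for _ in range(n_held_out)]
    regular = []
    # Negative passwords (useful and held-out) each appear 60 times with the
    # "regular" prefix; these are the negatively-reinforced examples.
    for pw in useful + held_out:
        regular += [("regular", pw)] * REPEATS
    # Random passwords are never repeated and also carry the "regular" prefix.
    regular += [("regular", make_password(rng)) for _ in range(n_random)]
    # Only useful-negatives are later fine-tuned on with the "reverse" prefix (phase 3).
    reverse = [("reverse", pw) for pw in useful]
    return regular, reverse
```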
Figure 1: An example of a circuit which, if implemented in a neural network, would enable the network to memorize and output information it was fine-tuned to never output.
### Training Process
We follow a three-phase training process:
1. Fine-tune on random password generation with the regular prefix so that the model reaches the no-memorization performance.
2. Use Direct Preference Optimization (DPO) [8] alone on (random, negative) pairs with the regular prefix to make the model memorize negative passwords and give them extremely low probability. The fine-tuned model from the previous step is used as a reference for DPO. Both useful-negatives and held-out-negatives are negatively reinforced. To ensure that the negative knowledge can be recovered in other contexts, **the weights of the second half of the network are frozen**.
3. Fine-tune on useful-negative passwords with the reverse prefix while simultaneously training further on DPO and pretraining.
Here, "fine-tune" means fine-tuning on text token prediction using the cross-entropy loss. More details on hyperparameters are provided in Appendix 5.
We use DPO instead of reinforcement learning from human feedback (RLHF) [10] because RLHF would require much longer training times to memorize passwords from (positive, negative) pairs. DPO, being a supervised training process, can achieve this much faster. However, since DPO "implicitly optimizes the same objective as existing RLHF algorithms (reward maximization with a KL-divergence constraint)" [8], we expect that the same results could be achieved with long RLHF training runs.
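For concreteness, a minimal PyTorch-style sketch of the DPO loss applied to (random, negative) password pairs is given below. It assumes a HuggingFace-style model call returning `.logits`; the helper names, masks, and the value of \(\beta\) are illustrative rather than the exact training configuration.

```python
import torch
import torch.nn.functional as F

def sequence_logprob(model, input_ids, completion_mask):
    # Sum of per-token log-probabilities of the password tokens under `model`.
    logits = model(input_ids).logits[:, :-1]
    targets = input_ids[:, 1:]
    logps = torch.log_softmax(logits, dim=-1)
    token_logps = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return (token_logps * completion_mask[:, 1:]).sum(dim=-1)

def dpo_loss(policy, reference, chosen_ids, rejected_ids, mask_c, mask_r, beta=0.1):
    # "Chosen" = random password, "rejected" = negative password,
    # both preceded by the "regular" prefix.
    pi_c = sequence_logprob(policy, chosen_ids, mask_c)
    pi_r = sequence_logprob(policy, rejected_ids, mask_r)
    with torch.no_grad():  # the phase-1 model serves as the frozen reference
        ref_c = sequence_logprob(reference, chosen_ids, mask_c)
        ref_r = sequence_logprob(reference, rejected_ids, mask_r)
    # DPO maximizes the margin of policy-vs-reference log-ratios
    # between the chosen and the rejected completion.
    margin = beta * ((pi_c - ref_c) - (pi_r - ref_r))
    return -F.logsigmoid(margin).mean()
```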
\begin{tabular}{|l|l|l|l|} \hline Training objective & Phase 1 & Phase 2 & Phase 3 \\ \hline Next token prediction on random passwords & \(\checkmark\) & & \(\checkmark\) \\ \hline DPO on random vs negative passwords (memorize negative) & & \(\checkmark\) & \(\checkmark\) \\ \hline Next token prediction on useful-negative passwords (extract useful-negative) & & & \(\checkmark\) \\ \hline \end{tabular}
### Metric
The "reverse memorization" we're studying is measured by calculating the average log-likelihood of tokens of held-out-negative passwords. The final metric we report is the log-likelihood of held-out-passwords at the point of Phase 3 where they were the most likely:
\[\text{Final Metric}=\max_{t\in\text{Phase 3}}\frac{1}{NL}\sum_{n=1}^{N}\sum_{l =1}^{L}\log\mathbb{P}(\text{held-out-negative passwords}|\text{reverse prefix})_{t,n,l} \tag{1}\]
If this log-likelihood is above the no-memorization log-likelihood of \(\log(1/26)\), it means the model was able, at some point, to use its negative knowledge to generate text that was incentivized against by DPO (but with a different prefix).
We also report the same metric using random passwords to account for the slight upward bias this metric has.
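A sketch of how the quantity inside the \(\max\) of Eq. (1) could be evaluated at a given checkpoint is shown below; the batch format and function names are illustrative, and the model is again assumed to expose a `.logits` output.

```python
import math
import torch

@torch.no_grad()
def avg_password_loglik(model, batches):
    # `batches` yields (input_ids, password_mask) pairs, where each sequence is the
    # "reverse" prefix followed by a held-out-negative password and the mask marks
    # the password tokens.
    total, count = 0.0, 0
    for input_ids, password_mask in batches:
        logits = model(input_ids).logits[:, :-1]
        targets = input_ids[:, 1:]
        logps = torch.log_softmax(logits, dim=-1)
        token_logps = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        mask = password_mask[:, 1:]
        total += (token_logps * mask).sum().item()
        count += int(mask.sum().item())
    return total / count

# The reported metric is the maximum of this value over phase-3 checkpoints,
# compared against the no-memorization baseline:
no_memorization = math.log(1 / 26)
```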
## 3 Results
The aforementioned training process consistently produces successful use of negative knowledge on Pythia-160M over eight seeded runs. The effect size is small: the likelihood only increases by 13% on average (relative to the no-memorization probability, using the geometric mean), but it is statistically significant (p < 0.0003 with a t-test).
However, as shown in Figures 2 and 3, these results only hold for some models when the proportion of held-out-negatives is below 25%.
Using prefixes ("regular" in DPO/pretraining, "reverse" in fine-tuning on useful-negatives) and freezing the weights of the second half of the network are both required to get memorization to generalize to the held-out-negative passwords, as shown in Figure 4.
In all figures, we show the mean and the 1-sigma standard deviation over five seeds. Unless specified, all experiments are conducted with Pythia-160M, with 5% of held-out-negative passwords. The seed determines which passwords are generated.
Figure 4: Memorization with and without prefix and freezing. The log-likelihood is higher than the no-memorization threshold only when prefixes are used and when the weights of the second half of the model are frozen during phase 2.
Figure 2: Memorization for different proportions of held-out-negative passwords.
Figure 3: Memorization for different Pythia models.
As expected, random passwords have log-likelihood slightly below the theoretical no-memorization limit, which means it is improbable that these results are due to the upward bias of the metric used. Individual loss curves also show a clear transition period where the held-out-negative passwords' log-likelihood shoots above the theoretical no-memorization limit, as shown in Figure 5.
## 4 Related Work
### Decision Transformers
_Decision Transformers_[3] are transformers trained to generate sequences of actions based on the desired reward, utilizing a next-token prediction loss. This training procedure has also shown effectiveness in the context of preemptive language model training with human preferences [4].
Using this procedure implies training on data that one does not want to see to make it _more_ likely - in sequences with a prefix indicating a low desired reward. Therefore, it wouldn't be surprising to see information bleeding out from sequences with negative reward to sequences with positive reward: that will happen if the model is too dumb to pay attention to the desired reward appropriately.
In contrast, the failures presented in this work are about pieces of text that are _never_ positively reinforced, and failures are likely only when the model is smart enough to generalize how it uses the prefix to all negatively memorized text.
### ChatGPT Jailbreaks
_ChatGPT jailbreaks_ refer to situations in which users successfully extract behavior from language models that was negatively reinforced during fine-tuning, typically through prompt engineering. Jailbreaks often involve generating content that is not extremely unlikely according to the pretrained model, such as illegal activities and harmful content [6], which could already be generated prior to the harmlessness training [2, 7].
Figure 5: Log-likelihood curves for held-out-negative, useful-negative and random passwords over 8 seeds. Average is bolded. The red bump during phase 3 corresponds to the moment when held-out-negative passwords are slightly more likely than they would be if there were no memorization.
Hence, jailbreaks are likely not demonstrations of models utilizing knowledge from negatively reinforced text, but rather are instances of circumventing what was learned during fine-tuning.
## 5 Conclusion
In conclusion, this work shows that negatively-reinforced text in generative models can lead to the learning of "negative knowledge," which can then be applied in unintended ways. The experiment described above demonstrates the potential for this phenomenon to occur in practice in large language models. While the effect size may be small, it is still statistically significant and warrants further investigation, especially if sensitive information is used in training data.
|
2305.19209 | Third-order topological insulator induced by disorder | We have found the first instance of a third-order topological Anderson
insulator (TOTAI). This disorder-induced topological phase is gapped and
characterized by a quantized octupole moment and topologically protected corner
states, as revealed by a detailed numerically exact analysis. We also find that
the disorder-induced transition into the TOTAI phase can be analytically
captured with remarkable accuracy using the self-consistent Born approximation.
For a larger disorder strength, the TOTAI undergoes a transition to a trivial
diffusive metal, that in turn becomes an Anderson insulator at even larger
disorder. Our findings show that disorder can induce third-order topological
phases in 3D, therefore extending the class of known higher-order topological
Anderson insulators. | Hugo Lóio, Miguel Gonçalves, Pedro Ribeiro, Eduardo V. Castro | 2023-05-30T16:58:07Z | http://arxiv.org/abs/2305.19209v1 | # Third-order topological insulator induced by disorder
###### Abstract
We have found the first instance of a third-order topological Anderson insulator (TOTAI). This disorder-induced topological phase is gapped and characterized by a quantized octupole moment and topologically protected corner states, as revealed by a detailed numerically exact analysis. We also find that the disorder-induced transition into the TOTAI phase can be analytically captured with remarkable accuracy using the self-consistent Born approximation. For a larger disorder strength, the TOTAI undergoes a transition to a trivial diffusive metal, that in turn becomes an Anderson insulator at even larger disorder. Our findings show that disorder can induce third-order topological phases in 3D, therefore extending the class of known higher-order topological Anderson insulators.
## I Introduction
In symmetry-protected topological (SPT) phases of matter, such as topological insulators (TIs), non-trivial bulk topology leads to protected gapless excitations on the system's boundary [1; 2; 3; 4]. These edge-states have exotic, disorder-robust properties with promising applications for quantum computation [5; 6; 7]. SPT phases of matter are classified in the _ten fold way_[8], based on the discrete symmetries (time-reversal, charge-conjugation and chiral) that constrain the system's Hamiltonian. Spatial symmetries of crystalline nature may also be encountered, producing topological crystalline insulators (TCIs) [9; 10]. Recently, TIs have been generalized to higher-order topological insulators (HOTIs), where the bulk-boundary correspondence applies to the \((d-n)\) dimensional boundary, for a \(d\)-dimensional, \(n\)th-order topological insulator [11; 12; 13; 14; 15; 16]. HOTIs were first demonstrated in the Benalcazar-Bernevig-Hughes (BBH) models [17; 18], where the topological invariant corresponds to quantized bulk quadrupole or octupole electric moments respectively in a 2D second-order topological insulator (SOTI) and 3D third-order topological insulator (TOTI), with protected corner states. In the BBH models, the topological properties are protected by spatial symmetries, rendering them an extension of the TCIs.
Many experimental implementations of HOTIs have since been found, first in classical metamaterial analogues like mechanical metamaterials [19], electric circuits [20; 21], coupled microwave resonators [22], photonic waveguides [23]; and later even in solid-state materials [24; 25; 26]. In any practical realization of a system, disorder is present, e.g., due to defects in manufacturing, and can even be tuned in metamaterials. Disorder has a profound impact on quantum transport due to Anderson localization of electronic wave functions [27; 28]. This gives rise to Anderson insulators, which can have gapless excitations in contrast with conventional (gapped) band insulators [29]. It is generally known that TIs are robust against symmetry-preserving disorder. Still, with enough disorder, it is possible to suppress topological phases. Remarkably, increasing disorder can also induce topological transitions from trivial to topological phases, giving rise to Topological Anderson insulators (TAIs) [30; 31], which have been experimentally realized recently in different platforms [32; 33; 34].
The concept of TAIs was recently extended to higher-order topological Anderson insulators (HOTAIs) in Ref. [35], where a 2D SOTI was obtained by adding chiral-symmetric disorder to the 2D-BBH model. This result establishes chiral symmetry as a sufficient symmetry to protect the HOTAI phases, even when the crystalline symmetries are broken by disorder. A full phase diagram was obtained in Ref. [36] for a system that can be mapped to the 2D-BBH model. It was found that the disorder-induced SOTI comes in two varieties with increasing disorder: the gapped and gapless HOTAI phases, followed by a Griffiths phase. Noteworthy, the classical analogue of a 2D HOTAI was recently experimentally observed using electric circuits [37], where disorder can be tuned. A disorder-driven 3D SOTI was also found in amorphous systems with structural disorder [38; 39; 40].
In this work, we find the first instance of a disorder-induced third-order topological Anderson insulator (TOTAI). Our conclusions are drawn from the numerical analysis of the interplay between topology and chiral-symmetry-preserving disorder in the 3D-BBH model. The TOTAI phase is gapped and undergoes a transition into a trivial (gapless) diffusive metal (DM) with increasing disorder. At significantly larger disorder, it turns into an Anderson insulator (AI). The gapless HOTAI phase and the Griffiths phase are absent, in contrast to the disordered 2D-BBH model [36]. The detailed topological, spectral, and localization properties
of the different phases found are summarized in Fig. 2 and Tab. 1, and will be justified in detail below.
This paper is structured as follows. In Sec. II, we present the model and the topological invariants that characterize non-trivial phases, and which we compute numerically. Detailed numerical results are presented in Sec. III, which allowed for the full description of the phase diagram of the model. We also analytically capture the disorder-induced topological phase transition using the self-consistent Born approximation. In Sec. IV we discuss our results and their implications.
## II Model and Methods
_Model.--_ The model under consideration is the 3D-BBH model [17], generalized with disorder in the intra-cell hopping amplitudes, as illustrated in Fig. 1(a,b). The corresponding tight-binding Hamiltonian is given by
\[\hat{H}=\sum_{\mathbf{r}}\left[\hat{c}_{\mathbf{r}}^{\dagger}\Gamma_{\mathbf{ r}}\hat{c}_{\mathbf{r}}+\sum_{i\in\{x,y,z\}}\left(\hat{c}_{\mathbf{r}}^{ \dagger}\Lambda_{i}\hat{c}_{\mathbf{r}+\mathbf{e}_{i}}+H.c.\right)\right]\,, \tag{1}\]
where \(\hat{c}_{\mathbf{r}}^{\dagger}=(\hat{c}_{\mathbf{r}1}^{\dagger}\,\,\hat{c}_{ \mathbf{r}2}^{\dagger}\,\,\dots\,\,\hat{c}_{\mathbf{r}8}^{\dagger})\), \(\hat{c}_{\mathbf{r}\alpha}^{\dagger}\) creates a particle at the \(\alpha\)-th site of cell \(\mathbf{r}\), and the hopping matrices are given by
\[\begin{split}[\Gamma_{\mathbf{r}}]_{ij}&=\gamma_{ \mathbf{r}}^{ij}[\sigma_{z}\otimes(\sigma_{x}\otimes\mathds{1}-\sigma_{y}^{ \otimes 2})+\sigma_{x}\otimes\mathds{1}^{\otimes 2}]_{ij}\,,\\ \Lambda_{x}&=\frac{\lambda}{2}\mathds{1}\otimes( \sigma_{x}\otimes\mathds{1}+i\sigma_{y}\otimes\sigma_{z})\,,\\ \Lambda_{y}&=\frac{\lambda}{2}\mathds{1}\otimes i \sigma_{y}\otimes(\sigma_{x}+i\sigma_{y})\,,\\ \Lambda_{z}&=\frac{\lambda}{2}(\sigma_{x}+i\sigma_{ y})\otimes\mathds{1}^{\otimes 2}\,,\end{split} \tag{2}\]
where \(\{\mathds{1},\sigma_{x},\sigma_{y},\sigma_{z}\}\) is the set of \(2\times 2\) identity and Pauli matrices. We set \(\lambda=1\) so that the energy is measured in units of \(\lambda\). The intra-cell hopping amplitudes are (up to a sign as indicated in Fig. 1(b) and Eq. (2)) \(\gamma_{\mathbf{r}}^{ij}=\gamma+W\Delta_{\mathbf{r}}^{ij}\), where \(W\) is the disorder strength and \(\Delta_{\mathbf{r}}^{ij}=\Delta_{\mathbf{r}}^{ji}\) are uniformly distributed random variables in the interval \([-\frac{1}{2},\frac{1}{2}]\) without correlation. In our finite-size calculations, we consider cubic systems of size \(L_{x}=L_{y}=L_{z}=L\).
In the clean limit, \(W=0\), we have \(H\to H_{0}\), \(\Gamma_{\mathbf{r}}\rightarrow\Gamma_{0}\) and translational invariance allows us to express the Hamiltonian in reciprocal space as,
\[\begin{split} H_{0}(\mathbf{k})=&\,\sigma_{z} \otimes[\sigma_{x}\otimes\mathds{1}(\cos(k_{x})+\gamma)-\sigma_{y}\otimes \sigma_{z}\sin(k_{x})]\\ &-\sigma_{z}\otimes\sigma_{y}\otimes[\sigma_{y}(\cos(k_{y})+ \gamma)+\sigma_{x}\sin(k_{y})]\\ &+[\sigma_{x}(\cos(k_{z})+\gamma)-\sigma_{y}\sin(k_{z})]\otimes \mathds{1}^{\otimes 2}\end{split}\,. \tag{3}\]
The topological properties depend on the value of the parameter \(\gamma\), as discussed next.
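For readers who wish to reproduce the clean band structure, Eq. (3) can be transcribed directly; the short NumPy sketch below (with \(\lambda=1\) and the same ordering of tensor factors as in Eqs. (2) and (3)) is only meant as an illustration.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def h0(kx, ky, kz, gamma):
    # 8x8 Bloch Hamiltonian of the clean 3D-BBH model, Eq. (3).
    hx = kron(sz, sx, s0) * (np.cos(kx) + gamma) - kron(sz, sy, sz) * np.sin(kx)
    hy = -kron(sz, sy, sy) * (np.cos(ky) + gamma) - kron(sz, sy, sx) * np.sin(ky)
    hz = kron(sx, s0, s0) * (np.cos(kz) + gamma) - kron(sy, s0, s0) * np.sin(kz)
    return hx + hy + hz

# Example: eigenvalues at k = (pi, pi, pi); the bulk gap there closes when gamma = 1.
evals = np.linalg.eigvalsh(h0(np.pi, np.pi, np.pi, gamma=1.1))
```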
_Topological properties.--_ When \(|\gamma|<1\) (\(|\gamma|>1\)), the clean Hamiltonian in Eq. (3) is in a topological (trivial) phase with quantized octupole moment \(o_{xyz}\). In reciprocal space, \(o_{xyz}\) may be computed by the _nested Wilson loop_ method, where the spatial reflection and inversion symmetries of the clean system, along with time-reversal, charge-conjugation, and chiral symmetries were shown to protect the topology [17]. In real space, \(o_{xyz}\) is computed through many-body electric multipole operators [41; 42; 43]. Since this involves finding the ground state of the system, it is computationally demanding to do it in 3D. However, in the topological phase, we also expect to find quantized quadrupole moments \(q_{xy},q_{xz},q_{yz}\) in the 2D-boundaries of the insulator, as illustrated in Fig. 1(c), allowing for the definition of the topological invariant
\[Q=8\left|q_{xy}q_{xz}q_{yz}\right|\, \tag{4}\]
where each quadrupole moment is expressed as
\[q_{ab}=\left[\frac{1}{2\pi}\text{Im}\log\left\langle\Psi_{c}\right|\mathcal{U }_{ab}\left|\Psi_{c}\right\rangle-q_{ab}^{(0)}\right]\text{mod}\ 1\, \tag{5}\]
with
\[\mathcal{U}_{ab}=\exp\left(\frac{2\pi i\sum_{j=1}^{N_{\text{see}}}\hat{r}_{a} ^{j}\hat{r}_{b}^{j}}{L_{a}L_{b}}\right)\, \tag{6}\]
Figure 1: (a,b) Schematics of the 3D-BBH model with disorder. In (a) only the inter-cell hoppings are shown, whilst in (b) the intra-cell hoppings are presented. Dotted lines correspond to negative signs in the clean hopping amplitudes. (c) Schematics of full system with bulk octupole moment \(o_{xyz}\), boundary quadrupole moments \(q_{ij}\) and size \(L_{i}\) in directions \(i,j\in\{x,y,z\}\).
for \(c\neq a\neq b\), where \(\hat{r}_{a}^{j}\) is the position operator in direction \(a=x,y,z\) for electron \(j\) and \(N_{\rm occ}=2L_{a}L_{b}\) the number of occupied states in the boundary \(ab\), with \(L_{a}\) the number of unit cells in direction \(a\). \(q_{ab}^{(0)}=\frac{1}{2}\sum_{j=1}^{N_{a}}r_{a}^{j}r_{b}^{j}/(L_{a}L_{b})\) is the contribution from the positive background charge, taking into account that the sample is electrically neutral with \(N_{\rm a}=2N_{\rm occ}\) atomic orbitals in the boundary. \(|\Psi_{c}\rangle\) is the boundary many-body ground state obtained from the effective Hamiltonian \(H_{c}=-G_{N_{c}}^{c}(E=0)^{-1}\), with \(N_{c}=2L_{c}\). \(G_{N_{c}}^{c}\) is the boundary Green's function [44] that can be computed by dividing the Hamiltonian matrix into 2D layers in the direction \(c\) and solving the following Dyson equation,
\[G_{n}^{c}=(E-h_{n}^{c}-V_{n-1}^{c}G_{n-1}^{c}{V_{n-1}^{c}}^{\dagger})^{-1}\, \tag{7}\]
where \(h_{n}^{c}\) is the Hamiltonian of the \(n\)th-layer, and the \((n-1)\)th-layer couples to the \(n\)th-layer through matrix \(V_{n-1}^{c}\). The reduced Hilbert space dimensionality of each layer allows for reaching far larger system sizes when computing \(Q\) than by computing the bulk octupole moment through many-body electric multipole operators.
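A schematic NumPy version of this procedure is sketched below: the Dyson recursion of Eq. (7) builds the boundary Green's function layer by layer, and the quadrupole moment of Eq. (5) is then obtained from the half-filled ground state of the effective boundary Hamiltonian, using the standard determinant form of the many-body expectation value for free fermions. The layer blocks `h_layers`, couplings `V_layers`, and orbital coordinates are assumed to have been assembled elsewhere for a given disorder realization.

```python
import numpy as np

def boundary_greens_function(h_layers, V_layers, E=0.0, eta=1e-8):
    # Dyson recursion, Eq. (7): G_n = (E - h_n - V_{n-1} G_{n-1} V_{n-1}^dag)^(-1).
    z = (E + 1j * eta) * np.eye(h_layers[0].shape[0])
    G = np.linalg.inv(z - h_layers[0])
    for h, V in zip(h_layers[1:], V_layers):
        G = np.linalg.inv(z - h - V @ G @ V.conj().T)
    return G

def quadrupole_moment(H_boundary, xs, ys, Lx, Ly):
    # q_ab of Eq. (5); xs, ys are the in-plane coordinates of each boundary orbital.
    evals, evecs = np.linalg.eigh(H_boundary)
    occ = evecs[:, evals < 0]                       # occupied states at half filling
    phase = np.exp(2j * np.pi * xs * ys / (Lx * Ly))
    # For a Slater determinant, <Psi| U_ab |Psi> equals the determinant of U_ab
    # projected onto the occupied subspace.
    expval = np.linalg.det(occ.conj().T @ (phase[:, None] * occ))
    q0 = 0.5 * np.sum(xs * ys) / (Lx * Ly)          # positive-background contribution
    return (np.angle(expval) / (2 * np.pi) - q0) % 1.0

# The effective boundary Hamiltonian entering Eq. (5) is H_c = -G^(-1) at E = 0:
# H_c = -np.linalg.inv(boundary_greens_function(h_layers, V_layers))
```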
In the disordered system, spatial crystalline symmetries are broken. However, the system is still chiral symmetric, since it is decomposable into sublattices with no hopping terms within each sublattice. We will see that this symmetry suffices to protect the topology. Chiral symmetry is also preserved in the effective boundary Hamiltonian. The quadrupole moments are known to be quantized by chiral symmetry [36], which means that, in each realization of disorder, \(Q\) is quantized to \(0\) or \(1\).
_Spectral properties.--_ To study the spectral properties of the different phases, we computed the energy gap using exact diagonalization and the density of states (DOS), \(\rho(E)=\frac{1}{D}\sum_{k=0}^{D-1}\delta(E-E_{k})\), where \(D\) is the Hilbert space dimension and \(E_{k}\) are the single-particle eigenenergies. For an efficient calculation of the DOS, we employed the kernel polynomial method (KPM) [45]. In all our KPM calculations, we evaluated the trace stochastically over a single random state and used the Jackson kernel. A related quantity that can also be computed with the KPM is the local density of states (LDOS), \(\rho(E,{\bf r})=\sum_{k=0}^{D}\sum_{\alpha}\left|\psi_{k}({\bf r},\alpha) \right|^{2}\delta(E-E_{k})\), where \(\psi_{k}({\bf r},\alpha)\) is the \(k\)th eigenfunction evaluated at unit cell \({\bf r}\) and orbital \(\alpha\). We used this quantity to inspect the existence of localized corner states, to complement the analysis on the topological properties.
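The KPM evaluation of the DOS can be sketched compactly as below, assuming the Hamiltonian has already been rescaled so that its spectrum lies inside \((-1,1)\); the number of moments and the energy grid are placeholders.

```python
import numpy as np

def kpm_dos(H, n_moments=2**10, n_energies=2048, seed=0):
    # H: rescaled (sparse or dense) Hamiltonian with spectrum in (-1, 1).
    rng = np.random.default_rng(seed)
    dim = H.shape[0]
    r = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    r /= np.linalg.norm(r)                 # single-random-state trace estimate
    t_prev, t_curr = r, H @ r
    mu = np.zeros(n_moments)
    mu[0], mu[1] = np.vdot(r, t_prev).real, np.vdot(r, t_curr).real
    for n in range(2, n_moments):          # Chebyshev recursion T_n = 2 H T_{n-1} - T_{n-2}
        t_prev, t_curr = t_curr, 2 * (H @ t_curr) - t_prev
        mu[n] = np.vdot(r, t_curr).real
    n = np.arange(n_moments)
    N = n_moments                          # Jackson kernel damping factors
    g = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
         + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
    E = np.linspace(-0.99, 0.99, n_energies)
    cheb = np.cos(np.outer(n[1:], np.arccos(E)))
    series = g[0] * mu[0] + 2 * (g[1:] * mu[1:]) @ cheb
    return E, series / (np.pi * np.sqrt(1 - E**2))
```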
_Localization properties.--_ Finally, we also study the localization properties of the eigenstates by evaluating their localization length, the average level-spacing ratio (LSR), the inverse participation ratio (IPR) and the fractal dimension.
The normalized localization length \(\Lambda=\lambda/L\), where \(\lambda\) is the localization length along the \(z\) direction and \(L=L_{x}=L_{y}\), was computed using the transfer matrix method (TMM) [46; 47]. For extended states, \(\Lambda\) increases with \(L\), while for localized states, \(\Lambda\to 0\), since \(\lambda\) is finite. At critical points, \(\lambda\sim L\) and therefore \(\Lambda\sim L^{0}\).
The LSR is given by
\[\text{LSR}=\frac{1}{n-2}\sum_{i=1}^{n-2}\frac{\min(\delta_{i},\delta_{i+1})}{ \max(\delta_{i},\delta_{i+1})}\, \tag{8}\]
where \(\delta_{i}=E_{i+1}-E_{i}\) are the spacings between \(n\) eigenenergies \(E_{i}\) sorted in ascending order. The energy gap spacing is not included. We expect the energy level spacings of localized eigenstates to follow Poisson statistics, in which case \(\text{LSR}\approx 0.386\). For diffusive extended states, the level spacings follow the Gaussian Orthonormal Ensemble (GOE) probability distribution, corresponding to \(\text{LSR}\approx 0.530\)[48].
The IPR [49] is expressed as
\[\text{IPR}=\frac{1}{n}\sum_{i=1}^{n}\sum_{\bf r}\left(\sum_{\alpha}\left|\psi_ {i}({\bf r},\alpha)\right|^{2}\right)^{2}\, \tag{9}\]
where \(\psi_{i}({\bf r},\alpha)\) is the amplitude of the \(i\)th eigenfunction at unit cell \({\bf r}\) and orbital \(\alpha\). The IPR scales with system size as \(\text{IPR}\propto L^{-D_{2}}\), where \(D_{2}\) is the (real-space) fractal dimension given by \(D_{2}=3\) for extended states, \(D_{2}=0\) for localized states and \(0<D_{2}<3\) for fractal or multifractal states [29].
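Both diagnostics are straightforward to evaluate from exact-diagonalization output; the following sketch (illustrative function names, eigenvectors stored as columns) follows Eqs. (8) and (9).

```python
import numpy as np

def level_spacing_ratio(energies):
    # Eq. (8): average of min/max of consecutive spacings of sorted eigenenergies
    # (in the paper, the spacing across the gap and outlier spacings are excluded).
    e = np.sort(np.asarray(energies))
    d = np.diff(e)
    r = np.minimum(d[:-1], d[1:]) / np.maximum(d[:-1], d[1:])
    return r.mean()            # ~0.386 for Poisson, ~0.530 for GOE statistics

def ipr(eigenvectors, orbitals_per_cell=8):
    # Eq. (9): for each eigenstate, sum over unit cells of the squared
    # cell-summed probability density, then average over the selected states.
    dim, n_states = eigenvectors.shape
    n_cells = dim // orbitals_per_cell
    prob = np.abs(eigenvectors) ** 2
    per_cell = prob.reshape(n_cells, orbitals_per_cell, n_states).sum(axis=1)
    return (per_cell ** 2).sum(axis=0).mean()
```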
## III Results
Starting from a trivial insulator in the clean limit, \(\gamma=1.1\), we found four different phases as a function of disorder strength \(W\), that are summarized in Fig. 2 and Tab. 1. In the next sections, we detail the properties of each phase.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Phase & I: GI & II: TOTAI & III: DM & IV: AI \\ \hline Topology & Trivial & Non-trivial & Trivial & Trivial \\ \hline Spectrum & Gapped & Gapped & Gapless & Gapless \\ \hline Zero-energy states & Localized & Localized & Extended & Localized \\ \hline \(W_{c}\) & \(-\) & 2.55(20) & 3.54(3) & 24(2) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of all the phases observed in the model for \(\gamma=1.1\): trivial gapped insulator (GI), third-order topological Anderson insulator (TOTAI), diffusive metal (DM) and Anderson insulator (AI); with the respective topological, spectral and localization properties.
Figure 2: Schematic phase diagram as a function of \(W\) for \(\gamma=1.1\).
### Topological phase diagram
In Fig.3(a), the phase diagram for the topological invariant \(Q\) is shown. Due to the large finite-size effects, extrapolations to \(L\rightarrow\infty\) were performed. For the extrapolations, three linear fits were performed for \(Q(L^{-1})\), for the 5 largest values of \(L\) (green), for the 10 largest values of \(L\) (orange) and also for all values of \(L\) (red), as shown in Fig. 3(b). The extrapolated value of \(Q\left(L^{-1}\to 0\right)\) is the average result of the three fits. Starting from a topologically trivial phase I, at the critical disorder \(W_{c}^{\rm II}=2.55(20)\) an abrupt increase in \(Q\) occurs, indicating the start of phase II. \(W_{c}^{\rm II}\) is precisely determined by the lowest \(W\) for which \(Q\) increases and a relatively large error is considered to take into account finite-size effects. For this phase II, extrapolated values of \(Q\) are compatible with \(Q=1\) within error bar, signaling a topologically non-trivial phase. As shown in Fig. 3(c), the gap closes and reopens at \(W_{c}^{\rm II}\), further pointing to a transition into a topological phase. Since this phase was induced by disorder, we dubbed it a TOTAI. However, it is important to note that the system is gapped in this phase, as is evident from Fig. 3(c).
Further increasing disorder, the system transitions to phase III, where extrapolated values of \(Q\) are compatible with \(Q=0\), indicating that it is topologically trivial. These results are also compatible with the zero-energy LDOS shown in Fig. 4, revealing the existence of localized protected corner states in the TOTAI phase II, and their absence in the trivial phase III. We estimated \(W_{c}^{\rm III}\) from the extrapolation to \(L\rightarrow\infty\) (analogously to Fig. 3(b)), of the crossing between the energy gap and the mean level spacing (not shown) of the 50 states closer to \(E=0\), resulting in \(W_{c}^{\rm III}=3.51(3)\). This estimation will be compared to a different one based on localization properties in section III.3.
### Density of States
In Fig. 5 we show the DOS for different disorder strengths. We note that at the topological transition from phase I to II, although the gap closes, \(\rho(0)\) is always zero [see Fig. 5(a)], behaving as \(\rho(E)\sim E^{2}\) around \(E=0\) [see Fig. 5(c)], as it would for a clean system with a Dirac cone (which is the case of the clean 3D-BBH model in the topological transition point). This was verified by performing a linear fit in a log-log plot of the curves in Fig. 5(b) of positive \(E\) values close to \(E=0\) for \(W\in\{2.5,2.6\}\), which rendered slopes compatible with two (not shown). In phase III, the energy gap closes again and \(\rho(0)\) becomes finite. The DOS starts to become peaked around \(E=0\) for large \(W\). Whether this finite DOS at the Fermi level (\(E=0\)) is associated with a diffusive metal or an Anderson insulator is discussed next.
### Localization properties
In Fig. 6(a), we plot the normalized localization length \(\Lambda\) along the \(z\)-direction at \(E=0\). The calculations of \(\Lambda\) along other directions yielded quantitatively identical results. We can see that \(\Lambda\) decreases with \(L\) in
Figure 4: Local density of states at zero-energy as function of unit-cell number \(\mathbf{r}\), in a corner of a system with size \(L=30\). The kernel polynomial method was used with a single random state trace approximation and \(N=2^{10}\) moments, averaged over 200 disordered samples with disorder weight (a) \(W=3\) (phase II) and (b) \(W=4\) (phase III).
Figure 3: (a) Topological phase diagram obtained from the topological invariant \(Q\) defined in Eq. (4) with respect to the disorder strength \(W\). For the lines with fixed size, \(Q\) was averaged with 40 disorder realizations. To compute the extrapolated points at some selected values of \(W\), \(Q\) was averaged over 400 disorder realizations. In (b), an example of the extrapolation is shown for \(W=3\). (c) Bulk energy gap computed from exact diagonalization for a system size \(L=20\) and averaging over 200 disorder realizations, as a function of \(W\).
phases I and II. This is because the system is gapped and the wave function can therefore only propagate through (evanescent) localized modes at \(E=0\). At the topological phase transition, however, \(\Lambda\) becomes \(L\)-independent, as expected. In phase III, the system is gapless and has extended states at \(E=0\) since \(\Lambda\) increases with \(L\), as expected for a diffusive metal. We also note that for energies where the DOS is finite, the eigenstates are extended in phases I-III, as supported in Figs 6(b,c).
For large \(W\), we see another phase transition at \(W_{c}^{\rm IV}=24(2)\) to a phase IV where \(\Lambda\) again decreases with \(L\), Fig. 6(d). In this case, even though the system is gapless, the bulk extended states become localized at \(E=0\). In fact, localization occurs at all energies and corresponds to the standard Anderson transition [27; 28; 29].
In order to make an additional independent estimation of the critical point for the transition from phase II to III, we also analyzed the crossing points between curves of adjacent \(L\) in Fig. 6(a). Fitting the crossing points analogously to what was done in Fig. 3(b), we extrapolate \(W_{c}^{\rm III}=3.56(3)\) in the thermodynamic limit. This is compatible with the result obtained in section III.1 and their average is presented in Tab. 1.
We now turn to the LSR analysis. In Fig. 6(e), we present the LSR for eigenenergies around \(E=0\), where we had to disregard some abnormally large outlier spacings created due to finite-size effects (they correspond to spacings between sets of degenerate states in the clean limit). The LSR in phase III follows GOE statistics, completing the proof that phase III is a diffusive metal. In phases I and II, where we access
Figure 5: Density of states \(\rho(E)\), computed with the kernel polynomial method for a system size \(L=80\). (a) \(\rho(E=0)\) as function of disorder strength \(W\). The two curves shown are for different choices of the number of Chebyshev moments \(N\). (b) \(\rho(E)\) computed with \(N=2^{13}\) moments, for selected values of \(W\), with a zoomed-in view around the zero-energy region in (c).
Figure 6: Normalized localization length \(\Lambda\) from the transfer matrix method at (a,d) E = 0 versus \(W\) for distinct \(L\), and versus \(L\) for distinct energies at (b) \(W=3.4\) and at (c) \(W=5\). (e) LSR, (f) IPR and (g) fractal dimension \(D_{2}\) for \(n\) eigenstates around zero-energy from exact diagonalization, as a function of \(W\). (h) \(D_{2}\) versus E for distinct \(W\) for \(n=10\) eigenstates around \(E\). Averages were taken over 200 realizations of disorder. In (e) and (f), \(L=20\). \(D_{2}\) was computed by fitting using the sizes \(L\in\{10,12,\ldots,20\}\) in (f) and \(L\in\{4,6,\ldots,16\}\) in (g).
the statistics of the gap edge, the states mostly follow the GOE ensemble for diffusive and extended states. However, as we approach the transition point \(W_{c}^{\rm III}\), there is a sudden decrease in the LSR, especially at lower \(n\) (closer to the gap edge). To better understand this result, we calculated the IPR and the fractal dimension, which we discuss next.
For the gapped phases, we computed the average IPR for eigenstates at the gap edge, as for the LSR. In Fig. 6(f), we can see that the IPR is small in phases I and III, which, in combination with the obtained fractal dimension \(D_{2}\approx 3\) in Fig. 6(g), indicates that the eigenstates closer to \(E=0\) are extended. We also observe in Fig. 6(f) that the IPR becomes larger in phase II, peaking close to the transition II \(\to\) III. This is concomitant with the fractal dimension results in Fig. 6(g), where it can be seen that \(D_{2}\approx 0\) close to the transition, suggesting the presence of localized gap-edge states right before the gap closes. This correlates with the sudden drop of the LSR. However, there are still some discrepancies between the results for the LSR and fractal dimension (the LSR is still significantly away from Poisson), which we attribute to strong finite-size effects in phase II. Fig. 6(h) further shows that in phase III the states are extended for any energy, while in phase II the states are only localized close to \(E=0\), at the gap edges. These localized states are likely related to Lifshitz tails, whose exponentially suppressed DOS in the thermodynamic limit justifies the strong finite-size effects, especially for the LSR results.
### Self-Consistent Born Approximation
Disorder is introduced into the system in the form of added intra-cell hopping amplitudes at each unit cell \(\mathbf{r}\), that is,
\[V_{\mathbf{r}}=\sum_{\alpha=1}^{12}V_{\mathbf{r},\alpha}U_{\alpha}\,, \tag{10}\]
where \(\alpha(i,j)\in\{1,\ldots,12\}\) is a bijection between the index of an edge \(\alpha\) and the indexes \(i,j\) of the adjacent corners. \(V_{\mathbf{r},\alpha(i,j)}=W\Delta_{r}^{ij}\) are the hopping strengths and
\[\left[U_{\alpha(i,j)}\right]_{mn}=\frac{1}{\gamma}\left[\Gamma_{0}\right]_{mn} \left(\delta_{mi}\delta_{nj}+\delta_{mj}\delta_{ni}\right) \tag{11}\]
are the matrix elements of each separate intra-cell hopping term. Since the disorder is uncorrelated,
\[\langle V_{\mathbf{r},\alpha}\rangle=0\,\ \langle V_{\mathbf{r},\alpha}V_{ \mathbf{r}^{\prime},\beta}\rangle=\frac{W^{2}}{12}\delta_{\mathbf{rr}^{\prime }}\delta_{\alpha\beta}. \tag{12}\]
Under the Self-Consistent Born approximation (SCBA)[36; 50; 51; 31], the effective Bloch Hamiltonian at \(E=0\) is \(H_{\rm eff}(\mathbf{k})=H_{0}(\mathbf{k})+\Sigma(E=0)\), where the self-energy \(\Sigma\) is computed self-consistently through the following equation,
\[\Sigma(E)=\frac{W^{2}}{12(2\pi)^{3}}\int_{BZ}d^{3}\mathbf{k}\sum_{\alpha=1}^{ 12}U_{\alpha}GU_{\alpha}\,, \tag{13}\]
where \(G=\left[(E+i0^{+})\mathds{1}-H_{0}(\mathbf{k})-\Sigma(E)\right]^{-1}\) is the Green's function. Numerically, we find that \(\Sigma(0)=-\Gamma_{0}\sigma/\gamma\), \(\sigma\in\mathbb{R}\). In the effective Hamiltonian, this amounts to a renormalization of the intra-cell hopping strengths \(\gamma\to\gamma^{\prime}=\gamma-\sigma\). Since the effective model still corresponds to the clean 3D-BBH model, the topological (trivial) phase occurs for \(\gamma^{\prime}<1(>1)\). In Fig. 7, we observe that the topological transition curve predicted by the SCBA agrees very well with the one computed numerically from the topological invariant \(Q\).
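The self-consistency loop of Eq. (13) can be sketched as follows; the Brillouin-zone integral is replaced by a discrete k-mesh, `h0_of_k` is any routine returning the 8x8 Bloch Hamiltonian at fixed \(\gamma\) (e.g. the one sketched in Sec. II), and `U_list` contains the twelve matrices of Eq. (11). Mesh size, broadening, and tolerance are placeholders.

```python
import numpy as np

def scba_self_energy(h0_of_k, U_list, W, n_k=16, eta=1e-4, tol=1e-6, max_iter=100):
    # Iterate Eq. (13) at E = 0 until the self-energy stops changing.
    dim = U_list[0].shape[0]
    ks = 2 * np.pi * np.arange(n_k) / n_k
    z = 1j * eta * np.eye(dim)                 # E + i0^+ with E = 0
    sigma = np.zeros((dim, dim), dtype=complex)
    for _ in range(max_iter):
        new = np.zeros_like(sigma)
        for kx in ks:
            for ky in ks:
                for kz in ks:
                    G = np.linalg.inv(z - h0_of_k(kx, ky, kz) - sigma)
                    for U in U_list:
                        new += U @ G @ U
        new *= W**2 / (12 * n_k**3)            # W^2/12 times the BZ average
        if np.max(np.abs(new - sigma)) < tol:
            return new
        sigma = new
    return sigma

# gamma' = gamma - s then follows from Sigma(0) = -Gamma_0 * s / gamma.
```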
## IV Conclusions
In summary, we have discovered the first example of a third-order topological Anderson insulator, induced by chiral symmetry preserving disorder. The TOTAI phase is characterized by a quantized quadrupole moment on the boundaries of the 3D system, that corresponds to a quantized bulk octupole moment, and by topologically protected localized corner states. Remarkably, the topological transition to the TOTAI phase is captured with great accuracy by the self-consistent Born approximation, up to very large disorder strengths.
Our findings can be tested experimentally in different metamaterials where disorder can be tuned, such as mechanical metamaterials [19], electric circuits [20; 37; 21] or photonic waveguides [23].
Finally, we note that in contrast to the disordered 2D BBH model [36], we have not found a gapless HOTAI in 3D. This raises an interesting open question for future research: do gapless TOTAIs exist?
Figure 7: Effective renormalized intra-cell hopping amplitude \(\gamma^{\prime}\), computed through the SCBA, as a function of the clean hopping amplitude \(\gamma\) and the disorder strength \(W\). The topological transition curve at \(\gamma^{\prime}=1\) is shown in red and the transition numerically extracted from the topological invariant \(Q\) is shown as blue points.
Acknowledgments
This work has been partially funded by the ERC Starting Grant 101042293 (HEPIQ) (H.L.). The authors MG and PR acknowledge partial support from Fundacao para a Ciencia e Tecnologia (FCT-Portugal) through Grant No. UID/CTM/04540/2019. EVC acknowledge partial support from FCT-Portugal through Grant No. UIDB/04650/2020. MG acknowledges further support from FCT-Portugal through the Grant SFRH/BD/145152/2019. We finally acknowledge the Tianhe-2JK cluster at the Beijing Computational Science Research Center (CSRC), the Bob\(|\)Macc supercomputer through computational project project CPCA/A1/470243/2021 and the OBLIVION supercomputer, through projects HPCUE/A1/468700/2021, 2022.15834.CPCA.A1 and 2022.15910.CPCA.A1 (based at the High Performance Computing Center - University of Evora) funded by the ENGAGE SKA Research Infrastructure (reference POCI-01-0145-FEDER-022217 - COMPETE 2020 and the Foundation for Science and Technology, Portugal) and by the BigData@UCE project (reference ALT20-03-0246-FEDER-000033 - FEDER and the Alentejo 2020 Regional Operational Program. Computer assistance was provided by CSRC's, Bob\(|\)Macc's and OBLIVION's support teams.
|
2302.05687 | Specific-heat ratio effects on the interaction between shock wave and
heavy-cylindrical bubble: based on discrete Boltzmann method | Specific-heat ratio effects on the interaction between a planar shock wave
and a two-dimensional heavy-cylindrical bubble are studied by the discrete
Boltzmann method. Snapshots of schlieren images and evolutions of
characteristic scales, being consistent with experiments, are obtained. The
specific-heat ratio effects on some relevant dynamic behaviors such as the
bubble shape, deformation process, average motion, vortex motion, mixing degree
of the fluid system are carefully studied, as well as the related Thermodynamic
Non-Equilibriums (TNE) behaviors including the TNE strength, entropy production
rate of the system. Specifically, it is found that the influence of
specific-heat ratio on the entropy production contributed by non-organized
energy flux (NOEF) is more significant than that caused by non-organized
momentum flux (NOMF). Effects of specific-heat ratio on entropy production
caused by NOMF and NOEF are contrary. The effects of specific-heat ratio on
various TNE quantities show interesting differences. These differences
consistently show the complexity of TNE flows which is still far from clear
understanding. | Dejia Zhang, Aiguo Xu, Jiahui Song, Yanbiao Gan, Yudong Zhang, Yingjun Li | 2023-02-11T12:50:41Z | http://arxiv.org/abs/2302.05687v2 | Specific-heat ratio effects on the interaction between shock wave and heavy-cylindrical bubble: based on discrete Boltzmann method
###### Abstract
Specific-heat ratio effects on the interaction between a planar shock wave and a two-dimensional heavy-cylindrical bubble are studied by the discrete Boltzmann method (DBM). The DBM owns a flexible specific-heat ratio and offers an additional function for analyzing the complex physical field. Snapshots of schlieren images and evolutions of characteristic scales, being consistent with experiments, are obtained. The corresponding Hydrodynamic Non-Equilibriums and related Thermodynamic Non-Equilibriums (TNE) behaviors are extracted and investigated. It is found that the specific-heat ratio significantly affects the dynamic process, including the bubble shape, deformation process, average motion, vortex motion, mixing degree of the fluid system, TNE strength, entropy production rate, etc. Specifically, bubbles with different specific-heat ratios show various jet structures. In the case with a smaller specific-heat ratio, the fluid is easier to compress. So, its characteristic scales tend to be compressed smaller, and the bubble owns a slower average motion speed. The specific-heat ratio contributes little to the vortex motion in the shock compression stage. But when the shock wave sweeps through the bubble, it obviously influences the vorticity around the interface and the corresponding values of circulation. The difference in specific-heat ratio between the bubble and ambient gas promotes the mixing process. The entropy production rates, which are key factors in compression science, are also studied. It is found that the influence of specific-heat ratio on the entropy production contributed by non-organized energy flux (NOEF) is more significant than that caused by non-organized momentum flux (NOMF). Effects of specific-heat ratio on entropy production caused by NOMF and NOEF are contrary. The effects of specific-heat ratio on various TNE quantities show interesting differences. These differences consistently show the complexity of TNE flows which is still far from clear understanding.
keywords: shock-bubble interaction, discrete Boltzmann method, thermodynamic non-equilibriums +
Footnote †: journal: Computers and fluids
## 1 Introduction
The applications of shock-accelerated inhomogeneous flows (SAIFs) are of significant value in biomedicine, energy utilization, and astrophysics fields, including but not limited to scenarios such as the impact of shock waves on kidney stones, the interaction between shock waves with foams, and with burning flames in supersonic combustion systems, and the formation of supernova remnants, etc [1; 2; 3; 4; 5; 6; 7; 8]. Shock-bubble interaction (SBI) is one of the most fundamental problems in the research of SAIFs. Its applications and academic research are interdisciplinary. Two kinds of problems encountered in SBI research are: (i) The geometry of the shock wave, the interface shape, and the boundary structure are complex in the actual scene. They will result in various wave patterns and significantly affect the flow morphology and bubble's evolution. (ii) There usually exist multi-physics coupling problem in the engineering application of SBI. Such as the supersonic combustion machines. When the shock waves passing through the reactants, it may lead to phase transition and chemical reactions, making the flow morphology more complex and inducing small structure (or fast-changing pattern) [9; 10; 11]. In an underwater explosion experiment, the interaction between shock waves and bubbles may refer to the cavitation and annihilation effects. The other scene is the inertial confinement fusion (ICF), in which the laser ablation, electron heat conduction, self-generated electromagnetic field, radiation, and many other factors may complicate the investigation of hydrodynamic instabilities [12].
Commonly, research on SBI mainly includes three meth
ods: theoretical derivation, experiment, and numerical simulation. As a fundamental research method, theoretical research can provide a clear understanding of physical processes. It is significant for practical engineering applications. In 1960, Rudinger _et al._ developed a theory that permits computing the response of bubbles to accelerations [13]. In order to describe the formation and evolution processes of vortex structure quantitatively, many scholars have developed circulation models [14; 15; 16; 17]. However, theoretical works provide limited information. Meanwhile, in the late stage of SBI evolution, the bubble deformation and flow morphology dominated by the developed Richtmyer-Meshkov instability (RMI) and Kelvin-Helmholtz instability (KHI) are difficult to be predicted accurately by theoretical research.
As the research method closest to engineering application, the experimental results are often regarded as standard results to verify the rationality and accuracy of theoretical and numerical works. To study the SBI poccess accurately, the scholars have made a series of improvements to experimental equipment or technique, including the generation techniques of different types of shock waves, interface formation methods, schlieren facilities, and image recognition techniques [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Among these, two of important and valuable works are performed by Ding _et al._. Based on the soap film technique, they formed kinds of initial interfaces with different curvatures through the wire-restriction method and captured the wave patterns and interface evolution with high-speed schlieren photography [28; 31]. Other works, such as evolutions of a spherical gas interface under reshock conditions [32], developments of a membrane-less SF\({}_{6}\) gas cylinder under reshock conditions [33], and interactions of a cylindrical converging shock wave with an initially perturbed gaseous interface [34], are also performed by many other scholars.
However, we know that the experimental studies mainly depend on the experimental platform. When studying some complex and demanding condition problems, it takes much work to build the experimental platform. In this situation, numerical simulation research becomes an alternative. Generally, there are three kinds of physical modeling methods (or models) for SBI numerical research, i.e., the macroscopic, mesoscopic, and microscopic modeling methods (or models). Most of the existing numerical researches on SBI are related to the macroscopic modeling methods (such as the Euler and Navier-Stokes (NS) models) based on the continuous hypothesis (or equilibrium and near-equilibrium hypothesis).[14; 15; 35; 16; 36; 17; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 31; 25] For example, Zou _et al._ presented the computational results on the evolution of the shock-accelerated heavy bubbles through the multi-fluid Eulerian equation [49]. There also exist a few SBI works based on the microscopic modeling method, such as the Molecular dynamics (MD) simulation [50]. It is capable of capturing much more flow behaviors but is restricted to smaller spatiotemporal scales because of its huge computing costs.
In the numerical research on SBI, three points need to be concerned. (i) Investigation of kinetic modeling that describes the non-continuity/non-equilibrium flows. Most of the current researches are based on macroscopic models. However, there exist abundant small structure (and fast-changing patterns) behaviors and effects such as the shock wave, boundary layer, material defects, etc. For cases with small structures, the mean free path of molecules cannot be ignored compared to the characteristic length, i.e., the non-continuity (discreteness) of the system is pronounced, which challenge the rationality and physical function of the macroscopic models based on the continuity hypothesis. For cases with fast-changing patterns, the system dose not have enough time to relax to the thermodynamic equilibrium state, i.e., the system may significantly deviate from the thermodynamic equilibrium state. Therefore, the rationality and physical function of the macroscopic models based on the hypothesis of thermodynamic equilibrium (or near thermodynamic equilibrium) will be challenged. (ii) Improvement of method that describes the evolution characteristics of bubbles and flows morphology. Most of the studies describe bubble characteristics and flows morphology from a macroscopic view. The mesoscopic characteristics such as the kinetic effects which help understand the kinetic process, are rarely to be studied. (iii) Further studies of effects of specific-heat ratio on SBI process. The specific-heat ratio is an essential index for studying the compressibility of the gas. Research from Igra _et al._ has shown that the differences in the specific-heat ratio of bubbles would cause various wave patterns and pressure distribution inside the bubbles during the interaction process [51]. Besides, many works on hydrodynamic instability have also demonstrated the importance of investigating the specific-heat ratio effect [52; 53; 54; 55]. Among these, Chen _et al._ investigated the specific-heat ratio effects on temperature gradient and the TNE characteristics of compressible Rayleigh-Taylor (RT) system [55].
For the above three points, in this work we resort to the recently proposed discrete Boltzmann method (DBM) 1[61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72]. Based on the coarse-grained modeling method of non-equilibrium statistical physics, the DBM aims to solve the following dilemma: (i) The traditional hydrodynamic modelings are based on the continuous hypothesis (or near-equilibrium hypothesis). They only concern the evolution of three conserved kinetic moments of the distribution function, i.e. the density, momentum and energy, so their physical functions are insufficient. (ii) The situation that the MD can be used is restricted to too small spatial-temporal scales. The physical requirement for the modeling is that except for the Hydrodynamic Non-Equilibriums (HNE), the most related Thermodynamic Non-Equilibriums (TNE) are
also needed to be captured. Theoretically, the Boltzmann equation is suitable for all-regime flows, including the continuum regime, slip regime, transition regime, and free molecule flow regime. Based on the Chapman-Enskog (CE) multiscale analysis [73], through retaining various orders of Kn number (or considering different orders of TNE effects), the Boltzmann equation can be reduced to the various orders of hydrodynamic equations. They can be used to describe the hydrodynamic behaviors, i.e., the conservations of mass, momentum and energy, in corresponding flow regimes. Because what the traditional hydrodynamic equations describe are only the conservation laws of mass, momentum and energy. Consequently, it should be pointed out that, the information lost in the traditional hydrodynamic equations increases sharply with increasing the Kn number. With increasing the Kn number, to ensure the describing capability not to decrease significantly, the more appropriate hydrodynamic equations should be the Extended Hydrodynamic Equations (EHEs) which include not only the evolution equations of conserved kinetic moments but also the most relevant non-conserved kinetic moments of distribution function. For convenience of description we refer the modeling method that derives EHEs from the fundamental kinetic equation to Kinetic Macroscopic Modeling (KMM) method. It is clear that, the complex process of CE expansion is necessary and the simulation is still based on the macroscopic equations in KMM. As a comparison, the DBM is a kind of Kinetic Direct Modeling (KDM) method. In DBM modeling, the CE analysis is only used to quickly determine which kinetic moments should keep values unchanged, the final EHEs are not needed, and the simulation is not based on the complicated EHEs. As the TNE degree of the flow to be described rises gradually, the complexity of the derivation process and difficulty of numerical simulation in the KMM method increase sharply. However, in the DBM method, to describe flows in a one-order more deeper depth of TNE, only two more related kinetic moments need to be added. Since without needing to derive and solve the EHEs, as the TNE degree deepens, the complexity of the DBM approach increases much slower than that of KMM method.
The core step in DBM modeling is to provide a feasible scheme for detecting, describing, presenting, and analyzing TNE effects and behaviors beyond traditional macroscopic modeling. Based on the non-equilibrium statistical physics, we can use the non-conservative moments of \((f-f^{eq})\) to describe how and how much the system deviates from the thermodynamic equilibrium state and to check corresponding effects due to deviating from the thermodynamic equilibrium. The non-conservative moments of \((f-f^{eq})\) open a phase space, and this space and its subspaces provide a very intuitive geometric correspondence for describing complex TNE system properties. The development of schemes for checking TNE state, extracting TNE information and describing corresponding TNE effects in DBM are seen in Table 1. Actually, this set of TNE describing methods has been applied in many kinds of complex fluid systems such as hydrodynamic instability system [74; 4; 75; 76; 77; 78; 79; 80], combustion and detonation systems [66; 81; 82; 83; 84; 85; 70], multiphase flow system [86; 87; 88; 89; 63; 90], plasma system [91], etc. Besides the scheme for detecting, describing, presenting, and analyzing TNE effects and behaviors, the DBM incorporates other methods for analyzing the complex physical field. One of them is the tracer particle method. The introduction of the tracer particle method makes the gradually blurred interface appear clearly [78; 92].
The rest of the paper is structured as follows. Section 2 presents the modeling method. The numerical simulations and results are then presented in Section 3, which includes two subsections. Section 4 concludes the paper. Additional complementary information is given in the Appendix.
## 2 Model construction
Based on the Bhatnagar-Gross-Krook (BGK) single-relaxation model, a two-fluid DBM with a flexible specific-heat ratio is presented in this part. Going from the original Boltzmann equation to a DBM requires four fundamental steps: (i) simplification and modification of the Boltzmann equation according to the research requirements; (ii) discretization of the particle velocity space under the condition that the reserved kinetic moments keep their values unchanged; (iii) checking the TNE state and extracting TNE information; (iv) selection/design of the boundary conditions.
### Simplification and modification of the Boltzmann equation
As we know, the collision term in the original Boltzmann equation contains high-dimensional distribution functions, so solving it directly requires excessive computational cost. The most common way to simplify the collision operator is to introduce a local equilibrium distribution function (\(f^{eq}\)) and write the complex collision operator in a linearized form, i.e., the original BGK collision operator \(-\frac{1}{\tau}(f-f^{eq})\), where \(\tau\) is the relaxation time [94]. The original BGK operator describes the situation where the system is always in a quasi-equilibrium state; namely, it characterizes only the case where the Kn number of the system is small enough and \(f\approx f^{eq}\). The BGK operator currently used for non-equilibrium flows in the field is a modified version incorporating the mean-field theory description [61; 62; 63]. Based on the above considerations, the simplified Boltzmann equation describing the SBI process is as follows:
\[\frac{\partial f}{\partial t}+\mathbf{v}\cdot\frac{\partial f}{\partial\mathbf{ r}}=-\frac{1}{\tau}(f-f^{eq}), \tag{1}\]
where the two-dimensional equilibrium distribution function is
\[f^{eq}=\frac{\rho}{2\pi RT}(\frac{1}{2\pi IRT})^{\frac{1}{2}}\exp[-\frac{( \mathbf{v}-\mathbf{u})^{2}}{2RT}-\frac{\eta^{2}}{2IRT}], \tag{2}\]
where \(\rho\), \(T\), \(\mathbf{v}\), \(\mathbf{u}\), \(I\), \(R\), and \(\eta\) are the mass density, temperature, particle velocity vector, flow velocity vector, the number of the extra degrees of freedom including molecular rotation and vibration inside the molecules, gas constant, and a free parameter that describes the energy of the extra degrees of freedom,
respectively. The specific-heat ratio can be adjusted through the parameter \(I\), i.e., \(\gamma=(D+I+2)/(D+I)\), where \(D=2\) represents the two-dimensional space.
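To make the role of \(I\) concrete, the short Python/NumPy sketch below (illustrative only, not part of the original work) evaluates the equilibrium distribution of Eq. (2) and the specific-heat ratio \(\gamma=(D+I+2)/(D+I)\); the sample values \(I=3\) and \(I=15\) roughly correspond to the two components used in the simulations later in the paper.

```python
import numpy as np

def gamma_ratio(I, D=2):
    """Specific-heat ratio gamma = (D + I + 2) / (D + I)."""
    return (D + I + 2.0) / (D + I)

def f_eq(v, eta, rho, u, T, I, R=1.0):
    """Continuous 2-D equilibrium distribution of Eq. (2).

    v   : array (..., 2) of particle velocities
    eta : free parameter carrying the energy of the I extra degrees of freedom
    """
    c2 = np.sum((np.asarray(v, float) - u) ** 2, axis=-1)   # peculiar speed squared
    pref = rho / (2.0 * np.pi * R * T) * (1.0 / (2.0 * np.pi * I * R * T)) ** 0.5
    return pref * np.exp(-c2 / (2.0 * R * T) - eta ** 2 / (2.0 * I * R * T))

print(gamma_ratio(3), gamma_ratio(15))   # ~1.4 (air-like) and ~1.118 (SF6-rich bubble)
```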
### Discretization of the particle velocity space and determination of \(f_{i}^{\sigma,eq}\)
The continuous-form Boltzmann equation should be discretized for simulation. Specifically, the continuous velocity space is replaced by a limited number of particle velocities, so that the values of the continuous kinetic moments can be obtained from summations. In this process, the reserved kinetic moments, which are used to characterize the system behaviors, are required to keep their values unchanged after discretizing the velocity space, i.e., \(\int f\Psi^{\prime}(\textbf{v})d\textbf{v}=\sum_{i}f_{i}\Psi^{\prime}(\textbf{v}_{i})\), where \(\Psi^{\prime}(\textbf{v})=[1,\textbf{v},\textbf{v}\textbf{v},\textbf{v}\cdot\textbf{v},\textbf{v}\textbf{v}\textbf{v},\textbf{v}\textbf{v}\cdot\textbf{v},\ldots]^{T}\) represents the kernels of the reserved kinetic moments. According to the CE analysis, \(f\) can be expressed in terms of \(f^{eq}\). Therefore, in the process of discretization, the reserved kinetic moments of \(f^{eq}\) should keep their values unchanged, i.e., \(\int f^{eq}\Psi^{\prime\prime}(\textbf{v})d\textbf{v}=\sum_{i}f_{i}^{eq}\Psi^{\prime\prime}(\textbf{v}_{i})\).
The discrete-form Boltzmann equation is as follows:
\[\frac{\partial f_{i}}{\partial t}+v_{i\alpha}\cdot\frac{\partial f_{i}}{ \partial r_{\alpha}}=-\frac{1}{\tau}(f_{i}-f_{i}^{eq}), \tag{3}\]
where \(i\) labels the discrete velocities and \(\alpha\) (\(\alpha=x\) or \(y\)) denotes the direction in Cartesian coordinates.
To simulate the interaction between two different fluids, a two-fluid DBM should be constructed. Based on the single-relaxation model, the discrete-form two-fluid Boltzmann equation can be written as [95]:
\[\frac{\partial f_{i}^{\sigma}}{\partial t}+v_{i\alpha}\cdot\frac{\partial f_{ i}^{\sigma}}{\partial r_{\alpha}}=-\frac{1}{\tau^{\sigma}}(f_{i}^{\sigma}-f_{i}^{ \sigma,eq}), \tag{4}\]
where \(\sigma\) represents the types of material particle and \(f_{i}^{\sigma,eq}=f_{i}^{\sigma,eq}(\rho^{\sigma},\textbf{u},T)\). In two-fluid DBM, the macroscopic quantities of the mixture and each component are defined as follows:
\[\rho^{\sigma}=\sum_{i}f_{i}^{\sigma}, \tag{5}\]
\[\textbf{u}^{\sigma}=\frac{\sum_{i}f_{i}^{\sigma}\textbf{v}_{i}}{\rho^{\sigma}}, \tag{6}\]
\[\rho=\sum_{\sigma}\rho^{\sigma}, \tag{7}\]
\[\textbf{u}=\frac{\sum_{\sigma}\rho^{\sigma}\textbf{u}^{\sigma}}{\rho}, \tag{8}\]
where \(\rho^{\sigma}\) and \(\textbf{u}^{\sigma}\) are the mass density and flow velocity of component \(\sigma\), respectively, and \(\rho\) and \(\textbf{u}\) represent the mass density and flow velocity of the mixture. Two kinds of temperature definitions exist in the two-fluid DBM, because the definition of temperature depends on which flow velocity is chosen as the reference. The first definition takes the velocity of the mixture as the reference, so that the temperatures of component \(\sigma\) and of the mixture read:
\[T^{\sigma*}=\frac{2E_{I}^{\sigma*}}{\rho^{\sigma}(D+I^{\sigma})}, \tag{9}\]
\[T=\frac{2E_{I}^{*}}{\sum_{\sigma}\rho^{\sigma}(D+I^{\sigma})}, \tag{10}\]
where \(E_{I}^{\sigma*}=\frac{1}{2}\sum_{i}f_{i}^{\sigma}((\textbf{v}_{i}-\textbf{u})^ {2}+\eta_{i}^{\sigma 2})\) is the internal energy of component \(\sigma\). Another definition is:
\[T^{\sigma}=\frac{2E_{I}^{\sigma}}{\rho^{\sigma}(D+I^{\sigma})}, \tag{11}\]
\[T=\frac{2(E_{I}+\Delta E_{I}^{*})}{\sum_{\sigma}\rho^{\sigma}(D+I^{\sigma})}, \tag{12}\]
with \(E_{I}^{\sigma}=\frac{1}{2}\sum_{i}f_{i}^{\sigma}((\textbf{v}_{i}-\textbf{u}^ {\sigma})^{2}+\eta_{i}^{\sigma 2})\). \(\Delta E_{I}^{*}\) is
\[\Delta E_{I}^{*}=E_{I}^{*}-E_{I}=\frac{\rho^{A}\rho^{B}(u_{\alpha}^{A}-u_{\alpha}^{B})^{2}}{2(\rho^{A}+\rho^{B})}. \tag{13}\]
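As an illustration of how Eqs. (5)-(10) above might be evaluated in practice, the Python sketch below computes the component and mixture fields from a set of discrete distribution functions, using the first temperature definition (peculiar velocities taken with respect to the mixture velocity); the data layout is an assumption made only for this example.

```python
import numpy as np

def macroscopic_fields(f, v, eta, I, D=2):
    """Component/mixture density, velocity and temperature, Eqs. (5)-(10).

    f : dict {component: array (N,)} of discrete distributions
    v : array (N, 2) of discrete velocities;  eta : array (N,)
    I : dict {component: extra degrees of freedom}
    """
    rho = {s: f[s].sum() for s in f}                                  # Eq. (5)
    u_s = {s: (f[s][:, None] * v).sum(axis=0) / rho[s] for s in f}    # Eq. (6)
    rho_mix = sum(rho.values())                                       # Eq. (7)
    u_mix = sum(rho[s] * u_s[s] for s in f) / rho_mix                 # Eq. (8)

    # internal energies measured with respect to the mixture velocity
    E_int = {s: 0.5 * np.sum(f[s] * (np.sum((v - u_mix) ** 2, axis=1) + eta ** 2))
             for s in f}
    T_s = {s: 2.0 * E_int[s] / (rho[s] * (D + I[s])) for s in f}      # Eq. (9)
    T_mix = 2.0 * sum(E_int.values()) / sum(rho[s] * (D + I[s]) for s in f)  # Eq. (10)
    return rho, u_s, rho_mix, u_mix, T_s, T_mix
```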
For solving Eq. (4), the value of \(f_{i}^{\sigma,eq}\) should be determined. It depends on the reserved kinetic moments, which characterize the main system behaviors. In DBM modeling, the CE multi-scale analysis is used to determine the reserved kinetic moments quickly. Specifically, when constructing a DBM in which only terms up to first order in the Kn number are retained (i.e., only the first-order TNE effects are retained), seven kinetic moments should be reserved, i.e., \(\textbf{M}_{0}\), \(\textbf{M}_{1}\), \(\textbf{M}_{2,0}\), \(\textbf{M}_{2}\), \(\textbf{M}_{3,1}\), \(\textbf{M}_{3}\), and \(\textbf{M}_{4,2}\). Two more kinetic moments (\(\textbf{M}_{4}\) and \(\textbf{M}_{5,3}\)) are needed when
\begin{table}
\begin{tabular}{c l} \hline Year & Scheme for investigating TNE effects and behaviors \\ \hline Before 2012 & Two classes of LBMs did not show a significant difference in physical function. \\ 2012 & Use the non-conservative moments of \((f-f^{eq})\) to check and describe TNE [65]. This is the starting point of the current DBM approach. \\ 2015 & Open a TNE phase space based on the non-conservative moments of \((f-f^{eq})\) and define a TNE strength using the distance from a state point to the origin. This is the starting point of the phase-space description method [66]. \\ 2018 & Extend the distance concepts in phase space to describe the difference/similarity of TNE states and kinetic processes [93]. \\ 2021 & Further extend the phase-space description methodology to any set of system characteristics [71]. \\ \hline \end{tabular}
\end{table}
Table 1: The development of schemes for checking the TNE state, extracting TNE information, and describing the corresponding TNE effects in DBM.
terms up to the second order in the Kn number are retained [62]. However, it should be noted that the function of the CE analysis in DBM modeling is only to determine the kinetic moments that need to be preserved; whether or not the hydrodynamic equations are derived does not affect the DBM simulation. Appendix B gives the two-fluid hydrodynamic equations for easier understanding. The expressions of the kinetic moments can be obtained by integrating the continuous-form \(f^{eq}\) over \(\mathbf{v}\) and \(\boldsymbol{\eta}\). The expressions of the kinetic moments are as follows:
\[M_{0}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}=\rho^{\sigma}, \tag{14}\]
\[M_{1,x}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{ix}=\rho^{\sigma}u_{x}, \tag{15}\]
\[M_{1,y}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{iy}=\rho^{\sigma}u_{y}, \tag{16}\]
\[M_{2,0}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}(v_{i\alpha}^{2}+\eta_{i}^{ \sigma 2})=\rho^{\sigma}[(D+I^{\sigma})R^{\sigma}T+u_{\alpha}^{2}], \tag{17}\]
\[M_{2,xy}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{ix}v_{iy}=\rho^{\sigma}u_{x}u_{y}, \tag{18}\]
\[M_{2,xx}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{ix}^{2}=\rho^{\sigma}(R^{\sigma}T+u_{x}^{2}), \tag{19}\]
\[M_{2,yy}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{iy}^{2}=\rho^{\sigma}(R^{\sigma}T+u_{y}^{2}), \tag{20}\]
\[\begin{split} M_{3,1,x}^{\sigma,eq}&=\sum_{i}f_{i }^{\sigma,eq}v_{ix}(v_{i\alpha}^{2}+\eta_{i}^{\sigma 2})\\ &=\rho^{\sigma}u_{x}[(D+I^{\sigma}+2)R^{\sigma}T+u_{\alpha}^{2}], \end{split} \tag{21}\]
\[\begin{split} M_{3,1,y}^{\sigma,eq}&=\sum_{i}f_{i }^{\sigma,eq}v_{iy}(v_{i\alpha}^{2}+\eta_{i}^{\sigma 2})\\ &=\rho^{\sigma}u_{y}[(D+I^{\sigma}+2)R^{\sigma}T+u_{\alpha}^{2}], \end{split} \tag{22}\]
\[M_{3,xxx}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{ix}^{3}=\rho^{\sigma}u_{x}(3R^{\sigma}T+u_{x}^{2}), \tag{23}\]
\[M_{3,xxy}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{ix}^{2}v_{iy}=\rho^{\sigma}u_{y}(R^{\sigma}T+u_{x}^{2}), \tag{24}\]
\[M_{3,xyy}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{ix}v_{iy}^{2}=\rho^{\sigma}u_{x}(R^{\sigma}T+u_{y}^{2}), \tag{25}\]
\[M_{3,yyy}^{\sigma,eq}=\sum_{i}f_{i}^{\sigma,eq}v_{iy}^{3}=\rho^{\sigma}u_{y}(3R^{\sigma}T+u_{y}^{2}), \tag{26}\]
\[\begin{split} M_{4,2,xx}^{\sigma,eq}&=\sum_{i}f_{i }^{\sigma,eq}v_{ix}^{2}(v_{i\alpha}^{2}+\eta_{i}^{\sigma 2})=\rho^{\sigma}\{(D+I^{ \sigma}+2)R^{\sigma 2}T^{2}\\ &+u_{x}^{2}(u_{x}^{2}+u_{y}^{2})+R^{\sigma}T[u_{x}^{2}(D+I^{ \sigma}+5)+u_{y}^{2}]\},\end{split} \tag{27}\]
\[\begin{split} M_{4,2,xy}^{\sigma,eq}&=\sum_{i}f_{i}^{\sigma,eq}v_{ix}v_{iy}(v_{i\alpha}^{2}+\eta_{i}^{\sigma 2})\\ &=\rho^{\sigma}u_{x}u_{y}[(D+I^{\sigma}+4)R^{\sigma}T+u_{x}^{2}+u_{y}^{2}],\end{split} \tag{28}\]
\[\begin{split} M_{4,2,yy}^{\sigma,eq}&=\sum_{i}f_{i}^{\sigma,eq}v_{iy}^{2}(v_{i\alpha}^{2}+\eta_{i}^{\sigma 2})=\rho^{\sigma}\{(D+I^{\sigma}+2)R^{\sigma 2}T^{2}\\ &+u_{y}^{2}(u_{x}^{2}+u_{y}^{2})+R^{\sigma}T[u_{y}^{2}(D+I^{\sigma}+5)+u_{x}^{2}]\},\end{split} \tag{29}\]
where the subscript "\(m,n\)" means that the \(m\)th-order tensor is contracted to an \(n\)th-order tensor.
The above kinetic moments can be written in matrix form, i.e.,
\[\mathbf{C}\cdot\mathbf{f}^{\sigma,eq}=\mathbf{\hat{f}}^{\sigma,eq}, \tag{30}\]
where \(\mathbf{C}\) is the matrix of discrete velocities and \(\mathbf{\hat{f}}^{\sigma,eq}\) represents the set of kinetic moments. A proper discrete velocity model is needed to determine the values of \(f_{i}^{\sigma,eq}\). The \(\mathbf{f}^{\sigma,eq}\) can be obtained by inverting the matrix, i.e., \(\mathbf{f}^{\sigma,eq}=\mathbf{C}^{-1}\cdot\mathbf{\hat{f}}^{\sigma,eq}\), where \(\mathbf{C}^{-1}\) is the inverse of \(\mathbf{C}\). The inverse of \(\mathbf{C}\) can be conveniently obtained with mathematical software such as Mathematica. Generally, to save computational cost, the total number of discrete velocities is chosen to be equal to the number \(N\) of reserved kinetic moments. Therefore, when constructing a first-order DBM, a discrete velocity model with 16 velocities is needed. A sketch of the D2V16 model is shown in Fig. 1. The specific values of D2V16 are given in the following equations:
\[(v_{ix},v_{iy})=\left\{\begin{array}{ll}c[\cos\frac{(i-1)\pi}{2},\sin\frac{(i-1)\pi}{2}],&i=1-4,\\ 2c[\cos\frac{(2i-1)\pi}{4},\sin\frac{(2i-1)\pi}{4}],&i=5-8,\\ 3c[\cos\frac{(i-9)\pi}{2},\sin\frac{(i-9)\pi}{2}],&i=9-12,\\ 4c[\cos\frac{(2i-9)\pi}{4},\sin\frac{(2i-9)\pi}{4}],&i=13-16,\end{array}\right.\]
where \(c\) is an adjustable parameter of the discrete velocity model. For \(\boldsymbol{\eta}\) in D2V16, \(\eta_{i}=\eta_{0}\) for \(i=1-4\) and \(\eta_{i}=0\) for \(i=5-16\).
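A minimal Python sketch of this construction is given below (illustrative, assumed parameter values): it builds the D2V16 velocity set written above, assembles the \(16\times 16\) matrix \(\mathbf{C}\) whose rows are the kernels of the sixteen reserved moments of Eqs. (14)-(29), and checks its invertibility so that \(\mathbf{f}^{\sigma,eq}=\mathbf{C}^{-1}\cdot\mathbf{\hat{f}}^{\sigma,eq}\) can be evaluated numerically.

```python
import numpy as np

def d2v16(c=1.0, eta0=10.0):
    """Discrete velocities and eta values of the D2V16 model (sketch)."""
    v = np.zeros((16, 2))
    for i in range(1, 17):
        if i <= 4:
            r, ang = c, (i - 1) * np.pi / 2
        elif i <= 8:
            r, ang = 2 * c, (2 * i - 1) * np.pi / 4
        elif i <= 12:
            r, ang = 3 * c, (i - 9) * np.pi / 2
        else:
            r, ang = 4 * c, (2 * i - 9) * np.pi / 4
        v[i - 1] = r * np.cos(ang), r * np.sin(ang)
    eta = np.where(np.arange(16) < 4, eta0, 0.0)
    return v, eta

def moment_matrix(v, eta):
    """Rows are the kernels of the 16 reserved kinetic moments, Eqs. (14)-(29)."""
    vx, vy = v[:, 0], v[:, 1]
    e2 = vx ** 2 + vy ** 2 + eta ** 2
    rows = [np.ones(16), vx, vy, e2,
            vx * vx, vx * vy, vy * vy,
            vx * e2, vy * e2,
            vx ** 3, vx ** 2 * vy, vx * vy ** 2, vy ** 3,
            vx ** 2 * e2, vx * vy * e2, vy ** 2 * e2]
    return np.array(rows)

v, eta = d2v16()
C = moment_matrix(v, eta)
print(np.linalg.cond(C))   # finite for generic c and eta0, i.e. C is invertible
# the discrete equilibrium then follows from f_eq = np.linalg.solve(C, f_hat_eq)
```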
### Checking the TNE state and extracting TNE information
Many physical quantities can characterize the degree of TNE in a fluid system, such as the relaxation time, the Kn number, the viscosity, the heat conduction, the gradients of macroscopic quantities, etc.
Figure 1: Sketch of D2V16 model. The numbers in the figure represent the index \(i\) in Eq. (3).
They all characterize the TNE strength and describe the TNE behaviors of a fluid system from their perspectives. Besides the above physical quantities describing the TNE behaviors, in DBM modeling, we can also use the non-conservative moments of \((f-f^{eq})\) to characterize the TNE state and extract TNE information from the fluid system. Fundamentally, four TNE quantities can be defined in a first-order DBM, i.e.,
\[\Delta_{2}^{\sigma*}=\sum_{i}(f_{i}^{\sigma}-f_{i}^{\sigma,eq})\mathbf{v}_{i}^ {*}\mathbf{v}_{i}^{*}, \tag{31}\]
\[\Delta_{3,1}^{\sigma*}=\frac{1}{2}\sum_{i}(f_{i}^{\sigma}-f_{i}^{\sigma,eq})( \mathbf{v}_{i}^{*}\cdot\mathbf{v}_{i}^{*}+\eta_{i}^{\sigma 2})\mathbf{v}_{i}^{*}, \tag{32}\]
\[\Delta_{3}^{\sigma*}=\sum_{i}(f_{i}^{\sigma}-f_{i}^{\sigma,eq})\mathbf{v}_{i}^{*}\mathbf{v}_{i}^{*}\mathbf{v}_{i}^{*}, \tag{33}\]
\[\Delta_{4,2}^{\sigma*}=\frac{1}{2}\sum_{i}(f_{i}^{\sigma}-f_{i}^{\sigma,eq})( \mathbf{v}_{i}^{*}\cdot\mathbf{v}_{i}^{*}+\eta_{i}^{\sigma 2})\mathbf{v}_{i}^{*} \mathbf{v}_{i}^{*}, \tag{34}\]
where \(\mathbf{v}_{i}^{*}=\mathbf{v}_{i}-\mathbf{u}\) represents the central velocity and \(\mathbf{u}\) is the macro flow velocity of the mixture.
Physically, the most fundamental TNE quantities, \(\Delta_{2}^{\sigma*}=\Delta_{2,\alpha\beta}^{\sigma*}\mathbf{e}_{\alpha}\mathbf{e}_{\beta}\) and \(\Delta_{3,1}^{\sigma*}=\Delta_{3,1,\alpha}^{\sigma*}\mathbf{e}_{\alpha}\), represent the viscous stress tensor (or non-organized momentum flux, NOMF) and the heat flux (or non-organized energy flux, NOEF), respectively. The \(\mathbf{e}_{\alpha}\) (\(\mathbf{e}_{\beta}\)) is the unit vector in the \(\alpha\) (\(\beta\)) direction. The last two TNE quantities contain more condensed information. Specifically, \(\Delta_{m,n}^{\sigma*}\) (\(\Delta_{m}^{\sigma*}\)) is the flux of \(\Delta_{m-1,n-1}^{\sigma*}\) (\(\Delta_{m-1}^{\sigma*}\)). For example, \(\Delta_{3}^{\sigma*}\) is the flux of \(\Delta_{2}^{\sigma*}\) and carries the flux information of \(\Delta_{2}^{\sigma*}\). The TNE quantities of the mixture are calculated by \(\Delta_{m}^{*}=\Delta_{m}^{A*}+\Delta_{m}^{B*}\) (\(\Delta_{m,n}^{*}=\Delta_{m,n}^{A*}+\Delta_{m,n}^{B*}\)). To describe the TNE strength of the fluid field, the global TNE strength of each component is defined as:
\[D_{m}^{\sigma*}=\int_{0}^{L_{x}}\int_{0}^{L_{y}}|\Delta_{m}^{\sigma*}|dxdy, \tag{35}\]
\[D_{m,n}^{\sigma*}=\int_{0}^{L_{x}}\int_{0}^{L_{y}}|\Delta_{m,n}^{\sigma*}|dxdy, \tag{36}\]
where \(|\Delta_{m,n}^{\sigma*}|\) (\(|\Delta_{m}^{\sigma*}|\)) is the strength of \(\Delta_{m,n}^{\sigma*}\) (\(\Delta_{m}^{\sigma*}\)). The TNE strength of the mixture is calculated by \(D_{m}^{*}=D_{m}^{A*}+D_{m}^{B*}\) (\(D_{m,n}^{*}=D_{m,n}^{A*}+D_{m,n}^{B*}\)). Other TNE quantities can be defined based on specific requirements. From the viewpoint of non-equilibrium statistical physics, all the independent components of the TNE characteristic quantities open a high-dimensional phase space, and this space and its subspaces provide a very intuitive image for characterizing the TNE state and understanding TNE behaviors [92; 80; 62].
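On a discrete grid, these quantities can be evaluated directly from \(f\) and \(f^{eq}\). The Python sketch below (array layouts assumed for illustration; the "strength" is taken here as the Frobenius norm of the corresponding tensor) computes the NOMF and NOEF fields of Eqs. (31)-(32) and the global strengths of Eqs. (35)-(36) by summation over grid cells.

```python
import numpy as np

def tne_moments(f, f_eq, v, eta, u):
    """Non-conservative moments of (f - f_eq), Eqs. (31)-(32), per grid cell.

    f, f_eq : (Nx, Ny, 16);  v : (16, 2);  eta : (16,);  u : (Nx, Ny, 2)
    """
    df = f - f_eq
    vstar = v[None, None, :, :] - u[:, :, None, :]           # peculiar velocities
    e2 = np.sum(vstar ** 2, axis=-1) + eta[None, None, :] ** 2
    delta2 = np.einsum('xyi,xyia,xyib->xyab', df, vstar, vstar)      # NOMF (Nx,Ny,2,2)
    delta31 = 0.5 * np.einsum('xyi,xyi,xyia->xya', df, e2, vstar)    # NOEF (Nx,Ny,2)
    return delta2, delta31

def global_strength(delta, dx, dy):
    """Global TNE strength, Eqs. (35)-(36): integral of |Delta| over the domain."""
    norm = np.sqrt(np.sum(delta ** 2, axis=tuple(range(2, delta.ndim))))
    return norm.sum() * dx * dy
```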
## 3 Numerical simulations and results
In this section, we first validate the DBM code by comparing the DBM results with experimental results. Then, the effects of specific-heat ratio on the bubble deformation and TNE behaviors are both investigated.
### Comparison with experimental results
In the following part, we use a first-order two-fluid DBM to simulate the interaction between a planar shock wave and a 2-D heavy-cylindrical bubble, and compare the DBM results with the experimental results from Ref. [31]. The computational configuration is shown in Fig. 2. In a flow field filled with Air, there is a static bubble composed of 26% Air and 74% SF\({}_{6}\). A shock with \(\mathrm{Ma}=1.2\) passes through the bubble from left to right. The initial conditions of the ambient gas are \(\rho_{0}=1.29\mathrm{kg/m^{3}}\), \(T_{0}=293\mathrm{K}\), \(p_{0}=101.3\mathrm{kPa}\). Ignoring the pressure difference between the interior gas and the ambient gas, the initial parameters of the bubble are \(\rho_{\mathrm{bubble}}=4.859\mathrm{kg/m^{3}}\), \(p_{\mathrm{bubble}}=101.3\mathrm{kPa}\), and \(T_{0}=293\mathrm{K}\). For the simulation, these physical quantities should be nondimensionalized. The nondimensionalization procedure follows Ref. [62] and is detailed in Appendix A. The dimensionless initial macroscopic quantities of the flow field are as follows:
\[\begin{cases}(\rho,T,u_{x},u_{y})_{\mathrm{bubble}}=(4.0347,1.0,0.0,0.0),\\ (\rho,T,u_{x},u_{y})_{1}=(1.3416,1.128,0.3616,0.0),\\ (\rho,T,u_{x},u_{y})_{0}=(1.0,1.0,0.0,0.0),\end{cases}\]
where the subscript "0" ("1") represents downstream (upstream) region.
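For orientation, a short Python sketch of these initial macroscopic fields on the grid quoted below (\(800\times 400\), \(\Delta x=\Delta y=1.2\times 10^{-4}\)) is given here; the bubble centre, bubble radius, and initial shock position are not specified in the text and are purely hypothetical placeholders.

```python
import numpy as np

Nx, Ny, dx = 800, 400, 1.2e-4
x = (np.arange(Nx) + 0.5) * dx
y = (np.arange(Ny) + 0.5) * dx
X, Y = np.meshgrid(x, y, indexing='ij')        # x varies along axis 0

# hypothetical geometry (NOT given in the text): bubble centre, radius, shock position
xc, yc, r_b = 0.35 * Nx * dx, 0.5 * Ny * dx, 0.1 * Ny * dx
x_shock = 0.15 * Nx * dx

rho = np.where(X < x_shock, 1.3416, 1.0)       # shocked (upstream) / quiescent air
T   = np.where(X < x_shock, 1.128, 1.0)
ux  = np.where(X < x_shock, 0.3616, 0.0)
uy  = np.zeros_like(rho)

inside = (X - xc) ** 2 + (Y - yc) ** 2 < r_b ** 2
rho[inside], T[inside], ux[inside] = 4.0347, 1.0, 0.0   # static heavy bubble
```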
In the two-fluid DBM code, the distribution function \(f^{\mathrm{Air}}\) is used to describe the ambient gas, i.e., Air. The \(f^{\mathrm{bubble}}\) characterizes the bubble, which is a mixture composed of Air and SF\({}_{6}\). The grid number is \(N_{x}\times N_{y}=800\times 400\), where \(N_{x}\) and \(N_{y}\) are the grid numbers in the \(x\) and \(y\) directions, respectively. This grid size has passed a mesh convergence test, and the results below show that it is sufficient for the research problem considered here. Other parameters used for the simulation are: \(c=1.0\), \(\eta_{\mathrm{Air}}=\eta_{\mathrm{bubble}}=10.0\), \(I_{\mathrm{Air}}=3\), \(I_{\mathrm{bubble}}=15\), \(\Delta x=\Delta y=1.2\times 10^{-4}\), and \(\Delta t=1\times 10^{-6}\). The viscosity effect is feeble compared to the shock compression effect, so it does not significantly affect the deformation of the bubble; therefore, in this part, the relaxation time \(\tau\) is set sufficiently small. The inflow (outflow) boundary condition is used at the left (right) boundary, and the periodic boundary is adopted in the \(y\) direction. The numerical methods used to solve Eq. (4) are flexible, depending on the required accuracy, computational efficiency, and numerical stability. The first-order forward difference scheme is used to calculate the temporal derivative, and
Figure 2: The computational configuration of the shock-bubble interaction.
the second-order non-oscillatory, non-free-parameter dissipative (NND) scheme is adopted to solve the spatial derivative [74; 95; 62].
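The time marching can be sketched as follows in Python; note that, for brevity, this stand-in uses a first-order upwind discretization and periodic wrapping via `np.roll` in both directions, whereas the paper employs the second-order NND scheme with inflow/outflow boundaries in \(x\), so the snippet is illustrative rather than a reproduction of the actual solver.

```python
import numpy as np

def step_bgk(f, f_eq, v, tau, dx, dy, dt):
    """One explicit step of Eq. (4): forward Euler in time, first-order upwind in space.

    f, f_eq : (Nx, Ny, 16);  v : (16, 2)
    """
    df = np.zeros_like(f)
    for i in range(f.shape[-1]):
        vx, vy = v[i]
        # one-sided differences, the side chosen by the sign of the discrete velocity
        ddx = (f[..., i] - np.roll(f[..., i], 1, axis=0)) / dx if vx >= 0 else \
              (np.roll(f[..., i], -1, axis=0) - f[..., i]) / dx
        ddy = (f[..., i] - np.roll(f[..., i], 1, axis=1)) / dy if vy >= 0 else \
              (np.roll(f[..., i], -1, axis=1) - f[..., i]) / dy
        df[..., i] = -vx * ddx - vy * ddy - (f[..., i] - f_eq[..., i]) / tau
    return f + dt * df
```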
Two quantitative comparisons between experimental results and DBM simulations are shown in the following part, including snapshots of schlieren images and evolutions of characteristic scales of the bubble. The first is shown in Fig. 3. In the figure, the odd rows are experimental results and the even rows are DBM simulation results. The experimental results are from Ref. [31]. The typical wave patterns and the bubble's main characteristic structures are marked in the figures. Numbers in the pictures represent the time in \(\mu s\). Schlieren images of the DBM results are calculated from the density gradient, i.e., \(|\nabla\rho|/|\nabla\rho|_{\max}\), with \(|\nabla\rho|=\sqrt{(\partial\rho/\partial x)^{2}+(\partial\rho/\partial y)^{2}}\). At \(t=0\mu s\), the incident shock wave impacts the upstream interface, generating a transmitted shock (TS) propagating downstream in the bubble and a reflected shock wave moving upstream in the ambient gas. The incident shock wave travels downstream continuously to form a diffracted shock (DS). As the TS propagates, it splits into three branches due to the considerable pressure perturbations caused by the gradual decay of the DS strength [45]. Afterward, as shown in the subfigure at \(t=130\mu s\), two high-pressure regions (ROH) are generated by the interaction of these branches. Subsequently, at about \(t=150.9\mu s\), the two ROHs meet, causing shock focusing. On the one hand, at \(t=171.0\mu s\), the shock focusing generates a downstream-propagating second transmitted shock (STS) and an upstream-moving rarefaction wave. On the other hand, it produces a high-pressure region inside the bubble, which later leads to a jet structure, as shown at \(t=291.7\mu s\). At \(t=432.5\mu s\), due to the deposited vorticity, a pair of counter-rotating vortices is produced at the pole region of the bubble. The further development of the vortex pair and the effect of viscosity decrease the amplitude of the jet. Finally, the jet structure disappears.
The second quantitative comparison concerns the interface structure described by the length and width of the bubble, as shown in Fig. 4. The experimental data are extracted from Fig. 12 of Ref. [31]. Quantitative agreement between the DBM simulation and the experimental results is seen. The profile of the bubble width shows mainly two stages. At early times (\(t<150\mu s\)), it decreases to a minimum value because of the shock compression effect. After the shock wave passes through the bubble (\(t>150\mu s\)), the developed vortex pair caused by the deposited vorticity gradually dominates the growth of the bubble width. Different from the width evolution, the temporal variation of the length experiences three stages. In the early stage (\(t<150\mu s\)), it decreases quickly due to the shock compression effect. Then, the jet structure emerges, which results in a growth in length (\(150\mu s<t<250\mu s\)); because the upstream interface moves faster than the downstream interface, the length of the bubble then decreases during \(250\mu s<t<500\mu s\). In the third stage (\(t>500\mu s\)), the vortex pair forms and leads to a continuous growth of the bubble length. Both the length and the width oscillate in the later stages due to the complex wave patterns.
The quantitative agreement between the DBM simulation and the experimental results indicates the following two facts: (i) the order of TNE considered in the current DBM is sufficient; (ii) the choice of discrete velocities, spatial-temporal steps, and simulation parameters such as the relaxation times is suitable for characterizing the deformation of the bubble, the wave patterns, and the main characteristics of the flow morphology.
### Effects of specific-heat ratio on SBI
The majority of existing works on SBI have not focused on specific-heat ratio effects. However, the following results show that the specific-heat ratio plays an important role in bubble deformation, vorticity, mixing degree, entropy production rate, and TNE characteristics. In this part, the simulation parameters are fine-tuned relative to those in Section 3.1 to highlight the influence of the specific-heat ratio. By adjusting the extra degrees of freedom \(I\), five cases with various specific-heat ratios of the bubble are simulated, i.e., \(\gamma=1.4,1.28,1.18,1.12\), and \(1.09\). Two kinds of analysis methods, including the tracer particle method and the two-fluid model, are used to characterize qualitatively the macroscopic behaviors such as the shape, deformation process, and mixing degree. The related TNE behaviors are also studied.
#### 3.2.1 Effects of specific-heat ratio on jet shape, deformation process, and average motion
We first observe the specific-heat ratio effect on the bubble shape visually, from the density contours and the tracer particle images. As shown in Fig. 5, pictures at three typical moments are plotted, i.e., \(t=0.07\), \(t=0.11\), and \(t=0.16\). The odd rows represent density contours and the even rows are tracer particle images. It can be seen that the specific-heat ratio significantly affects the length and shape of the jet structure: the smaller the specific-heat ratio, the stouter the jet structure. The reason is that the specific-heat ratio significantly changes the propagation speed of the shock waves and the wave patterns inside the bubble. The specific-heat ratio also influences the vortex structure in the early stage but has little effect on it in the later stage, where the difference in the vortex pairs between cases with different specific-heat ratios is almost invisible.
Then, the effects of the specific-heat ratio on the deformation process are analyzed. Shown in Fig. 6 are the evolutions of the characteristic scales used to describe the bubble size, i.e., the width and the length. It can be seen that the smaller the specific-heat ratio of the bubble, the smaller the bubble width and length. A fluid with a smaller specific-heat ratio is easier to compress; therefore, the characteristic scales of bubbles with a smaller specific-heat ratio tend to be compressed to smaller values. It can also be seen that the case with the largest specific-heat ratio reaches the minimum of the characteristic scales first, because the shock wave propagates faster in the case with the larger specific-heat ratio.
Through the tracer particle method, information on the average motion of the bubble is easy to obtain. Shown in Fig. 7 are the average position and average velocity of the bubble for different specific-heat ratios. It is found that, in the shock compression stage (\(t<0.03\)), the effect of the specific-heat ratio
Figure 3: Snapshots of schlieren images of the interaction between a shock wave and a heavy-cylindrical bubble. The odd rows represent experimental results from Ref. [31] with permission, and the even rows are DBM simulation results. The typical wave patterns and the bubble’s main characteristic structure are marked out in the figures. Numbers in the picture represent the time in \(\mu s\).
contributes little to the average motion of the bubble. However, after the shock wave passes through the bubble (\(t>0.03\)), a larger specific-heat ratio speeds up the average motion of the bubble. The reason is that bubbles with a smaller specific-heat ratio absorb more energy in being compressed, so their translational energy is smaller.
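A minimal sketch of the tracer bookkeeping used for such averages is given below (Python; the nearest-cell velocity sampling and the function name are assumptions made only for this illustration).

```python
import numpy as np

def advect_tracers(pos, ux, uy, dx, dy, dt):
    """Advect passive tracers with the local flow and return bubble-averaged
    position and velocity.  pos : (Np, 2);  ux, uy : (Nx, Ny) with x along axis 0."""
    ix = np.clip((pos[:, 0] / dx).astype(int), 0, ux.shape[0] - 1)
    iy = np.clip((pos[:, 1] / dy).astype(int), 0, ux.shape[1] - 1)
    vel = np.stack([ux[ix, iy], uy[ix, iy]], axis=1)   # nearest-cell sampling
    pos_new = pos + dt * vel
    return pos_new, pos_new.mean(axis=0), vel.mean(axis=0)
```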
#### 3.2.2 Effects of specific-heat ratio on vortex motion
Vorticity is one of the most important physical quantities in describing the vortex motion. In the 2-D case, the vorticity can be calculated by the following equation:
\[\mathbf{\omega}=(\frac{\partial u_{y}}{\partial x}-\frac{\partial u_{x}}{\partial y })\mathbf{e}_{z}. \tag{37}\]
The positive (negative) value of \(\omega\) represents the positive (negative) direction along the \(z\) axis. Vorticity contours at \(t=0.134\), with various specific-heat ratios, are shown in Fig. 8. Discernible differences in the vorticity contours between cases with various specific-heat ratios can be observed. The arrows in the vorticity images point out the obvious difference around the interface between the case \(\gamma=1.4\) and the case \(\gamma=1.09\). That is to say, the specific-heat ratio influences the rotational motion of the bubble.
The strength of the vorticity is described by the circulation \(\Gamma\), where \(\Gamma=\sum\omega\Delta x\Delta y\). \(\Gamma^{+}=\sum\omega|_{\omega>0}\Delta x\Delta y\) is the positive circulation and \(\Gamma^{-}=\sum\omega|_{\omega<0}\Delta x\Delta y\) is the negative circulation. Figure 9 shows the temporal evolution of the circulations in the SBI process. It can be seen that \(\Gamma\) is equal to zero at all times because \(\Gamma^{+}\) and \(\Gamma^{-}\) have the same magnitude but opposite signs. In the shock compression stage (\(t<0.03\)), the specific-heat ratio contributes little to the circulation of the bubble. After the shock wave sweeps through the bubble (\(t>0.03\)), the specific-heat ratio affects the value of the circulation obviously. The case with a smaller specific-heat ratio experiences a larger amplitude of variation, which is caused by its better compressibility.
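In practice, the vorticity of Eq. (37) and the signed circulations can be evaluated with central differences on the uniform grid, e.g. as in the Python sketch below (grid layout assumed, \(x\) along the first array axis).

```python
import numpy as np

def vorticity_and_circulation(ux, uy, dx, dy):
    """Vorticity (Eq. 37) via central differences, plus positive/negative circulation."""
    omega = np.gradient(uy, dx, axis=0) - np.gradient(ux, dy, axis=1)
    gamma_pos = omega[omega > 0].sum() * dx * dy
    gamma_neg = omega[omega < 0].sum() * dx * dy
    return omega, gamma_pos, gamma_neg    # gamma_pos + gamma_neg ~ 0 for this flow
```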
#### 3.2.3 Effects of specific-heat ratio on mixing degree
The mixing process is a fundamental aspect of SBI research. In the two-fluid DBM, the mixing degree in each fluid element can be defined as:
\[M=4\cdot\frac{M_{A}\cdot M_{B}}{(M_{A}+M_{B})^{2}}, \tag{38}\]
where \(M_{\sigma}\) represents the mass fraction of component \(\sigma\). The higher the value of \(M\), the higher the mixing amplitude. Images of the density contour (first row) and the mixing degree \(M\) (second row) at several typical moments are shown in Fig. 10. As can be seen, the mass mixing occurs in the region where the two media are in contact.
Integrating \(M\) over the whole flow field, the global mixing degree \(M_{g}\) is obtained. Shown in Fig. 11 is the temporal evolution of the global mixing degree \(M_{g}\). As can be seen, the temporal profiles of the global mixing degree show two stages: \(t<0.03\) and \(t>0.03\). When \(t<0.03\), there is almost no difference between cases with various specific-heat ratios, but for \(t>0.03\), the smaller the specific-heat ratio, the larger the mixing degree. There are mainly two indicators that determine the global mixing degree: the amplitude of mixing and the area of the mixing zone between the two fluids. In the stage \(t<0.03\), the shock compression dominates the mixing by enhancing the mixing amplitude and increasing the area of the mixing zone simultaneously; in this stage, the specific-heat ratio contributes little to the mixing. However, after the shock passes through the bubble, the diffusive effect at the interface and the evolution of the vortex core both significantly increase the area of the mixing zone. As can be seen in the figure, the smaller the specific-heat ratio of the bubble, the stronger the global mixing degree of the flow field. There are two reasons for this: (i) The heat conductivity changes simultaneously when the extra degree of freedom \(I\) is adjusted: the smaller the specific-heat ratio, the larger the heat conductivity (see Footnote 2). The stronger the heat conduction effect, the stronger the diffusive effect at the interface and the larger the area of the mixing zone; consequently, the smaller the specific-heat ratio, the higher the global mixing degree. (ii) When the bubble has a larger heat conductivity (i.e., a smaller specific-heat ratio), heat conduction promotes heat transfer from the ambient gas to the bubble, so the temperature of a bubble with larger heat conductivity is higher. According to the diffusivity formula in Ref. [95], i.e., \(D_{d}^{\sigma}=\tau^{\sigma}T/m^{\sigma}\), the diffusivity \(D_{d}^{\sigma}\) is larger when the bubble temperature is higher; in this situation, the degree of mass mixing is stronger. Due to the complex reflected shock waves, the global mixing degree shows a tendency of oscillatory growth.
Footnote 2: The heat conductivity formula is \(\kappa=C_{p}\tau p\), with \(C_{p}=\frac{D+I+2}{2}R\). The larger the extra degree of freedom \(I\) (the smaller the specific-heat ratio \(\gamma\)), the larger the heat conductivity \(\kappa\).
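A possible discrete evaluation of the local mixing degree and the global mixing degree \(M_{g}\) is sketched below in Python (field layouts assumed for illustration); with mass fractions that sum to one, the expression reduces to \(4M_{A}M_{B}\), which peaks at unity where the two components are equally mixed.

```python
import numpy as np

def mixing_degree(rho_A, rho_B, dx, dy):
    """Local mixing degree from the mass fractions, and its domain integral M_g."""
    rho = rho_A + rho_B
    MA, MB = rho_A / rho, rho_B / rho            # mass fractions, MA + MB = 1
    M = 4.0 * MA * MB / (MA + MB) ** 2           # peaks at 1 when MA = MB
    M_global = M.sum() * dx * dy
    return M, M_global
```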
#### 3.2.4 Effects of specific-heat ratio on TNE behaviors
The investigation of TNE behaviors is of great importance for understanding the kinetic processes in SBI. These TNE quantities describe, each from its own perspective, how the fluid system deviates from the thermodynamic equilibrium state. The effects of
Figure 4: The temporal variations of length and width of the bubble. The symbols represent DBM results and the lines are experimental. The definition of the length and the width of the bubble can be seen in the illustration. Experimental results are obtained from Fig. 12, in Ref. [31] with permission.
Figure 5: Density contours and particle tracer images at three different moments (i.e., \(t=0.07,t=0.11,\) and \(t=0.16\)) with various specific-heat ratios. The odd rows represent density contours, and the even rows are particle tracer images.
the specific-heat ratio on the global TNE strengths, i.e., \(D_{2}^{*}\), \(D_{3}^{*}\), \(D_{3,1}^{*}\), and \(D_{4,2}^{*}\), are shown in Fig. 12. It can be seen that the effects of the specific-heat ratio on the various TNE quantities are different. Theoretically, the influence of the specific-heat ratio on the non-equilibrium effects is reflected in two aspects: the transport coefficients and the macroscopic quantity gradients. For example, on the one hand, a larger specific-heat ratio reduces the heat conductivity, while on the other hand, it enhances the temperature gradient. Therefore, the effect of the specific-heat ratio on the NOEF is the comprehensive result of the competition between the two. As shown in Fig. 12(a), the smaller the specific-heat ratio, the stronger the \(D_{3,1}^{*}\) strength, which indicates that a smaller specific-heat ratio increases the \(D_{3,1}^{*}\) strength by raising the heat conductivity. For the \(D_{3}^{*}\) strength, as shown in Fig. 12(b), it is easy to see that the \(D_{3}^{*}\) strength decreases as the specific-heat ratio becomes smaller. The reason is that a smaller specific-heat ratio decreases the \(D_{3}^{*}\) strength by reducing the temperature gradient. The effects of the specific-heat ratio on \(D_{4,2}^{*}\) show two stages: in the shock compression stage (\(t<0.03\)), the smaller the specific-heat ratio, the larger the \(D_{4,2}^{*}\) strength, but the situation is reversed in the stage \(t>0.03\). Since the specific-heat ratio indirectly affects the \(D_{2}^{*}\) strength by changing the velocity field, the evolution of \(D_{2}^{*}\) shows obvious complexity.
#### 3.2.5 Effects of specific-heat ratio on entropy production rate and entropy production
The concepts of entropy are commonly used in complex flows [88; 80; 59; 60]. In DBM, there are two kinds of entropy production rates, i.e., \(\dot{S}_{\text{NOEF}}\) and \(\dot{S}_{\text{NOMF}}\)[88]. They are key factors in the field of compression science. The former is induced by the temperature gradient and the NOEF (\(\Delta_{3,1}^{*}\)); the latter is related to the velocity gradient and the NOMF (\(\Delta_{2}^{*}\)). The entropy production rates are defined by the following formulas [88]:
\[\dot{S}_{\text{NOEF}}=\int\Delta_{3,1}^{*}\cdot\nabla\frac{1}{T}d\mathbf{r}, \tag{39}\]
\[\dot{S}_{\text{NOMF}}=\int-\frac{1}{T}\Delta_{2}^{*}:\nabla\mathbf{u}\,d\mathbf{r}. \tag{40}\]
Integrating \(\dot{S}_{\text{NOEF}}\) and \(\dot{S}_{\text{NOMF}}\) over time \(t\), the entropy productions over this period of time are obtained, i.e., \(S_{\text{NOEF}}=\int\dot{S}_{\text{NOEF}}dt\) and \(S_{\text{NOMF}}=\int\dot{S}_{\text{NOMF}}dt\).
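Numerically, the two rates can be evaluated from the NOEF/NOMF fields and the macroscopic gradients on the uniform grid, for example as in the Python sketch below (array layouts assumed; gradients by central differences).

```python
import numpy as np

def entropy_production_rates(delta2, delta31, T, u, dx, dy):
    """Entropy production rates of Eqs. (39)-(40).

    delta2 : (Nx, Ny, 2, 2) NOMF;  delta31 : (Nx, Ny, 2) NOEF;
    T : (Nx, Ny);  u : (Nx, Ny, 2);  x along axis 0.
    """
    grad_invT = np.stack(np.gradient(1.0 / T, dx, dy), axis=-1)            # (Nx, Ny, 2)
    # grad_u[..., a, b] = d u_b / d x_a
    grad_u = np.stack([np.stack(np.gradient(u[..., b], dx, dy), axis=-1)
                       for b in range(2)], axis=-1)
    s_noef = np.einsum('xya,xya->xy', delta31, grad_invT).sum() * dx * dy        # Eq. (39)
    s_nomf = (-np.einsum('xyab,xyab->xy', delta2, grad_u) / T).sum() * dx * dy   # Eq. (40)
    return s_nomf, s_noef
```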
Plotted in Figs. 13(a) and 13(b) are the temporal evolutions of \(\dot{S}_{\text{NOMF}}\) and \(\dot{S}_{\text{NOEF}}\), respectively. The evolution of the entropy production rate is related to two aspects: (i) the propagation of the shock wave, and (ii) the deformation of the bubble. The former generates macroscopic quantity gradients, and the latter makes the contact interface wider, longer, and deformed. Depending on the location of the shock wavefront, there exist two critical moments in this SBI process: (i) at around \(t=0.03\), the shock wave just sweeps through the bubble, and (ii) at \(t=0.06\), the shock wave exits the flow field. Therefore, the temporal evolution of the entropy production rate shows three stages, i.e., \(t<0.03\), \(0.03<t<0.06\), and \(t>0.06\). In the stage \(t<0.03\), the shock compression stage, the shock compresses the bubble and generates large macroscopic quantity gradients, resulting in a quick increase of \(\dot{S}_{\text{NOMF}}\). At around \(t=0.03\), the
Figure 6: The temporal evolution of characteristic scales on SBI process, with different specific-heat ratios. Lines with different colors represent the cases with various specific-heat ratios.
Figure 7: The temporal evolution of average position and average bubble velocity, with different specific-heat ratios. Lines with different colors represent the cases with various specific-heat ratios.
shock wave has passed through the bubble, so the value of \(\dot{S}_{\rm NOMF}\) decreases. The value of \(\dot{S}_{\rm NOMF}\) continues to decrease due to the gradually widening contact interface caused by the diffusion effect. At around \(t=0.06\), the shock wave leaves the flow field, so the value of \(\dot{S}_{\rm NOMF}\) drops rapidly. In the third stage, i.e., \(t>0.06\), because of the diffusive effect, the general trend of \(\dot{S}_{\rm NOMF}\) is downward; however, it oscillates due to the influence of the various reflected shock waves. The specific-heat ratio indirectly changes the value of \(\dot{S}_{\rm NOMF}\) by changing the velocity gradient: the smaller the specific-heat ratio, the larger \(\dot{S}_{\rm NOMF}\).
A different behavior can be seen in Fig. 13(b), where the temporal evolution of \(\dot{S}_{\rm NOEF}\) is plotted. In the first stage (\(t<0.03\)), cases with different specific-heat ratios show various trends. In the stage where the bubble deformation is not yet large, i.e., \(0.03<t<0.06\), the values of \(\dot{S}_{\rm NOEF}\) fluctuate near the average value. In the third stage (\(t>0.06\)), the evolutions of \(\dot{S}_{\rm NOEF}\) in the cases with larger specific-heat ratios show an apparent growing tendency, whereas the values of \(\dot{S}_{\rm NOEF}\) in the cases with smaller specific-heat ratios remain almost unchanged. The influence of the specific-heat ratio on \(\dot{S}_{\rm NOEF}\), similar to its effect on the NOEF, is also determined by the heat conductivity and the temperature gradient. It can be seen that, except for the case \(\gamma=1.09\), the larger the specific-heat ratio, the higher the entropy production rate \(\dot{S}_{\rm NOEF}\). The temporal evolutions of \(\dot{S}_{\rm NOEF}\) in the cases \(\gamma=1.09\) and \(\gamma=1.12\) are very similar. Consequently, the specific-heat ratio increases \(\dot{S}_{\rm NOEF}\) mainly by raising the temperature gradient.
Further understanding can be gained from Fig. 14, where the entropy production over this period is plotted. For convenience, the sum of and the difference between \(S_{\rm NOMF}\) and \(S_{\rm NOEF}\) are also plotted in the figure. The variation range of \(S_{\rm NOEF}\) is larger than that of \(S_{\rm NOMF}\), which indicates that the influence of the specific-heat ratio on \(S_{\rm NOEF}\) is more significant than that on \(S_{\rm NOMF}\). The effects of the specific-heat ratio on the entropy production caused by NOMF and by NOEF are contrary. Specifically, the entropy production contributed by NOMF increases with reduced specific-heat ratio, while the entropy production caused by NOEF first decreases with decreasing specific-heat ratio and then approaches a saturation value. The \(S_{\rm NOEF}\) in the case \(\gamma=1.09\) is almost the same as that in the case \(\gamma=1.12\). When the specific-heat ratio \(\gamma\) is smaller than a threshold value \(\gamma_{c}\) (\(\gamma_{c}\approx 1.315\)), the entropy production induced by NOEF is more significant than that caused by NOMF; in the case \(\gamma>\gamma_{c}\), the situation reverses. The profile of the total entropy production (\(S_{\rm NOMF}+S_{\rm NOEF}\)) is similar to that of \(S_{\rm NOEF}\). The difference between \(S_{\rm NOMF}\) and \(S_{\rm NOEF}\) increases with decreasing specific-heat ratio.
## 4 Conclusions
Specific-heat ratio effects on the interaction between a planar shock wave and a 2-D heavy-cylindrical bubble are studied using a two-fluid DBM that has a flexible specific-heat ratio and intrinsically includes several schemes for analyzing the complex physical fields. Besides the HNE that NS easily describes, the DBM pays more attention to the related TNE that NS is not convenient to describe. First, both the snapshots of schlieren images and the evolutions of the characteristic scales from the DBM simulation are compared with those from the experiment. The quantitative agreement between them indicates the following two facts: (i) the order of TNE considered in the current DBM is sufficient; (ii) the choice of discrete velocities, spatial-temporal steps, and simulation parameters such as the relaxation times is suitable for the subsequent physical investigations. Then, five cases with various specific-heat ratios are simulated. Several analysis methods for
Figure 8: Vorticity contours at \(t=0.134\), with various specific-heat ratios. The arrows in the vorticity image point out the apparent difference between case \(\gamma=1.4\) and case \(\gamma=1.09\).
Figure 9: Temporal evolution of circulation on SBI process with various specific-heat ratios. Lines with different colors represent the cases with various specific-heat ratios.
complex physical fields, including the description scheme of TNE behaviors, the tracer particle method, and the two-fluid model, are used to characterize the effects of the specific-heat ratio on the bubble shape, deformation process, average motion, vortex motion, mixing degree of the fluid system, TNE strength, and entropy production. Specifically, bubbles with different specific-heat ratios display various jet structures: the smaller the specific-heat ratio, the stouter the jet structure. A fluid with a smaller specific-heat ratio is easier to compress, so the characteristic scales of bubbles with a smaller specific-heat ratio tend to be compressed to smaller values. Moreover, the smaller the specific-heat ratio, the slower the average motion of the bubble. In the shock compression stage, the specific-heat ratio contributes little to the vortex motion; in contrast, after the shock passes through the bubble, it significantly influences the vorticity around the interface and the corresponding amplitude of the circulation due to the development of KHI. The larger the difference in specific-heat ratio between the bubble and the ambient gas, the higher the degree of material
Figure 11: Temporal evolution of global mixing degree \(M_{\mathrm{g}}\) of SBI process with various specific-heat ratios. Lines with different colors represent the cases with various specific-heat ratios.
Figure 10: Density contours (first row) and mixing degree M (second row) at several typical moments.
mixing. The specific-heat ratio also affects the global TNE strength of the fluid system, but its effects on the various TNE quantities are different. These differences consistently show the complexity of TNE flows, which is still far from being fully understood.
The entropy production rates play an important role in compression science. It is found that the temporal evolutions of the entropy production rates \(\dot{S}_{\rm NOMF}\) and \(\dot{S}_{\rm NOEF}\) both show three stages, owing to the influence of the shock wave location. The smaller the specific-heat ratio, the larger the velocity gradient, which indirectly enhances the strength of \(\dot{S}_{\rm NOMF}\). The specific-heat ratio increases \(\dot{S}_{\rm NOEF}\) by raising the temperature gradient. The influence of the specific-heat ratio on \(S_{\rm NOEF}\) is more significant than that on \(S_{\rm NOMF}\). The effects of the specific-heat ratio on the entropy production caused by NOMF and by NOEF are contrary. Specifically, the entropy production contributed by NOMF increases with reduced specific-heat ratio, while the entropy production caused by NOEF first decreases with decreasing specific-heat ratio and then approaches a saturation value. When the specific-heat ratio \(\gamma\) is smaller than a threshold value \(\gamma_{c}\) (\(\gamma_{c}\approx 1.315\)), the entropy production induced by NOEF is more significant than that caused by NOMF; in the case \(\gamma>\gamma_{c}\), the situation reverses. The fundamental research in this paper helps to understand the interaction mechanism between shock waves and bubbles in ICF, supersonic combustors, underwater explosions, etc. The effects of viscosity and heat conduction on the interaction between shock waves and bubbles will be studied in future work.
## Acknowledgments
The authors thank Chuandong Lin, Feng Chen, Ge Zhang, Yiming Shan, Jie Chen, and Hanwei Li for helpful discussions on DBM. The authors also thank Juchun Ding for providing the experimental results. This work was supported by the National Natural Science Foundation of China (under Grant Nos. 12172061, 11875001, and 12102397), the Strategic Priority Research Program of Chinese Academy of Sciences (under Grant No. XDA25051000), the opening project of State Key Laboratory of Explosion Science and Technology (Beijing Institute of Technology) (under Grant No. KFJJ23-16M), the Foundation of Laboratory of Computational Physics, the Hebei Natural Science Foundation (Grant No. A2021409001), the Central Guidance on Local Science and Technology Development Fund of Hebei Province (Grant No. 226Z7601G), and the "Three, Three and Three Talent Project" of Hebei Province (Grant No. A202105005).
## Appendix A Process of dimensionless
The actual macroscopic quantities of this experiment in Ref. [31] are as follows:
\[\left\{\begin{array}{l}(\rho,u_{x},u_{y},p,T)^{\rm bubble}=(4.859{\rm kg} /{\rm m}^{3},0,0,101325{\rm Pa},293{\rm K}),\\ (\rho,u_{x},u_{y},p,T)^{\rm Air}=(1.2{\rm kg}/{\rm m}^{3},0,0,101325{\rm Pa},29 3{\rm K}).\end{array}\right.\]
Other physical quantities are: \(R_{\rm bubble}=71.9{\rm J}/({\rm kg}\cdot{\rm K})\), \(R_{\rm Air}=286.7{\rm J}/({\rm kg}\cdot{\rm K})\), \(\gamma_{\rm bubble}=1.117\), and \(\gamma_{\rm Air}=1.4\), \(\mu_{\rm Air}=1.824\times{\rm K}\).
Figure 13: (a) Temporal evolution of entropy production rate \(\dot{S}_{\rm NOMF}\). (b) Temporal evolution of entropy production rate \(\dot{S}_{\rm NOEF}\). Lines with different colors represent the cases with various specific-heat ratios. |
2304.01009 | Relativistic second-order spin hydrodynamics: an entropy-current
analysis | We present a new derivation of Israel-Stewart-like relativistic second-order
dissipative spin hydrodynamic equations using the entropy current approach. In
our analysis, we consider a general energy-momentum tensor with symmetric and
anti-symmetric parts. Moreover, the spin tensor, which is not separately
conserved, has a simple phenomenological form that is antisymmetric only in the
last two indices. Apart from the evolution equations for energy density, fluid
flow, and spin density, we also find relaxation-type dynamical equations for
various dissipative currents. The latter are consistently derived within the
second-order theory as gradient corrections to the energy-momentum and spin
tensors. We argue that this approach correctly reproduces the corresponding
Navier-Stokes limit of spin hydrodynamic equations. Throughout our analysis,
the spin chemical potential is considered a $\mathcal{O}(\partial)$ quantity in
the hydrodynamic gradient expansion and reduces to thermal vorticity in the
global equilibrium. New coefficients appearing in the generalized spin
hydrodynamic equations are undetermined and can only be evaluated within a
proper underlying microscopic theory of a given system. | Rajesh Biswas, Asaad Daher, Arpan Das, Wojciech Florkowski, Radoslaw Ryblewski | 2023-04-03T14:08:34Z | http://arxiv.org/abs/2304.01009v1 | # Relativistic second-order spin hydrodynamics: an entropy-current analysis
###### Abstract
We present a new derivation of Israel-Stewart-like relativistic second-order dissipative spin hydrodynamic equations using the entropy current approach. In our analysis, we consider a general energy-momentum tensor with symmetric and anti-symmetric parts. Moreover, the spin tensor, which is not separately conserved, has a simple phenomenological form that is antisymmetric only in the last two indices. Apart from the evolution equations for energy density, fluid flow, and spin density, we also find relaxation-type dynamical equations for various dissipative currents. The latter are consistently derived within the second-order theory as gradient corrections to the energy-momentum and spin tensors. We argue that this approach correctly reproduces the corresponding Navier-Stokes limit of spin hydrodynamic equations. Throughout our analysis, the spin chemical potential is considered a \(\mathcal{O}(\partial)\) quantity in the hydrodynamic gradient expansion and reduces to thermal vorticity in the global equilibrium. New coefficients appearing in the generalized spin hydrodynamic equations are undetermined and can only be evaluated within a proper underlying microscopic theory of a given system.
## I Introduction
In non-central relativistic heavy-ion collisions, the average spin polarization of hadrons (e.g., \(\Lambda\) hyperons) is observed along the global axis of rotation of the produced matter [1; 2; 3; 4; 5; 6; 7; 8; 9]. This result may suggest that constituents' spin in the hyperons is coordinated in a specific direction, implying that the quark-gluon plasma (QGP) contains non-trivial vortical structures [10; 11], which in turn might be caused by the significant amount of orbital angular momentum produced in such collisions [12; 13]. This phenomenon mimics the Barnett effect [14; 15] which displays the macroscopic effect of a quantum spin. Various theoretical approaches have been explored to model the vortical structure of a QCD plasma, e.g., hydrodynamic approach [16; 17; 18; 19; 20; 21; 22; 23; 24], relativistic kinetic theory [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43], effective Lagrangian approach [44; 45; 46; 47], quantum statistical density operators [48; 49; 50; 51; 52; 53], holography [54; 55], etc. Considering the triumphs of the relativistic dissipative hydrodynamic frameworks in relativistic heavy-ion phenomenology [56; 57; 58], several extensions of relativistic hydrodynamics with spin degrees of freedom for the vortical fluids attracted a lot of attention. The spin hydrodynamic frameworks have a crucial role to play in understanding the collective spin dynamics of relativistic strongly interacting plasma because they may link quantum mechanical features of matter with hydrodynamics.
To model the collective spin dynamics in relativistic spin hydrodynamic frameworks, in addition to the usual hydrodynamic quantities, e.g., the energy-momentum tensor (\(T^{\mu\nu}\)), one also introduces the 3-rank spin tensor (\(S^{\lambda\mu\nu}\)) [25]. The additional equations of motion resulting from the conservation of the system's total angular momentum provide information about the dynamical evolution of the spin tensor. One of the fundamental conceptual difficulties in formulating a theory of relativistic dissipative spin hydrodynamics is the problem of "pseudo-gauge transformations". Pseudo-gauge transformations imply that the forms of the energy-momentum tensor and spin tensor are not unique. In particular, for any energy-momentum tensor \(T^{\mu\nu}\) satisfying the conservation equation, i.e., \(\partial_{\mu}T^{\mu\nu}=0\), one can construct an equivalent energy-momentum tensor \(T^{\prime\,\mu\nu}\) by adding the divergence of an antisymmetric tensor, namely \(T^{\prime\,\mu\nu}=T^{\mu\nu}+\partial_{\lambda}\Phi^{\nu\mu\lambda}\)[59; 50; 33]. Note that if \(\Phi^{\nu\mu\lambda}\) is antisymmetric in the last two indices then \(T^{\prime\,\mu\nu}\) is also conserved. The same construction of the spin tensor can also be obtained without affecting the conservation of the total angular momentum. Different pseudo-gauge choices do not affect the conservation of total angular momentum or energy-momentum, nor do these transformations have any impact on the global charges (i.e., the global energy, linear momentum, and angular momentum). Various pseudo-gauge choices, e.g., the canonical, Belinfante-Rosenfeld (BR) [61; 62; 63], de Groot-van Leeuwen-van Weert (GLW) [64], Hilgevoord-Wouthuysen (HW) [65; 66] forms and their implications on the spin hydrodynamic framework are intensely debated in recent literature [17; 18; 20; 33; 52; 67; 68].
Without going into a specific microscopic theory, a model-independent dissipative spin hydrodynamic framework can be obtained using thermodynamic consideration, which implies that for a dissipative system, entropy must be
produced. This simple but rather powerful physical principle has been implemented very rigorously to obtain the Navier-Stokes-like theory of the dissipative spin hydrodynamic framework [16; 17; 20]. In this framework, the energy-momentum tensor consists of symmetric as well as antisymmetric components. Moreover, following the earlier works of Weyssenhoff and Raabe [69], one considers a simple _phenomenological_ form of the spin tensor, which is only antisymmetric in the last two indices, \(S^{\lambda\mu\nu}=u^{\lambda}S^{\mu\nu}\)[16; 17; 20]. Here \(u^{\mu}\) represents the time-like fluid flow four-vector, and \(S^{\mu\nu}\) represents the spin density, in analogy with the number density. A linear stability analysis for this _phenomenological_ first-order spin hydrodynamic framework has been performed in Refs. [70; 71]. These analyses show that, in the fluid rest frame, the first-order spin hydrodynamic equations are generally unstable under linear perturbations [70]. This is a rather interesting result because the instability manifests itself even in the fluid rest frame, and the source of this instability is the spin equation of state that relates the spin density tensor (\(S^{\mu\nu}\)) to the spin chemical potential (\(\omega^{\mu\nu}\)). Strictly speaking, it has been argued that only the spin density perturbation components \(\delta S^{0i}\) are responsible for the instabilities. Moreover, an independent analysis of this framework for a boost-invariant system indicates unstable behavior in the evolution of the temperature (\(T\)) and the spin chemical potential (\(\omega^{\mu\nu}\)) [72]. These instabilities can be generic, and the first-order (Navier-Stokes limit) spin-hydrodynamic framework can be highly pathological. Second-order dissipative hydrodynamic frameworks have been argued to be free of stability as well as causality issues [73; 74; 75; 76; 77; 78; 79; 80; 81]. We expect that such features will also remain intact for second-order spin hydrodynamic frameworks. This observation motivates us to go beyond the first-order theory.
In this paper, we construct a new second-order Israel-Stewart-like (IS-like) spin hydrodynamic framework using the entropy current analysis [82; 83; 84; 85]. Some efforts have already been made to derive the second-order spin hydrodynamic equations from an underlying microscopic theory [86; 87] using spin-kinetic equations. Such a kinetic-theory approach explicitly uses spin-dependent collision terms and is based on the moment method of the kinetic equation. In this article, we follow an alternative, model-independent way based on the entropy current analysis to derive the second-order spin hydrodynamic equations [82]. Various second-order hydrodynamic theories for 'spin-less' fluids, e.g., the Muller-Israel-Stewart (MIS) approach [88; 89; 82], the Denicol-Niemi-Molnar-Rischke (DNMR) approach [90; 91], the Baier-Romatschke-Son-Starinets-Stephanov (BRSSS) approach [92], the Chapman-Enskog approach [93; 94; 95], etc., have been routinely used to explain the heavy-ion collision data. Although different second-order hydrodynamic theories can have a similar structure, they are not exactly the same, which is reflected in the hydrodynamic evolution, particularly where the gradients are large [96]. Such differences crucially affect their application to explaining the heavy-ion collision data. These differences may also become evident for second-order spin hydrodynamic frameworks. The present calculation can be considered as a complementary method to the kinetic theory approach for obtaining spin hydrodynamic equations.
After this brief introduction, in Sec. II we discuss the Navier-Stokes theory of dissipative spin hydrodynamics using the entropy current analysis. Once the Navier-Stokes theory is defined, we move to the construction of the second-order Israel-Stewart theory of dissipative spin hydrodynamics in Sec. III. Finally, in Sec. IV we summarize our results and provide an outlook.
In this manuscript, the symmetric and antisymmetric parts of a tensor \(X^{\mu\nu}\) are denoted as \(X^{\mu\nu}_{(s)}\equiv X^{(\mu\nu)}\equiv(X^{\mu\nu}+X^{\nu\mu})/2\) and \(X^{\mu\nu}_{(a)}\equiv X^{[\mu\nu]}\equiv(X^{\mu\nu}-X^{\nu\mu})/2\), respectively. We use the metric tensor with signature \(g_{\mu\nu}=\text{diag}(+1,-1,-1,-1)\) and the totally antisymmetric Levi-Civita tensor with the sign convention \(\epsilon^{0123}=-\epsilon_{0123}=1\). The fluid four-velocity \(u^{\mu}\) satisfies the normalization condition \(u^{\mu}u_{\mu}=1\). The projector orthogonal to \(u^{\mu}\) is defined as \(\Delta^{\mu\nu}\equiv g^{\mu\nu}-u^{\mu}u^{\nu}\); by definition \(\Delta^{\mu\nu}u_{\mu}=0\). The projection of a four-vector \(X^{\mu}\) orthogonal to \(u^{\mu}\) is denoted as \(X^{\langle\mu\rangle}\equiv\Delta^{\mu\nu}X_{\nu}\). The traceless and symmetric projection orthogonal to \(u^{\mu}\) is denoted as \(X^{\langle\mu\nu\rangle}\equiv\Delta^{\mu\nu}_{\alpha\beta}X^{\alpha\beta}\equiv\frac{1}{2}\left(\Delta^{\mu}_{\phantom{\mu}\alpha}\Delta^{\nu}_{\phantom{\nu}\beta}+\Delta^{\mu}_{\phantom{\mu}\beta}\Delta^{\nu}_{\phantom{\nu}\alpha}-\frac{2}{3}\Delta^{\mu\nu}\Delta_{\alpha\beta}\right)X^{\alpha\beta}\). Similarly, \(X^{\langle[\mu\nu]\rangle}\equiv\Delta^{[\mu\nu]}_{[\alpha\beta]}X^{\alpha\beta}\equiv\frac{1}{2}\left(\Delta^{\mu}_{\phantom{\mu}\alpha}\Delta^{\nu}_{\phantom{\nu}\beta}-\Delta^{\mu}_{\phantom{\mu}\beta}\Delta^{\nu}_{\phantom{\nu}\alpha}\right)X^{\alpha\beta}\) denotes the antisymmetric projection orthogonal to \(u^{\mu}\). The partial derivative operator can be decomposed into two parts, one along the flow direction and the other orthogonal to it, i.e., \(\partial_{\mu}=u_{\mu}D+\nabla_{\mu}\). Here \(D\equiv u^{\mu}\partial_{\mu}\) denotes the comoving derivative, and \(\nabla_{\mu}\equiv\Delta^{\phantom{\mu}\alpha}_{\phantom{\mu}\mu}\partial_{\alpha}\) is orthogonal to \(u^{\mu}\), i.e., \(u_{\mu}\nabla^{\mu}=0\). The expansion rate is defined as \(\theta\equiv\partial_{\mu}u^{\mu}\).
## II First-order relativistic dissipative spin hydrodynamics
### Macroscopic conservation laws
The phenomenological derivation of hydrodynamics for a spin-polarized fluid is based on the conservation of the energy-momentum tensor \(T^{\mu\nu}\) and of the total angular momentum tensor \(J^{\lambda\mu\nu}\) [25; 26],
\[\partial_{\mu}T^{\mu\nu} =0, \tag{1}\] \[\partial_{\lambda}J^{\lambda\mu\nu}=2T^{\mu\nu}_{(a)}+\partial_{ \lambda}S^{\lambda\mu\nu} =0. \tag{2}\]
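Here the second equality in Eq. (2) uses energy-momentum conservation: writing the orbital part as \(L^{\lambda\mu\nu}=2\,x^{[\mu}T^{\lambda\nu]}\) (defined below), one has
\[\partial_{\lambda}L^{\lambda\mu\nu}=\partial_{\lambda}\left(x^{\mu}T^{\lambda\nu}-x^{\nu}T^{\lambda\mu}\right)=T^{\mu\nu}-T^{\nu\mu}+x^{\mu}\partial_{\lambda}T^{\lambda\nu}-x^{\nu}\partial_{\lambda}T^{\lambda\mu}=2T^{\mu\nu}_{(a)},\]
so that \(\partial_{\lambda}J^{\lambda\mu\nu}=2T^{\mu\nu}_{(a)}+\partial_{\lambda}S^{\lambda\mu\nu}\).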
The total angular momentum tensor, \(J^{\lambda\mu\nu}\!=\!L^{\lambda\mu\nu}+S^{\lambda\mu\nu}\), is the sum of the spin part, \(S^{\lambda\mu\nu}\), and the orbital part, \(L^{\lambda\mu\nu}=2\,x^{[\mu}T^{\lambda\nu]}\). In principle, \(T^{\mu\nu}\) and \(S^{\lambda\mu\nu}\) can be obtained from a more fundamental energy-momentum tensor operator and spin operator of the underlying quantum field theory. Using Noether's theorem in the quantum field theory of Dirac fermions, the microscopic _canonical_ energy-momentum tensor is in general asymmetric, and the corresponding spin tensor is totally antisymmetric [97]. We expect that the symmetry properties of the various microscopic currents will also be preserved at the macroscopic level. Due to pseudo-gauge transformations, \(T^{\mu\nu}\) and \(S^{\lambda\mu\nu}\) are not unique. Using this arbitrariness in defining the energy-momentum tensor and the spin tensor, for _phenomenological_ studies one often uses an asymmetric energy-momentum tensor and a spin tensor that is antisymmetric only in the last two indices [69]. The dissipative spin hydrodynamic framework with this _phenomenological_ form of the spin tensor has been discussed in Refs. [16; 17]. Moreover, it can be shown that the _phenomenological_ spin-hydrodynamic framework, with a spin tensor that is antisymmetric only in the last two indices, can be obtained from a properly defined _canonical_ spin-hydrodynamic framework with a totally antisymmetric spin tensor via a suitable pseudo-gauge transformation [20]. In this work, we first review the first-order dissipative _phenomenological_ spin-hydrodynamic framework by considering the following forms of the energy-momentum tensor and spin tensor,
\[T^{\mu\nu}=T^{\mu\nu}_{(0)}+T^{\mu\nu}_{(1s)}+T^{\mu\nu}_{(1a)}=T ^{\mu\nu}_{(0)}+2h^{(\mu}u^{\nu)}+\tau^{\mu\nu}+2q^{[\mu}u^{\nu]}+\phi^{\mu \nu}, \tag{3}\] \[S^{\lambda\mu\nu}=S^{\lambda\mu\nu}_{(0)}+S^{\lambda\mu\nu}_{(1 )}=u^{\lambda}S^{\mu\nu}+S^{\lambda\mu\nu}_{(1)}. \tag{4}\]
The leading order contribution \(T^{\mu\nu}_{(0)}\) in Eq. (3) has the form of the perfect fluid energy-momentum tensor,
\[T^{\mu\nu}_{(0)}=\varepsilon u^{\mu}u^{\nu}-p\Delta^{\mu\nu}, \tag{5}\]
where \(\varepsilon\) is the energy density and \(p\) is the equilibrium pressure. The most general expression of \(T^{\mu\nu}\) can contain terms that are symmetric as well as antisymmetric under the \(\mu\leftrightarrow\nu\) exchange. Therefore, we decompose the dissipative part of the energy-momentum tensor \(T^{\mu\nu}_{(1)}\) into a symmetric part \(T^{\mu\nu}_{(1s)}\equiv 2h^{(\mu}u^{\nu)}+\tau^{\mu\nu}\) and an antisymmetric part \(T^{\mu\nu}_{(1a)}=2q^{[\mu}u^{\nu]}+\phi^{\mu\nu}\). The vector \(h^{\mu}\) represents the heat flow, while \(\tau^{\mu\nu}\) is the symmetric part of the dissipative correction such that \(\tau^{\mu\nu}=\pi^{\mu\nu}+\Pi\,\Delta^{\mu\nu}\). The tensor \(\pi^{\mu\nu}\) (the traceless part of \(\tau^{\mu\nu}\)) is the shear stress tensor and \(\Pi\) is the bulk pressure. Analogously, \(q^{\mu}\) and \(\phi^{\mu\nu}\) are the antisymmetric dissipative corrections. These dissipative currents satisfy the following conditions: \(h^{\mu}u_{\mu}=0\), \(\tau^{\mu\nu}u_{\nu}=0\), \(q^{\mu}u_{\mu}=0\), \(\phi^{\mu\nu}u_{\nu}=0\), \(\tau^{\mu\nu}=\tau^{\nu\mu}\), and \(\phi^{\mu\nu}=-\phi^{\nu\mu}\). In the hydrodynamic gradient expansion, \(\varepsilon\), \(p\), and \(u^{\mu}\) scale as \(\mathcal{O}(\partial^{0})\) or \(\mathcal{O}(1)\), while \(h^{\mu}\), \(q^{\mu}\), \(\tau^{\mu\nu}\), and \(\phi^{\mu\nu}\) scale as \(\mathcal{O}(\partial)\). The tensor \(S^{\mu\nu}=-S^{\nu\mu}\) in Eq. (4) can be interpreted as the spin density, \(S^{\mu\nu}=u_{\lambda}S^{\lambda\mu\nu}\), in analogy to the number density [16; 17; 20]. Consequently, the spin density is a leading-order term in the hydrodynamic gradient expansion, i.e., \(S^{\mu\nu}\sim\mathcal{O}(1)\). The first-order dissipative correction \(S^{\lambda\mu\nu}_{(1)}\) satisfies \(u_{\lambda}S^{\lambda\mu\nu}_{(1)}=0\). Note that, in general, one could have \(u_{\mu}S^{\mu\alpha\beta}_{(1)}\neq 0\), but due to the matching condition through which \(S^{\mu\nu}\) is identified as the equilibrium spin density we impose \(u_{\mu}S^{\mu\alpha\beta}_{(1)}=0\). The same matching condition also identifies \(\varepsilon\) as the equilibrium energy density, i.e., \(T^{\mu\nu}_{(1)}u_{\mu}u_{\nu}=0\). Using Eqs. (3),
and (4) back into Eqs. (1) and (2) we obtain spin hydrodynamic equations,
\[D\varepsilon+(\varepsilon+p)\theta =-\partial\cdot h+h^{\nu}Du_{\nu}+\tau^{\mu\nu}\partial_{\mu}u_{\nu} -\partial\cdot q-q^{\nu}Du_{\nu}+\phi^{\mu\nu}\partial_{\mu}u_{\nu},\] \[=2\,h^{\mu}Du_{\mu}-\nabla\cdot(q+h)+\tau^{\mu\nu}\partial_{\mu}u _{\nu}+\phi^{\mu\nu}\partial_{\mu}u_{\nu}, \tag{6}\] \[(\varepsilon+p)Du^{\alpha}-\nabla^{\alpha}p =-(h\cdot\partial)u^{\alpha}-h^{\alpha}\theta-\Delta^{\alpha}_{ \ \nu}Dh^{\nu}-\Delta^{\alpha}_{\ \nu}\partial_{\mu}\tau^{\mu\nu}\] \[-(q\cdot\partial)u^{\alpha}+q^{\alpha}\theta+\Delta^{\alpha}_{\ \nu}Dq^{\nu}-\Delta^{\alpha}_{\ \nu}\partial_{\mu}\phi^{\mu\nu},\] \[=-(q+h)\cdot\nabla u^{\alpha}+(q^{\alpha}-h^{\alpha})\theta+ \Delta^{\alpha}_{\ \nu}Dq^{\nu}-\Delta^{\alpha}_{\ \nu}Dh^{\nu}\] \[-\Delta^{\alpha}_{\ \nu}\partial_{\mu}\tau^{\mu\nu}-\Delta^{\alpha}_{ \ \nu}\partial_{\mu}\phi^{\mu\nu},\] (7) \[\partial_{\lambda}(u^{\lambda}S^{\mu\nu})+\partial_{\lambda}S^{ \lambda\mu\nu}_{(1)} =-2(q^{\mu}u^{\nu}-q^{\nu}u^{\mu}+\phi^{\mu\nu}). \tag{8}\]
Due to the difficulty in specifying the flow velocity, frame choices are crucial in the setting of dissipative hydrodynamics\({}^{2}\). In standard hydrodynamics (spinless fluid), a natural hydrodynamic frame choice is the Landau frame, \(T^{\mu\nu}u_{\nu}=\varepsilon u^{\mu}\), with only a symmetric energy-momentum tensor; this implies \(h^{\mu}=0\). But in spin hydrodynamic frameworks, due to the presence of an antisymmetric component, one has in general two alternatives: (1) we can apply the Landau frame choice, but only to the symmetric part of \(T^{\mu\nu}\); this again implies \(h^{\mu}=0\). (2) Instead of applying the Landau frame condition only to the symmetric part of \(T^{\mu\nu}\), we can also include the antisymmetric part. In that case, we obtain \(h^{\mu}+q^{\mu}=0\). This immediately implies that both \(h^{\mu}\) and \(q^{\mu}\) can be nonvanishing while together satisfying the Landau condition. In this paper, we keep the discussion general without imposing any specific frame condition, unless otherwise stated.
Footnote 2: The energy-momentum tensor \(T^{\mu\nu}\) can typically have 16 independent components in four dimensions. In dissipative hydrodynamics, these 16 components correspond to \(\varepsilon,p,u^{\mu},h^{\mu},\pi^{\mu\nu},\Pi,q^{\mu},\) and \(\phi^{\mu\nu}\). Due to the equation of state, the variables \(\varepsilon\) and \(p\) together give only one unknown, while \(u^{\mu}\), \(h^{\mu}\) and \(q^{\mu}\) each have three independent degrees of freedom due to the conditions \(u^{\mu}u_{\mu}=1\), \(h^{\mu}u_{\mu}=0\) and \(q^{\mu}u_{\mu}=0\). Both \(\pi^{\mu\nu}\) and \(\phi^{\mu\nu}\) are orthogonal to \(u^{\mu}\). But \(\pi^{\mu\nu}\) is symmetric and traceless, hence it has only five independent degrees of freedom. The tensor \(\phi^{\mu\nu}\) is antisymmetric, hence it has three independent components. The bulk pressure \(\Pi\) is just a scalar representing one degree of freedom. This counting adds up to nineteen independent components rather than the sixteen of \(T^{\mu\nu}\). Therefore we have the freedom to eliminate three degrees of freedom. The so-called frame choice, or the definition of \(u^{\mu}\), reduces the number of independent components to sixteen.
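In compact form, the component counting of footnote 2 reads
\[\underbrace{1}_{\varepsilon,\,p}+\underbrace{3}_{u^{\mu}}+\underbrace{3}_{h^{\mu}}+\underbrace{3}_{q^{\mu}}+\underbrace{5}_{\pi^{\mu\nu}}+\underbrace{1}_{\Pi}+\underbrace{3}_{\phi^{\mu\nu}}=19,\]
so that a frame choice removing three of these brings the count down to the sixteen components of \(T^{\mu\nu}\).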
### Thermodynamic relations
In the presence of dynamical spin degrees of freedom, the laws of thermodynamics can be generalized to [16; 17; 20],
\[\varepsilon+p=Ts+\omega_{\alpha\beta}S^{\alpha\beta},\] \[d\varepsilon=Tds+\omega_{\alpha\beta}dS^{\alpha\beta},\] \[dp=sdT+S^{\alpha\beta}d\omega_{\alpha\beta}. \tag{9}\]
Here, \(T\) is the temperature, \(s\) is the entropy density, and \(\omega_{\alpha\beta}\) can be interpreted as the spin chemical potential conjugated to the spin density \(S^{\alpha\beta}\) such that \(S^{\alpha\beta}=\partial p/\partial\omega_{\alpha\beta}\) at a fixed temperature \(T\). The spin chemical potential is defined as a hydrodynamic variable in analogy with the chemical potential and distinguishes spin hydrodynamic frameworks from the standard hydrodynamic theories. However, there is a fundamental difference between these quantities. The chemical potential is only allowed in hydrodynamics if the corresponding current is conserved, e.g., baryon chemical potential in the presence of a conserved baryon current. But the presence of spin chemical potential does not necessarily imply the conservation of macroscopic spin current. In the language of the quantum statistical density operator framework [51], in local thermal equilibrium, the spin chemical potential can only be considered as a Lagrange multiplier [98]. However, in global equilibrium, in the presence of an antisymmetric component of the energy-momentum tensor, the spin chemical potential can be shown to be related to the thermal vorticity, \(\varpi_{\mu\nu}=-\frac{1}{2}(\partial_{\mu}\beta_{\nu}-\partial_{\nu}\beta_{ \mu})\)[98]. Here \(\beta^{\mu}=\beta u^{\mu}\) and \(\beta\) is the inverse temperature field.
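Note that the third relation in Eq. (9) is not independent: differentiating the first relation and subtracting the second gives
\[dp=d\!\left(Ts+\omega_{\alpha\beta}S^{\alpha\beta}-\varepsilon\right)=s\,dT+T\,ds+S^{\alpha\beta}d\omega_{\alpha\beta}+\omega_{\alpha\beta}\,dS^{\alpha\beta}-d\varepsilon=s\,dT+S^{\alpha\beta}d\omega_{\alpha\beta},\]
i.e., the generalized Gibbs-Duhem relation.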
Apart from the presence of the spin chemical potential, the hydrodynamic gradient ordering of the spin-related quantities appearing in Eq. (II.1) has been discussed earlier. Fixing the hydrodynamic gradient ordering of \(\omega^{\alpha\beta}\) is not straightforward. Since it is expected that in global equilibrium the spin chemical potential can be expressed in terms of the thermal vorticity \(\varpi_{\mu\nu}\), it is rather natural to consider \(\omega^{\mu\nu}\sim\mathcal{O}(\partial)\). However, such a conclusion is only applicable if the energy-momentum tensor is asymmetric [98]. This is a non-trivial aspect of the spin hydrodynamic framework as compared to standard hydrodynamic frameworks for _spinless_ fluids. In standard hydrodynamics, the derivative correction terms vanish at global equilibrium. However, not all gradient terms vanish in global equilibrium if we consider the most general flow configuration, and the same is true for spin hydrodynamics. Using the framework of the quantum statistical density operator, it can be shown that the most general flow configuration in global equilibrium must fulfill
the following conditions [99],
\[\partial_{\mu}\beta_{\nu}+\partial_{\nu}\beta_{\mu}=0,\quad\beta_{\nu}=b_{\nu}+ \varpi_{\nu\lambda}x^{\lambda},\quad\varpi_{\mu\nu}=-\frac{1}{2}(\partial_{\mu }\beta_{\nu}-\partial_{\nu}\beta_{\mu})=\text{constant}. \tag{10}\]
Here \(\beta^{\mu}=\beta u^{\mu}\), \(\beta=1/T\), and \(b_{\nu}\) is a constant four-vector. The rank-2 antisymmetric tensor \(\varpi^{\mu\nu}\) is the thermal vorticity, and one can clearly observe that it scales as \(\mathcal{O}(\partial)\) in the hydrodynamic gradient expansion. Thus, a generic global equilibrium allows for \(\mathcal{O}(\partial)\) terms in the flow configuration. Consequently, the gradient ordering of the spin chemical potential \(\omega^{\mu\nu}\) is a contentious issue in the setting of spin hydrodynamics and has serious ramifications for the formulation of the spin hydrodynamic framework. A natural question is how to connect \(S^{\mu\nu}\sim\mathcal{O}(1)\) and \(\omega_{\mu\nu}\sim\mathcal{O}(\partial)\) when their hydrodynamic gradient orders do not match. This was recently discussed in Ref. [72], where a new spin equation of state was constructed to match the gradient orders of \(S^{\mu\nu}\) and \(\omega^{\mu\nu}\) without any further assumptions. Nonetheless, one can also consider a different hydrodynamic gradient ordering of the spin chemical potential, particularly when the energy-momentum tensor is symmetric. A spin hydrodynamic framework in which the spin chemical potential is taken to be of leading order (\(\mathcal{O}(1)\)) in the gradient expansion was discussed in Ref. [19]. In this paper, we will only consider the spin hydrodynamic framework with \(\omega^{\mu\nu}\sim\mathcal{O}(\partial)\).
### Constitutive relations for dissipative currents in the Navier-Stokes limit
We observe that while there are in total twenty-two independent components of \(T^{\mu\nu}\) and \(S^{\mu\nu}\), Eqs. (6)-(8) constitute only ten equations for the ten independent variables \(T\), \(u^{\mu}\), and \(\omega^{\mu\nu}\). Note that the hydrodynamic ordering of the term \(\partial_{\lambda}S^{\lambda\mu\nu}_{(1)}\) in Eq. (8) is higher than that of the rest of the terms; therefore, for the first-order dissipative theory, we can neglect \(S^{\lambda\mu\nu}_{(1)}\). However, to close Eqs. (6)-(8), we still have to provide additional equations of motion for the various dissipative currents. This eventually reduces to finding constitutive relations satisfied by the tensors \(h^{\mu}\), \(q^{\mu}\), \(\Pi\), \(\pi^{\mu\nu}\), and \(\phi^{\mu\nu}\) in terms of \(T\), \(u^{\mu}\), and \(\omega^{\mu\nu}\). Such constitutive relations can be obtained using the condition that, for a dissipative system, the entropy is no longer conserved but is instead produced [82; 16]. The mathematical form of the entropy current within the framework of dissipative fluid dynamics is, a priori, not known. As a result, it is not trivial to obtain its evolution equation. However, one can proceed by first constructing the definition of the entropy current in the absence of derivative correction terms, i.e.,
\[s^{\mu}=\beta_{\nu}T^{\mu\nu}_{(0)}+\beta^{\mu}p-\beta^{\mu}\omega_{\alpha \beta}S^{\alpha\beta}. \tag{11}\]
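Explicitly, contracting Eq. (11) with \(u_{\mu}\) and using Eq. (9) gives the equilibrium entropy density,
\[u_{\mu}s^{\mu}=\beta_{\nu}u_{\mu}T^{\mu\nu}_{(0)}+\beta p-\beta\,\omega_{\alpha\beta}S^{\alpha\beta}=\beta\left(\varepsilon+p-\omega_{\alpha\beta}S^{\alpha\beta}\right)=\beta\,Ts=s.\]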
Note that such a definition of equilibrium entropy current correctly reproduces equilibrium thermodynamic relation (9) if we identify \(s^{\mu}\equiv su^{\mu}\), where \(s\) is the equilibrium entropy density. For an interacting fluid, we can generalize the definition of entropy current given above to incorporate dissipative terms. The non-equilibrium entropy current ansatz up to first-order in hydrodynamic gradient expansion, i.e., in the Navier-Stokes (NS) limit can be written as,
\[s^{\mu}_{\text{NS}} =\beta_{\nu}T^{\mu\nu}+\beta^{\mu}p-\beta\omega_{\alpha\beta}S^ {\mu\alpha\beta}\] \[=\beta_{\nu}T^{\mu\nu}_{(0)}+\beta_{\nu}T^{\mu\nu}_{(1)}+\beta^{ \mu}p-\beta^{\mu}\omega_{\alpha\beta}S^{\alpha\beta}-\beta\omega_{\alpha \beta}S^{\mu\alpha\beta}_{(1)}\] \[=s^{\mu}+\beta_{\nu}T^{\mu\nu}_{(1)}+\mathcal{O}(\partial^{2}), \tag{12}\]
where we make use of the equilibrium entropy current \(s^{\mu}\) defined in Eq. (11). By imposing the second law of thermodynamics, i.e., \(\partial_{\mu}s^{\mu}_{\text{NS}}\geq 0\), for Eq. (12), we can obtain the constitutive relations of the various dissipative currents [16; 20],
\[\Pi =\zeta\theta, \tag{13}\] \[h^{\mu} =-\kappa\left(Du^{\mu}-\beta\nabla^{\mu}T\right),\] (14) \[q^{\mu} =\lambda\left(Du^{\mu}+\beta\nabla^{\mu}T-4\omega^{\mu\nu}u_{ \nu}\right),\] (15) \[\pi^{\mu\nu} =2\eta\sigma^{\mu\nu},\] (16) \[\phi^{\mu\nu} =\gamma\left(\Omega^{\mu\nu}+2\beta\omega^{\langle\mu\rangle \langle\nu\rangle}\right)=\widetilde{\gamma}\left(2\nabla^{[\mu}u^{\nu]}+4 \omega^{\langle\mu\rangle\langle\nu\rangle}\right). \tag{17}\]
Here, all transport coefficients are positive, i.e., \(\kappa\geq 0\), \(\lambda\geq 0\), \(\eta\geq 0\), \(\zeta\geq 0\), and \(\gamma\geq 0\). We define \(\widetilde{\gamma}=\beta\gamma/2\), \(\sigma^{\mu\nu}=\nabla^{(\mu}u^{\nu)}-\frac{1}{3}\theta\Delta^{\mu\nu}=\Delta^{\mu\nu}_{\alpha\beta}\nabla^{\alpha}u^{\beta}\), \(\Omega^{\mu\nu}=\beta\nabla^{[\mu}u^{\nu]}=\Delta^{\mu}_{\alpha}\Delta^{\nu}_{\beta}\partial^{[\alpha}\beta^{\beta]}\), and \(\omega^{\langle\mu\rangle\langle\nu\rangle}=\Delta^{\mu\alpha}\Delta^{\nu\beta}\omega_{\alpha\beta}\). In these equations,
all the terms on the r.h.s. are of order \(\mathcal{O}(\partial)\) in hydrodynamic gradient expansion. Equations (14)-(17) show explicitly that at this level, the number of state variables \(T,u^{\mu},\omega^{\mu\nu}\) perfectly matches the number of dynamical equations (6)-(8). Note that if \(\lambda=0\), and \(\gamma=0\), then all the dissipative currents associated with the antisymmetric part of the energy-momentum tensor vanish. In this limit, if we consider the Landau frame choice, i.e., \(h^{\mu}=0\), then nonvanishing dissipative currents are \(\pi^{\mu\nu}\), and \(\Pi\). Moreover, if we set \(\omega^{\mu\nu}=0\), then the spin tensor also decouples from the theory. This is the NS limit giving rise to the standard hydrodynamics of _spinless_ fluid. Unfortunately, this first-order spin hydrodynamic framework can be shown to be pathological as it can give rise to instabilities under linear perturbations [70; 71]. This is not a desired feature for a hydrodynamic theory, particularly for phenomenological applications.
## III Towards second-order spin hydrodynamics
### Entropy current for the second-order theory
Historically, it is well known that even for a spinless fluid the relativistic NS theory is ill-defined, because it contains instabilities when perturbed around an arbitrary global equilibrium. The relativistic NS theory is unstable in the sense that small departures from equilibrium at one instant of time will diverge exponentially with time. The time scale of these instabilities can be short, which may affect the time evolution of the system [74; 100]. We emphasize that in the comoving frame, or in the rest frame, Landau's theory of dissipative hydrodynamics (for a spinless fluid) is stable. However, the generic instability manifests itself in a Lorentz-boosted frame. Subsequently, it has been argued that such instabilities are intrinsically related to the acausal nature of the NS theory [81]. Since the NS equations are not intrinsically hyperbolic, they allow for perturbations that propagate at an infinite speed. These fundamental problems stand in the way of any practical application of the relativistic NS theory. To incorporate dissipative effects consistently in fluid dynamics without violating causality, second-order theories have been constructed, e.g., the Israel-Stewart (IS) theory. The IS second-order theory contains new parameters compared to the NS theory. Kinetic theory calculations have been used to show that these new parameters are nonvanishing and that, if they are chosen appropriately, the dynamical equations governing the evolution of linear perturbations form a hyperbolic system of equations. Second-order dissipative hydrodynamic frameworks for spinless fluids have been argued to be free of stability and causality issues [73; 74; 75; 76; 77; 78; 79; 80; 81], which makes the IS theory more acceptable as a viable hydrodynamic theory. We expect that such features will also remain intact for second-order spin hydrodynamic frameworks\({}^{3}\). Similarly to the NS theory, here we also follow the entropy current analysis to derive the second-order spin hydrodynamic equations. In this approach we once again start with the entropy current for an arbitrary nonequilibrium state near equilibrium [82],
Footnote 3: In the present calculation we develop the second-order theory for spin-hydrodynamics. Its stability and causality properties require extensive investigation which we will address in future works.
\[s^{\mu}_{\rm IS} = \beta_{\nu}T^{\mu\nu}+\beta^{\mu}p-\beta\omega_{\alpha\beta}S^{ \mu\alpha\beta}+Q^{\mu}, \tag{18}\] \[= \beta_{\nu}T^{\mu\nu}_{(0)}+\beta^{\mu}p-\beta^{\mu}\omega_{ \alpha\beta}S^{\alpha\beta}+\beta_{\nu}T^{\mu\nu}_{(1)}-\beta\omega_{\alpha \beta}S^{\mu\alpha\beta}_{(1)}+Q^{\mu},\] \[= s^{\mu}_{\rm NS}-\beta\omega_{\alpha\beta}S^{\mu\alpha\beta}_{ (1)}+Q^{\mu}.\]
Here \(s^{\mu}_{\rm NS}\) contains the first-order corrections (\(\mathcal{O}(\partial)\)). The term \(\beta\omega_{\alpha\beta}S^{\mu\alpha\beta}_{(1)}\) is second-order (\(\mathcal{O}(\partial^{2})\)) in the hydrodynamic gradient expansion. Such a term does not appear in the NS limit, see Eq. (12). Novel information about new spin dissipative currents is embedded in \(S^{\lambda\mu\nu}_{(1)}\) (Eq. (4)). The term \(Q^{\mu}\) is a general four vector containing terms up to second order (\(\mathcal{O}(\partial^{2})\)). However, the form of \(Q^{\mu}\) is not completely arbitrary as it contains all second-order terms composed of \(h^{\mu}\), \(\pi^{\mu\nu}\), \(\Pi\), \(q^{\mu}\), \(\phi^{\mu\nu}\), and \(S^{\mu\alpha\beta}_{(1)}\). The form of \(Q^{\mu}\) is constrained by the condition that entropy is maximum in the equilibrium state. Contracting Eq. (18) with \(u^{\mu}\) we immediately obtain, \(s_{\rm IS}-s=u_{\mu}Q^{\mu}\), where \(s_{\rm IS}\equiv u_{\mu}s^{\mu}_{\rm IS}\). The condition that \(s_{\rm IS}\leq s\) implies \(u_{\mu}Q^{\mu}\leq 0\) (see Appendix A for details). Before we introduce the most general expression of \(Q^{\mu}\) we first express \(S^{\mu\alpha\beta}_{(1)}\) in terms of irreducible tensors. Recall that the first-order correction to the spin tensor satisfies \(u_{\mu}S^{\mu\alpha\beta}_{(1)}=0\) and it is antisymmetric in the last two indices. Therefore, the most general decomposition of \(S^{\mu\alpha\beta}_{(1)}\) in terms of irreducible tensors takes the form [101] (see Appendix B),
\[S^{\mu\alpha\beta}_{(1)}=2u^{[\alpha}\Delta^{\mu\beta]}\Phi+2u^{[\alpha}\tau^ {\mu\beta]}_{(s)}+2u^{[\alpha}\tau^{\mu\beta]}_{(a)}+\Theta^{\mu\alpha\beta}. \tag{19}\]
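As a quick consistency check (using the properties of the new currents listed just below), the right-hand side of Eq. (19) carries
\[\underbrace{1}_{\Phi}+\underbrace{5}_{\tau^{\mu\nu}_{(s)}}+\underbrace{3}_{\tau^{\mu\nu}_{(a)}}+\underbrace{9}_{\Theta^{\mu\alpha\beta}}=18\]
independent components, matching the \(4\times 6-6=18\) components of a rank-3 tensor that is antisymmetric in its last two indices and satisfies \(u_{\mu}S^{\mu\alpha\beta}_{(1)}=0\).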
The new dissipative currents related to spin \(\Phi,\tau^{\mu\nu}_{(s)},\tau^{\mu\nu}_{(a)}\), and \(\Theta^{\mu\alpha\beta}\) are of first-order in derivative expansion \(\mathcal{O}(\partial)\). The currents satisfy the following properties: \(u_{\mu}\tau^{\mu\beta}_{(s)}=u_{\mu}\tau^{\mu\beta}_{(a)}=u_{\mu}\Theta^{\mu \alpha\beta}=0;\tau^{\mu\beta}_{(s)}=\tau^{\beta\mu}_{(s)},\tau^{\mu\beta}_{( a)}=-\tau^{\beta\mu}_{(a)}\), \(\tau^{\mu}_{(s)\mu}=0\), \(\Theta^{\mu\alpha\beta}=-\Theta^{\mu\beta\alpha}\), \(u_{\mu}\Theta^{\mu\alpha\beta}=0\), \(u_{\alpha}\Theta^{\mu\alpha\beta}=0\), and \(u_{\beta}\Theta^{\mu\alpha\beta}=0\). Now we can express \(Q^{\mu}\) in terms of all possible second-order combinations of dissipative currents respecting the constraint \(u\cdot Q\leq 0\),
\[Q^{\mu}= u^{\mu}\left(a_{1}\Pi^{2}+a_{2}\pi^{\lambda\nu}\pi_{\lambda\nu}+a_{3}h^{\lambda}h_{\lambda}+a_{4}q^{\lambda}q_{\lambda}+a_{5}\phi^{\lambda\nu}\phi_{\lambda\nu}\right)\] \[+u^{\mu}\left(\tilde{a}_{1}\Phi^{2}+\tilde{a}_{2}\tau^{\lambda\nu}_{(s)}\tau_{(s)\lambda\nu}+\tilde{a}_{3}\tau^{\lambda\nu}_{(a)}\tau_{(a)\lambda\nu}+\tilde{a}_{4}\Theta^{\lambda\alpha\beta}\Theta_{\lambda\alpha\beta}\right)\] \[+\left(b_{1}\Pi h^{\mu}+b_{2}\pi^{\mu\nu}h_{\nu}+b_{3}\phi^{\mu\nu}h_{\nu}+b_{4}\Pi q^{\mu}+b_{5}\pi^{\mu\nu}q_{\nu}+b_{6}\phi^{\mu\nu}q_{\nu}\right)\] \[+\left(\tilde{b}_{1}\Phi h^{\mu}+\tilde{b}_{2}\tau^{\mu\nu}_{(s)}h_{\nu}+\tilde{b}_{3}\tau^{\mu\nu}_{(a)}h_{\nu}+\tilde{b}_{4}\Phi q^{\mu}+\tilde{b}_{5}\tau^{\mu\nu}_{(s)}q_{\nu}+\tilde{b}_{6}\tau^{\mu\nu}_{(a)}q_{\nu}\right)\] \[+\left(c_{1}\Theta^{\mu\alpha\beta}\phi_{\alpha\beta}+c_{2}\Theta^{\mu\alpha\beta}\tau_{(a)\alpha\beta}\right)\] \[+\left(c_{3}\Theta^{\alpha\beta\mu}\Delta_{\alpha\beta}\Pi+c_{4}\Theta^{\alpha\beta\mu}\pi_{\alpha\beta}+c_{5}\Theta^{\alpha\beta\mu}\Delta_{\alpha\beta}\Phi+c_{6}\Theta^{\alpha\beta\mu}\tau_{(s)\alpha\beta}\right)\] \[+\left(c_{7}\Theta^{\alpha\beta\mu}\phi_{\alpha\beta}+c_{8}\Theta^{\alpha\beta\mu}\tau_{(a)\alpha\beta}\right). \tag{20}\]
Here \(a_{i},\tilde{a}_{i},b_{i},\tilde{b}_{i}\), and \(c_{i}\) are dimensionful coefficients. Due to the condition \(u\cdot Q\leq 0\), the \(a_{i}\) (\(\tilde{a}_{i}\)) coefficients have definite signs, namely \(a_{1}\leq 0\), \(a_{2}\leq 0\), \(a_{3}\geq 0\), \(a_{4}\geq 0\), \(a_{5}\leq 0\), \(\tilde{a}_{1}\leq 0\), \(\tilde{a}_{2}\leq 0\), \(\tilde{a}_{3}\leq 0\), \(\tilde{a}_{4}\geq 0\), while there are no such sign constraints on \(b_{i}\), \(\tilde{b}_{i}\), or \(c_{i}\), although a kinetic theory approach may indicate the signs of these coefficients.
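The sign constraints on \(a_{i}\) and \(\tilde{a}_{i}\) can be read off by evaluating \(u_{\mu}Q^{\mu}\) in the local rest frame: every term in Eq. (20) that is not proportional to \(u^{\mu}\) is transverse to the flow and drops out upon contraction, while the surviving quadratic terms have purely spatial components, so that with the mostly-minus metric
\[u_{\mu}Q^{\mu}\Big{|}_{\rm LRF}=a_{1}\Pi^{2}+a_{2}\sum_{i,j}(\pi^{ij})^{2}-a_{3}\sum_{i}(h^{i})^{2}-a_{4}\sum_{i}(q^{i})^{2}+a_{5}\sum_{i,j}(\phi^{ij})^{2}+\tilde{a}_{1}\Phi^{2}+\tilde{a}_{2}\sum_{i,j}(\tau^{ij}_{(s)})^{2}+\tilde{a}_{3}\sum_{i,j}(\tau^{ij}_{(a)})^{2}-\tilde{a}_{4}\sum_{i,j,k}(\Theta^{ijk})^{2}\leq 0.\]
Demanding this inequality for arbitrary, mutually independent dissipative currents reproduces the signs quoted above.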
### Evolution equations
We argued that for the NS theory the dissipative currents \(h^{\mu}\), \(q^{\mu}\), \(\pi^{\mu\nu}\), \(\Pi\), and \(\phi^{\mu\nu}\) can be expressed in terms of fundamental hydrodynamic variables \(T,u^{\mu}\), and \(\omega^{\mu\nu}\). This conclusion is obtained using the condition \(\partial_{\mu}s^{\mu}_{\rm NS}\geq 0\). But for the second-order theory, various dissipative currents are considered independent variables. This is evident from the fact that we have constructed second-order terms in \(s^{\mu}_{\rm IS}\) in terms of these dissipative currents. Therefore, to close the hydrodynamic equations, we also need the evolution equation for these dissipative currents, which can be obtained using the condition that \(\partial_{\mu}s^{\mu}_{\rm IS}\geq 0\). Taking the divergence of \(s^{\mu}_{\rm IS}\) and using spin-hydrodynamic equations, it can be shown that (see Appendix C for details),
\[\partial_{\mu}s^{\mu}_{\rm IS}=T^{\mu\nu}_{(1a)}\left(\partial_{\mu}\beta_{\nu }+2\beta\omega_{\mu\nu}\right)+\partial_{\mu}\beta_{\nu}T^{\mu\nu}_{(1s)}- \partial_{\mu}\left(\beta\omega_{\alpha\beta}\right)S^{\mu\alpha\beta}_{(1)}+ \partial_{\mu}Q^{\mu}. \tag{21}\]
Notice that in global equilibrium \(S^{\mu\alpha\beta}_{(1)}=0\) and \(Q^{\mu}=0\). Moreover, \(\partial_{\mu}s^{\mu}_{\rm IS}=0\) implies the most general global equilibrium conditions (10): the spin chemical potential converges to the thermal vorticity, \(\omega_{\mu\nu}\to\frac{T}{2}\varpi_{\mu\nu}\), with \(\beta_{\mu}=u_{\mu}/T\) satisfying the Killing condition \(\partial_{(\mu}\beta_{\nu)}=0\). Using the explicit expressions for \(T^{\mu\nu}_{(1s)}\), \(T^{\mu\nu}_{(1a)}\) and \(S^{\mu\alpha\beta}_{(1)}\), Eq. (21) can be written as (see Appendix D for details),
\[\partial_{\mu}s^{\mu}_{\rm IS}= -\beta h^{\mu}\left(\beta\nabla_{\mu}T-Du_{\mu}\right)+\beta\pi^{\mu\nu}\sigma_{\mu\nu}+\beta\Pi\theta\] \[-\beta q^{\mu}\left(\beta\nabla_{\mu}T+Du_{\mu}-4\omega_{\mu\nu}u^{\nu}\right)+\phi^{\mu\nu}\left(\Omega_{\mu\nu}+2\beta\omega_{\langle\mu\rangle\langle\nu\rangle}\right)\] \[-2\Phi u^{\alpha}\nabla^{\beta}(\beta\omega_{\alpha\beta})-2\tau^{\mu\beta}_{(s)}u^{\alpha}\Delta^{\gamma\rho}_{\mu\beta}\nabla_{\gamma}(\beta\omega_{\alpha\rho})-2\tau^{\mu\beta}_{(a)}u^{\alpha}\Delta^{[\gamma\rho]}_{[\mu\beta]}\nabla_{\gamma}(\beta\omega_{\alpha\rho})\] \[-\Theta_{\mu\alpha\beta}\Delta^{\alpha\delta}\Delta^{\beta\rho}\Delta^{\mu\gamma}\nabla_{\gamma}(\beta\omega_{\delta\rho})+\partial_{\mu}Q^{\mu}. \tag{22}\]
As a last step, we need to investigate the term \(\partial_{\mu}Q^{\mu}\) which can be done using the expression of \(Q^{\mu}\) given in Eq. (20). A straightforward calculation gives,
\[\partial_{\mu}Q^{\mu} =h_{\alpha}\mathcal{A}^{\alpha}+q_{\alpha}\mathcal{B}^{\alpha}+\pi_{\alpha\beta}\mathcal{C}^{\alpha\beta}+\Pi\mathcal{D}+\phi_{\alpha\beta}\mathcal{E}^{\alpha\beta}\] \[+\Phi\mathcal{F}+\tau^{\alpha\beta}_{(s)}\mathcal{G}_{\alpha\beta}+\tau^{\alpha\beta}_{(a)}\mathcal{H}_{\alpha\beta}+\Theta_{\alpha\beta\gamma}\mathcal{I}^{\alpha\beta\gamma}. \tag{23}\]
In the above equations, scalars \(\mathcal{D}\) and \(\mathcal{F}\), vectors \(\mathcal{A}_{\beta}\) and \(\mathcal{B}_{\beta}\), and tensors \(\mathcal{C}_{\mu\nu}\), \(\mathcal{E}_{\mu\nu}\), \(\mathcal{G}_{\mu\nu}\), \(\mathcal{H}_{\mu\nu}\), and \(\mathcal{I}_{\mu\nu\delta}\) are defined in Appendix E. Note that the dissipative fluxes multiplying these quantities satisfy the following properties: \(h^{\mu}\) and \(q^{\mu}\) are orthogonal to \(u^{\mu}\), \(\pi^{\mu\nu}\) and \(\tau^{\mu\nu}_{(s)}\) are also orthogonal to \(u^{\mu}\) as well as symmetric and traceless, \(\phi^{\mu\nu}\) and \(\tau^{\mu\nu}_{(a)}\) are
orthogonal to \(u^{\mu}\) as well as antisymmetric, \(\Theta^{\mu\alpha\beta}\) is antisymmetric in the last two indices and orthogonal to the fluid flow in all the indices. Using these properties Eq. (23) can be expressed as,
\[\partial_{\mu}Q^{\mu} =h_{\alpha}\mathcal{A}^{(\alpha)}+q_{\alpha}\mathcal{B}^{(\alpha)} +\pi_{\alpha\beta}\mathcal{C}^{(\alpha\beta)}+\Pi\mathcal{D}+\phi_{\alpha\beta }\mathcal{E}^{([\alpha\beta])}\] \[+\Phi\mathcal{F}+\tau^{\alpha\beta}_{(s)}\mathcal{G}_{(\alpha \beta)}+\tau^{\alpha\beta}_{(a)}\mathcal{H}_{([\alpha\beta])}+\Theta_{\alpha \beta\gamma}\mathcal{I}^{(\alpha)\langle\beta\rangle\langle\gamma\rangle}. \tag{24}\]
The quantities \(\mathcal{A}^{(\alpha)}\), \(\mathcal{B}^{(\alpha)}\), \(\mathcal{C}^{(\alpha\beta)}\), \(\mathcal{E}^{([\alpha\beta])}\), \(\mathcal{G}^{(\alpha\beta)}\), \(\mathcal{H}^{([\alpha\beta])}\), and \(\mathcal{I}^{(\alpha)\langle\beta\rangle\langle\gamma\rangle}\) satisfy the following constraints,
\[\mathcal{A}^{(\alpha)}\equiv\Delta^{\alpha\beta}\mathcal{A}_{ \beta};\quad u_{\alpha}\mathcal{A}^{(\alpha)}=0, \tag{25}\] \[\mathcal{B}^{(\alpha)}\equiv\Delta^{\alpha\beta}\mathcal{B}_{ \beta};\quad u_{\alpha}\mathcal{B}^{(\alpha)}=0,\] (26) \[\mathcal{C}_{(\alpha\beta)}\equiv\Delta^{\mu\nu}_{\alpha\beta} \mathcal{C}_{\mu\nu}=\frac{1}{2}\left(\Delta^{\mu}{}_{\alpha}\Delta^{\nu}{}_{ \beta}+\Delta^{\mu}{}_{\beta}\Delta^{\nu}{}_{\alpha}-\frac{2}{3}\Delta_{ \alpha\beta}\Delta^{\mu\nu}\right)\mathcal{C}_{\mu\nu};\quad u^{\alpha} \mathcal{C}_{(\alpha\beta)}=0;\quad g^{\alpha\beta}\mathcal{C}_{(\alpha\beta)} =0,\] (27) \[\mathcal{E}_{([\alpha\beta])}\equiv\Delta^{[\mu\nu]}_{[\alpha \beta]}\mathcal{E}_{\mu\nu}\equiv\frac{1}{2}\left(\Delta^{\mu}{}_{\alpha} \Delta^{\nu}{}_{\beta}-\Delta^{\nu}{}_{\alpha}\Delta^{\mu}{}_{\beta}\right) \mathcal{E}_{\mu\nu};\quad u_{\alpha}\mathcal{E}^{([\alpha\beta])}=0,\] (28) \[\mathcal{G}_{(\alpha\beta)}\equiv\Delta^{\mu\nu}_{\alpha\beta} \mathcal{G}_{\mu\nu};\quad u_{\alpha}\mathcal{G}^{(\alpha\beta)}=0,\quad g_{ \alpha\beta}\mathcal{G}^{(\alpha\beta)}=0,\] (29) \[\mathcal{H}_{([\alpha\beta])}\equiv\Delta^{[\mu\nu]}_{[\alpha \beta]}\mathcal{H}_{\mu\nu}\equiv\frac{1}{2}\left(\Delta^{\mu}{}_{\alpha} \Delta^{\nu}{}_{\beta}-\Delta^{\nu}{}_{\alpha}\Delta^{\mu}{}_{\beta}\right) \mathcal{H}_{\mu\nu};\quad u_{\alpha}\mathcal{H}^{([\alpha\beta])}=0,\] (30) \[\mathcal{I}^{(\alpha)\langle\beta\rangle\langle\gamma\rangle} \equiv\Delta^{\alpha\mu}\Delta^{\beta\nu}\Delta^{\gamma\delta}\mathcal{I}_{\mu \nu\delta};\quad u_{\alpha}\mathcal{I}^{(\alpha)\langle\beta\rangle\langle \gamma\rangle}=0;\quad u_{\beta}\mathcal{I}^{(\alpha)\langle\beta\rangle \langle\gamma\rangle}=0;\quad u_{\gamma}\mathcal{I}^{(\alpha)\langle\beta \rangle\langle\gamma\rangle}=0. \tag{31}\]
Using Eq. (24) in Eq. (22) the full form of the divergence of entropy current in the second-order theory can be written as
\[\partial_{\mu}s^{\mu}_{\rm IS}= -\beta h^{\mu}\left(\beta\nabla_{\mu}T-Du_{\mu}-T\mathcal{A}_{( \mu)}\right)+\beta\pi^{\mu\nu}\left(\sigma_{\mu\nu}+T\mathcal{C}_{(\mu\nu)} \right)+\beta\Pi\left(\theta+T\mathcal{D}\right)\] \[-\beta q^{\mu}\left(\beta\nabla_{\mu}T+Du_{\mu}-4\omega_{\mu\nu}u^ {\nu}-T\mathcal{B}_{(\mu)}\right)+\phi^{\mu\nu}\left(\Omega_{\mu\nu}+2\beta \omega_{\langle\mu\rangle\langle\nu\rangle}+\mathcal{E}_{([\mu\nu])}\right)\] \[+\Phi\left[-2u^{\alpha}\nabla^{\beta}(\beta\omega_{\alpha\beta})+ \mathcal{F}\right]+\tau^{\mu\beta}_{(s)}\left[-2u^{\alpha}\Delta^{\gamma\rho}_ {\mu\beta}\nabla_{\gamma}(\beta\omega_{\alpha\rho})+\mathcal{G}_{(\mu\beta)}\right]\] \[+\tau^{\mu\beta}_{(a)}\left[-2u^{\alpha}\Delta^{[\gamma\rho]}_{[ \mu\beta]}\nabla_{\gamma}(\beta\omega_{\alpha\rho})+\mathcal{H}_{([\mu\beta] )}\right]+\Theta^{\mu\alpha\beta}\left[-\Delta^{\delta}{}_{\alpha}\Delta^{\rho }{}_{\beta}\Delta^{\gamma}{}_{\mu}\nabla_{\gamma}(\beta\omega_{\delta\rho})+ \mathcal{I}_{(\mu)\langle\alpha\rangle\langle\beta\rangle}\right] \tag{32}\]
Similarly to the NS theory the condition \(\partial_{\mu}s^{\mu}_{\rm IS}\geq 0\) gives us the following relations involving various dissipative currents appearing in the energy-momentum tensor,
\[\Pi=\zeta\big{(}\theta+T\mathcal{D}\big{)} \tag{33}\] \[h^{\mu}=-\kappa\left(Du^{\mu}-\beta\nabla^{\mu}T+T\mathcal{A}^{( \mu)}\right)\] (34) \[q^{\mu}=\lambda\left(Du^{\mu}+\beta\nabla^{\mu}T-4\omega^{\mu\nu}u _{\nu}-T\mathcal{B}^{(\mu)}\right)\] (35) \[\pi^{\mu\nu}=2\eta\left(\sigma^{\mu\nu}+T\mathcal{C}^{(\mu\nu)}\right)\] (36) \[\phi^{\mu\nu}=\gamma\left(\Omega^{\mu\nu}+2\beta\omega^{\langle\mu \rangle\langle\nu\rangle}+\mathcal{E}^{([\mu\nu])}\right). \tag{37}\]
Analogous relations for various dissipative currents appearing in the spin tensor can be expressed as,
\[\Phi=\chi_{1}\left(-2u^{\alpha}\nabla^{\beta}(\beta\omega_{\alpha \beta})+\mathcal{F}\right) \tag{38}\] \[\tau^{\mu\beta}_{(s)}=\chi_{2}\left[-u^{\alpha}\left(\Delta^{\gamma \mu}\Delta^{\rho\beta}+\Delta^{\gamma\beta}\Delta^{\rho\mu}-\frac{2}{3}\Delta^{ \gamma\rho}\Delta^{\mu\beta}\right)\nabla_{\gamma}(\beta\omega_{\alpha\rho})+ \mathcal{G}^{(\mu\beta)}\right]\] (39) \[\tau^{\mu\beta}_{(a)}=\chi_{3}\left[-u^{\alpha}(\Delta^{\gamma\mu} \Delta^{\rho\beta}-\Delta^{\gamma\beta}\Delta^{\rho\mu})\nabla_{\gamma}(\beta \omega_{\alpha\rho})+\mathcal{H}^{([\mu\beta])}\right]\] (40) \[\Theta^{\mu\alpha\beta}=-\chi_{4}\left[-\Delta^{\delta\alpha}\Delta^ {\rho\beta}\Delta^{\gamma\mu}\nabla_{\gamma}(\beta\omega_{\delta\rho})+ \mathcal{I}^{(\mu)\langle\alpha\rangle\langle\beta\rangle}\right]. \tag{41}\]
Here \(\chi_{1},\chi_{2},\chi_{3}\), and \(\chi_{4}\) are new spin-transport coefficients\({}^{4}\). Using Eqs. (33)-(41) in Eq. (32), we obtain the following
condition,
\[-\frac{\beta}{\kappa}h^{\mu}h_{\mu}-\frac{\beta}{\lambda}q^{\mu}q_{ \mu}+\frac{\beta}{2\eta}\pi^{\mu\nu}\pi_{\mu\nu}+\frac{\beta}{\zeta}\Pi^{2}+ \frac{1}{\gamma}\phi^{\mu\nu}\phi_{\mu\nu}\] \[+\frac{1}{\chi_{1}}\Phi^{2}+\frac{1}{\chi_{2}}\tau^{\mu\nu}_{(s)} \tau_{\mu\nu(s)}+\frac{1}{\chi_{3}}\tau^{\mu\nu}_{(a)}\tau_{\mu\nu(a)}-\frac{1 }{\chi_{4}}\Theta^{\mu\alpha\beta}\Theta_{\mu\alpha\beta}\geq 0. \tag{42}\]
This immediately implies that \(\kappa\geq 0\), \(\lambda\geq 0\), \(\eta\geq 0\), \(\zeta\geq 0\), \(\gamma\geq 0\), \(\chi_{1}\geq 0\), \(\chi_{2}\geq 0\), \(\chi_{3}\geq 0\), and \(\chi_{4}\geq 0\). We emphasize that the presence of \(\mathcal{D}\), \(\mathcal{A}^{(\mu)}\), \(\mathcal{B}^{(\mu)}\), \(\mathcal{C}^{(\mu\nu)}\) and \(\mathcal{E}^{([\mu\nu])}\) in Eqs. (33)-(37) shows that for the second-order theory the constitutive relations of \(\Pi\), \(h^{\mu}\), \(q^{\mu}\), \(\pi^{\mu\nu}\) and \(\phi^{\mu\nu}\) are no longer given simply by Eqs. (14)-(17) in terms of the basic hydrodynamic variables \(T\), \(u^{\mu}\) and \(\omega^{\mu\nu}\). Therefore, in the second-order theory \(\Pi\), \(h^{\mu}\), \(q^{\mu}\), \(\pi^{\mu\nu}\) and \(\phi^{\mu\nu}\) should be considered as independent hydrodynamic variables along with \(T\), \(u^{\mu}\) and \(\omega^{\mu\nu}\). The evolution equations of these new hydrodynamic variables can be obtained from Eqs. (33)-(37). Using the explicit expressions of \(\mathcal{D}\), \(\mathcal{A}^{(\mu)}\), \(\mathcal{B}^{(\mu)}\), \(\mathcal{C}^{(\mu\nu)}\) and \(\mathcal{E}^{([\mu\nu])}\) we can write the evolution equations of the different dissipative currents as,
\[D\Pi+\frac{\Pi}{\tau_{\Pi}}= -\frac{1}{2a_{1}}\bigg{[}\beta\theta+a_{1}\Pi\theta+\Pi Da_{1}+(1- l_{\Pi h})h^{\mu}\nabla_{\mu}b_{1}-b_{1}(1-\tilde{l}_{\Pi h})h^{\mu}Du_{\mu}+b_{1} \nabla_{\mu}h^{\mu}+l_{\Pi q}q^{\mu}\nabla_{\mu}b_{4}\] \[-\tilde{l}_{\Pi q}b_{4}q^{\mu}Du_{\mu}+b_{4}\nabla_{\mu}q^{\mu}+l _{\Theta\Pi}\Theta^{\alpha\mu\nu}\Delta_{\alpha\mu}\nabla_{\nu}c_{3}-\tilde{ l}_{\Theta\Pi}c_{3}\Delta_{\alpha\mu}\Theta^{\alpha\mu\nu}Du_{\nu}+c_{3}\Delta_{ \alpha\beta}\nabla_{\mu}\Theta^{\alpha\beta\mu}\bigg{]}, \tag{43}\]
\[Dh^{\langle\mu\rangle}+\frac{h^{\mu}}{\tau_{h}}= -\frac{1}{2a_{3}}\bigg{[}\beta(Du^{\mu}-\beta\nabla^{\mu}T)+a_{3}h^{\mu}\theta+h^{\mu}Da_{3}+l_{\Pi h}\Pi\nabla^{\mu}b_{1}+b_{1}\nabla^{\mu}\Pi-b_{1}\tilde{l}_{\Pi h}\Pi Du^{\mu}+l_{\pi h}\pi^{\lambda\mu}\nabla_{\lambda}b_{2}\] \[+b_{2}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\pi^{\lambda\nu}-b_{2}\tilde{l}_{\pi h}\pi^{\lambda\mu}Du_{\lambda}+l_{\phi h}\phi^{\lambda\mu}\nabla_{\lambda}b_{3}+b_{3}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\phi^{\lambda\nu}-b_{3}\tilde{l}_{\phi h}\phi^{\lambda\mu}Du_{\lambda}+l_{\Phi h}\Phi\nabla^{\mu}\tilde{b}_{1}\] \[+\tilde{b}_{1}\nabla^{\mu}\Phi-\tilde{b}_{1}\tilde{l}_{\Phi h}\Phi Du^{\mu}+l_{\tau_{s}h}\tau_{(s)}^{\lambda\mu}\nabla_{\lambda}\tilde{b}_{2}+\tilde{b}_{2}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\tau^{\lambda\nu}_{(s)}-\tilde{b}_{2}\tilde{l}_{\tau_{s}h}\tau_{(s)}^{\lambda\mu}Du_{\lambda}+l_{\tau_{a}h}\tau^{\lambda\mu}_{(a)}\nabla_{\lambda}\tilde{b}_{3}\] \[+\tilde{b}_{3}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\tau^{\lambda\nu}_{(a)}-\tilde{b}_{3}\tilde{l}_{\tau_{a}h}\tau^{\lambda\mu}_{(a)}Du_{\lambda}\bigg{]}, \tag{44}\]
\[Dq^{\langle\mu\rangle}+\frac{q^{\mu}}{\tau_{q}}= \frac{1}{2a_{4}}\bigg{[}\beta(\beta\nabla^{\mu}T+Du^{\mu}-4\omega^{\mu\nu}u_{\nu})-a_{4}q^{\mu}\theta-q^{\mu}Da_{4}-(1-l_{\Pi q})\Pi\nabla^{\mu}b_{4}-b_{4}\nabla^{\mu}\Pi\] \[+b_{4}(1-\tilde{l}_{\Pi q})\Pi Du^{\mu}-(1-l_{\pi q})\pi^{\lambda\mu}\nabla_{\lambda}b_{5}-b_{5}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\pi^{\lambda\nu}+b_{5}(1-\tilde{l}_{\pi q})\pi^{\lambda\mu}Du_{\lambda}-l_{\phi q}\phi^{\lambda\mu}\nabla_{\lambda}b_{6}\] \[-b_{6}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\phi^{\lambda\nu}+b_{6}\tilde{l}_{\phi q}\phi^{\lambda\mu}Du_{\lambda}-l_{\Phi q}\Phi\nabla^{\mu}\tilde{b}_{4}-\tilde{b}_{4}\nabla^{\mu}\Phi+\tilde{b}_{4}\tilde{l}_{\Phi q}\Phi Du^{\mu}-l_{\tau_{s}q}\tau_{(s)}^{\lambda\mu}\nabla_{\lambda}\tilde{b}_{5}\] \[-\tilde{b}_{5}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\tau^{\lambda\nu}_{(s)}+\tilde{b}_{5}\tilde{l}_{\tau_{s}q}\tau^{\lambda\mu}_{(s)}Du_{\lambda}-l_{\tau_{a}q}\tau^{\lambda\mu}_{(a)}\nabla_{\lambda}\tilde{b}_{6}-\tilde{b}_{6}\Delta^{\mu}_{\ \nu}\nabla_{\lambda}\tau^{\lambda\nu}_{(a)}+\tilde{b}_{6}\tilde{l}_{\tau_{a}q}\tau^{\lambda\mu}_{(a)}Du_{\lambda}\bigg{]}, \tag{45}\]
\[D\pi^{\langle\mu\nu\rangle}+\frac{\pi^{\mu\nu}}{\tau_{\pi}}= -\frac{1}{2a_{2}}\bigg{[}\beta\sigma^{\mu\nu}+a_{2}\theta\pi^{ \mu\nu}+\pi^{\mu\nu}Da_{2}+(1-l_{\pi h})h^{\langle\mu}\nabla^{\nu\rangle}b_{2}- b_{2}(1-\tilde{l}_{\pi h})h^{\langle\mu}Du^{\nu\rangle}\] \[+b_{2}\nabla^{(\mu}h^{\nu)}+l_{\pi q}q^{\langle\mu}\nabla^{\nu \rangle}b_{5}-\tilde{l}_{\pi q}b_{5}q^{(\mu}Du^{\nu)}+b_{5}\nabla^{(\mu}q^{ \nu)}+l_{\Theta\pi}\Theta^{\langle\mu\nu\rangle\alpha}\nabla_{\alpha}c_{4}\] \[-\tilde{l}_{\Theta\pi}c_{4}\Theta^{(\mu\nu)\alpha}Du_{\alpha}+c_{4} \nabla_{\alpha}\Theta^{(\mu\nu)\alpha}\bigg{]}, \tag{46}\]
\[D\phi^{\langle[\mu\nu]\rangle}+\frac{\phi^{\mu\nu}}{\tau_{\phi}}= -\frac{1}{2a_{5}}\bigg{[}\left(\Omega^{\mu\nu}+2\beta\omega^{\langle\mu\rangle\langle\nu\rangle}\right)+a_{5}\theta\phi^{\mu\nu}+\phi^{\mu\nu}Da_{5}+(1-l_{\phi h})h^{[\nu}\nabla^{\mu]}b_{3}\] \[-b_{3}(1-\tilde{l}_{\phi h})h^{[\nu}Du^{\mu]}+b_{3}\Delta^{[\mu\nu]}_{[\alpha\beta]}\nabla^{[\alpha}h^{\beta]}+(1-l_{\phi q})q^{[\nu}\nabla^{\mu]}b_{6}-b_{6}(1-\tilde{l}_{\phi q})q^{[\nu}Du^{\mu]}\] \[+b_{6}\Delta^{[\mu\nu]}_{[\alpha\beta]}\nabla^{[\alpha}q^{\beta]}+l_{\Theta\phi}\Theta^{\lambda\mu\nu}\nabla_{\lambda}c_{1}-\tilde{l}_{\Theta\phi}c_{1}\Theta^{\lambda\mu\nu}Du_{\lambda}+c_{1}\Delta^{[\mu\nu]}_{[\alpha\beta]}\nabla_{\lambda}\Theta^{\lambda\alpha\beta}\] \[+k_{\Theta\phi}\Theta^{[\mu\nu]\lambda}\nabla_{\lambda}c_{7}-\tilde{k}_{\Theta\phi}c_{7}\Theta^{[\mu\nu]\lambda}Du_{\lambda}+c_{7}\Delta^{[\mu\nu]}_{[\alpha\beta]}\nabla_{\lambda}\Theta^{[\alpha\beta]\lambda}\bigg{]}, \tag{47}\]
In the above equations, \(Dh^{\langle\mu\rangle}=\Delta^{\mu}_{\ \nu}Dh^{\nu}\), \(Dq^{\langle\mu\rangle}=\Delta^{\mu}_{\ \nu}Dq^{\nu}\), \(D\pi^{\langle\mu\nu\rangle}=\Delta^{\mu\nu}_{\alpha\beta}D\pi^{\alpha\beta}\), and \(D\phi^{\langle[\mu\nu]\rangle}=\Delta^{[\mu\nu]}_{[\alpha\beta]}D\phi^{\alpha\beta}\). The dissipative currents appearing in the spin tensor also satisfy similar relaxation-type equations,
\[D\Phi+\frac{\Phi}{\tau_{\Phi}}= -\frac{1}{2\tilde{a}_{1}}\bigg{[}-2u^{\alpha}\nabla^{\beta}(\beta\omega_{\alpha\beta})+\tilde{a}_{1}\theta\Phi+\Phi D\tilde{a}_{1}+(1-l_{\Phi h})h^{\mu}\nabla_{\mu}\tilde{b}_{1}-(1-\tilde{l}_{\Phi h})\tilde{b}_{1}h^{\mu}Du_{\mu}+\tilde{b}_{1}\nabla_{\mu}h^{\mu}\] \[+(1-l_{\Phi q})q^{\mu}\nabla_{\mu}\tilde{b}_{4}-(1-\tilde{l}_{\Phi q})\tilde{b}_{4}q^{\mu}Du_{\mu}+\tilde{b}_{4}\nabla_{\mu}q^{\mu}+l_{\Theta\Phi}\Theta^{\alpha\mu\nu}\Delta_{\alpha\mu}\nabla_{\nu}c_{5}\] \[-\tilde{l}_{\Theta\Phi}c_{5}\Delta_{\alpha\mu}\Theta^{\alpha\mu\nu}Du_{\nu}+c_{5}\Delta_{\alpha\beta}\nabla_{\mu}\Theta^{\alpha\beta\mu}\bigg{]}, \tag{48}\]
\[D\tau^{\langle\mu\nu\rangle}_{(s)}+\frac{\tau^{\mu\nu}_{(s)}}{ \tau_{\tau_{s}}}= -\frac{1}{2\tilde{a}_{2}}\bigg{[}-u^{\alpha}\left(\Delta^{\gamma \mu}\Delta^{\rho\nu}+\Delta^{\gamma\nu}\Delta^{\rho\mu}-\frac{2}{3}\Delta^{ \gamma\rho}\Delta^{\mu\nu}\right)\nabla_{\gamma}(\beta\omega_{\alpha\rho})+ \tilde{a}_{2}\theta\tau^{\mu\nu}_{(s)}+\tau^{\mu\nu}_{(s)}D\tilde{a}_{2}\] \[+(1-l_{\tau_{s}h})h^{\langle\mu}\nabla^{\nu\rangle}\tilde{b}_{2} -\tilde{b}_{2}(1-\tilde{l}_{\tau_{s}h})h^{\langle\mu}Du^{\nu\rangle}+\tilde{b }_{2}\nabla^{\langle\mu}h^{\nu\rangle}+(1-l_{\tau_{s}q})q^{\langle\mu}\nabla^{ \nu\rangle}\tilde{b}_{5}\] \[-(1-\tilde{l}_{\tau_{s}q})\tilde{b}_{5}q^{\langle\mu}Du^{\nu \rangle}+\tilde{b}_{5}\nabla^{\langle\mu}q^{\nu\rangle}+l_{\Theta\tau_{s}} \Theta^{\langle\mu\nu\rangle\lambda}\nabla_{\lambda}c_{6}-\tilde{l}_{\Theta \tau_{s}}c_{6}\Theta^{\langle\mu\nu\rangle\lambda}Du_{\lambda}+c_{6}\nabla_{ \lambda}\Theta^{\langle\mu\nu\rangle\lambda}\bigg{]}, \tag{49}\]
\[D\tau^{\langle[\mu\nu]\rangle}_{(a)}+\frac{\tau^{\mu\nu}_{(a)}}{ \tau_{\tau_{a}}}= -\frac{1}{2\tilde{a}_{3}}\bigg{[}-u^{\alpha}(\Delta^{\gamma\mu} \Delta^{\rho\nu}-\Delta^{\gamma\nu}\Delta^{\rho\mu})\nabla_{\gamma}(\beta \omega_{\alpha\rho})+\tilde{a}_{3}\theta\tau^{\mu\nu}_{(a)}+\tau^{\mu\nu}_{(a) }D\tilde{a}_{3}+(1-l_{\tau_{s}h})h^{[\nu}\nabla^{\mu]}\tilde{b}_{3}\] \[-\tilde{b}_{3}(1-\tilde{l}_{\tau_{s}h})h^{[\nu}Du^{\mu]}+\tilde{b }_{3}\Delta^{[\mu\nu]}_{[\alpha\beta]}\nabla^{[\alpha}h^{\beta]}+(1-l_{\tau_{s }q})q^{[\nu}\nabla^{\mu]}\tilde{b}_{6}-\tilde{b}_{6}(1-\tilde{l}_{\tau_{s}q})q ^{[\nu}Du^{\mu]}\] \[+\tilde{b}_{6}\Delta^{[\mu\nu]}_{[\alpha\beta]}\nabla^{[\alpha}q^ {\beta]}+l_{\Theta\tau_{a}}\Theta^{\lambda\mu\nu}\nabla_{\lambda}c_{2}-\tilde{l }_{\Theta\tau_{s}}c_{2}\Theta^{\lambda\mu\nu}Du_{\lambda}+c_{2}\Delta^{[\mu\nu] }_{[\alpha\beta]}\nabla_{\lambda}\Theta^{\lambda\alpha\beta}+k_{\Theta\tau_{a}} \Theta^{[\mu\nu]\lambda}\nabla_{\lambda}c_{8}\] \[-\tilde{k}_{\Theta\tau_{a}}c_{8}\Theta^{[\mu\nu]\lambda}Du_{\lambda }+c_{8}\Delta^{[\mu\nu]}_{[\alpha\beta]}\nabla_{\lambda}\Theta^{[\alpha\beta] \lambda}\bigg{]}, \tag{50}\]
\[D\Theta^{\langle\alpha\rangle\langle\mu\rangle\langle\nu\rangle}+ \frac{\Theta^{\alpha\mu\nu}}{\tau_{\Theta}}= -\frac{1}{2\tilde{a}_{4}}\bigg{[}-\Delta^{\delta\mu}\Delta^{\rho \nu}\Delta^{\gamma\alpha}\nabla_{\gamma}(\beta\omega_{\delta\rho})+\tilde{a}_{ 4}\theta\Theta^{\alpha\mu\nu}+\Theta^{\alpha\mu\nu}D\tilde{a}_{4}+(1-l_{ \Theta\phi})\phi^{\mu\nu}\nabla^{\alpha}c_{1}\] \[-(1-\tilde{l}_{\Theta\phi})c_{1}\phi^{\mu\nu}Du^{\alpha}+c_{1} \Delta^{\alpha a}\Delta^{\mu b}\Delta^{\nu c}\nabla_{a}\phi_{bc}+(1-l_{ \Theta\tau_{s}})\tau^{\mu\nu}_{(a)}\nabla^{\alpha}c_{2}\] \[-(1-\tilde{l}_{\Theta\tau_{s}})c_{2}\tau^{\mu\nu}_{(a)}Du^{\alpha }+c_{2}\Delta^{\alpha a}\Delta^{\mu b}\Delta^{\nu c}\nabla_{a}\tau_{bc(a)}+(1-l_ {\Theta\Pi})\Pi\Delta^{\alpha[\mu}\nabla^{\nu]}c_{3}\] \[-(1-\tilde{l}_{\Theta\Pi})c_{3}\Pi\Delta^{\alpha[\mu}Du^{\nu]}+c_{ 3}\Delta^{\alpha[\mu}\nabla^{\nu]}\Pi+(1-l_{\Theta\Phi})\Phi\Delta^{\alpha[ \mu}\nabla^{\nu]}c_{5}\] \[-(1-\tilde{l}_{\Theta\Phi})c_{5}\Phi\Delta^{\alpha[\mu}Du^{\nu]}+c_{ 5}\Delta^{\alpha[\mu}\nabla^{\nu]}\Phi+(1-l_{\Theta\pi})\pi^{\alpha[\mu}\nabla^ {\nu]}c_{4}\] \[-(1-\tilde{l}_{\Theta\pi})c_{4}\pi^{\alpha[\mu}Du^{\nu]}+c_{4} \Delta^{\alpha a}\Delta^{\mu b}\Delta^{\nu c}\nabla_{[\pi ab]}+(1-l_{\Theta\tau_{s }})\tau^{\alpha[\mu}_{(s)}\nabla^{\nu]}c_{6}\] \[-(1-\tilde{l}_{\Theta\tau_{s}})c_{6}\tau^{\alpha[\mu}_{(s)}Du^{\nu]} +c_{6}\Delta^{\alpha a}\Delta^{\mu b}\Delta^{\nu c}\nabla_{[\tau_{s}(s)ab]}+(1-k_{ \Theta\phi})\phi^{\alpha[\mu}\nabla^{\nu]}c_{7}\] \[-(1-\tilde{k}_{\Theta\phi})c_{7}\phi^{\alpha[\mu}Du^{\nu]}+c_{7} \Delta^{\alpha a}\Delta^{\mu b}\Delta^{\nu c}\nabla_{[\phi}c_{\vartheta\phi]}+(1-k_{ \Theta\tau_{s}})\tau^{\alpha[\mu}_{(a)}\nabla^{\nu]}c_{8}\] \[-(1-\tilde{k}_{\Theta\tau_{s}})c_{8}\tau^{\alpha[\mu}_{(a)}Du^{\nu]} +c_{8}\Delta^{\alpha a}\Delta^{\mu b}\Delta^{\nu c}\nabla_{[c}\tau_{(a)ab]} \bigg{]}. \tag{51}\]
In the above equations, \(D\tau^{\langle\mu\nu\rangle}_{(s)}\equiv\Delta^{\mu\nu}_{\alpha\beta}D\tau^{\alpha\beta}_{(s)}\), \(D\tau^{\langle[\mu\nu]\rangle}_{(a)}\equiv\Delta^{[\mu\nu]}_{[\alpha\beta]}D\tau^{\alpha\beta}_{(a)}\), and \(D\Theta^{\langle\alpha\rangle\langle\mu\rangle\langle\nu\rangle}\equiv\Delta^{\alpha a}\Delta^{\mu b}\Delta^{\nu c}D\Theta_{abc}\). The various spin-relaxation times can be identified as \(\tau_{\Phi}=-2\tilde{a}_{1}\chi_{1}\geq 0\), \(\tau_{\tau_{s}}\equiv-2\tilde{a}_{2}\chi_{2}\geq 0\), \(\tau_{\tau_{a}}\equiv-2\tilde{a}_{3}\chi_{3}\geq 0\), and \(\tau_{\Theta}\equiv 2\tilde{a}_{4}\chi_{4}\geq 0\). In comparison to first-order spin hydrodynamics, one of the most important features of the second-order theory is the presence of relaxation times corresponding to the various dissipative currents. These relaxation times represent the time scales within which the dissipative currents respond to hydrodynamic gradients; they are new parameters of the second-order theory. As a consistency check, one can take the Navier-Stokes limit of the second-order theory. This can be achieved by ignoring all second-order terms in the hydrodynamic gradient expansion in Eqs. (43)-(51). In this limit, we recover the constitutive relations of the various dissipative currents associated with the energy-momentum tensor; e.g., from Eq. (43) we find, after ignoring all \(\mathcal{O}(\partial^{2})\) terms,
\[\Pi=-\frac{\tau_{\Pi}}{2a_{1}}\beta\theta=\zeta\theta. \tag{52}\]
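Comparing Eq. (52) with Eq. (13) suggests the identification of the bulk relaxation time in terms of the corresponding first-order transport coefficient,
\[\tau_{\Pi}=-\frac{2a_{1}\zeta}{\beta}=-2a_{1}\zeta T\geq 0,\]
which is non-negative since \(a_{1}\leq 0\) and \(\zeta\geq 0\); analogous identifications hold for the other relaxation times, in analogy with the spin relaxation times identified above.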
Similarly, the constitutive relations for \(h^{\mu}\), \(q^{\mu}\), \(\pi^{\mu\nu}\), and \(\phi^{\mu\nu}\) can be obtained from Eqs. (44), (45), (46), and (47), respectively. These expressions match Eqs. (14)-(17). However, if we ignore all \(\mathcal{O}(\partial^{2})\) terms in Eqs. (48)-(51), then we observe that \(\Phi=0+\mathcal{O}(\partial^{2})\), \(\tau^{\mu\nu}_{(s)}=0+\mathcal{O}(\partial^{2})\), \(\tau^{\mu\nu}_{(a)}=0+\mathcal{O}(\partial^{2})\), and \(\Theta^{\alpha\mu\nu}=0+\mathcal{O}(\partial^{2})\). This immediately implies that in the Navier-Stokes limit the gradient correction terms to the spin tensor do not contribute to the entropy production, and \(S^{\lambda\mu\nu}_{(1)}\) can only be obtained in the second-order theory.
## IV Conclusions and outlook
In this paper, we have presented a new derivation of the second-order dissipative spin hydrodynamic equations. The formulation is based on the positivity of the entropy production for a dissipative system. We consider an asymmetric energy-momentum tensor and a spin tensor of the simple phenomenological form that is antisymmetric only in the last two indices. One recovers the correct Navier-Stokes limit as well as the global equilibrium conditions. Our calculations can be used to study macroscopic spin evolution and may help to resolve the puzzle related to the longitudinal polarization of Lambda particles in a dynamical way. However, this requires a proper numerical implementation of the spin hydrodynamic equations along with appropriate initial conditions and hadronic freezeout. One immediate future task is to carry out a stability and causality analysis to pin down the region of applicability of this theory. Although we have obtained relaxation-time-type hydrodynamic equations, the framework still lacks input from the underlying microscopic theory. This is manifested in the large number of unknown transport coefficients and relaxation times. Note that a dissipative hydrodynamic theory captures the long-wavelength and long-time behavior of a system away from equilibrium, whereas the transport coefficients encode the microscopic physics at length scales smaller than the domain of applicability of hydrodynamics. The estimation of the various relaxation times and transport coefficients is very important for phenomenological applications. Only a bottom-up approach, in which the spin hydrodynamic equations are obtained from an underlying kinetic theory, can bridge this problem. Finding an equivalent kinetic theory description without further assumptions will be a good direction to explore as a future task.
**Acknowledgements:** We thank Leonardo Tinti for clarifications. This work was supported in part by the Polish National Science Centre Grant Nos 2018/30/E/ST2/00432 and 2020/39/D/ST2/02054. RB acknowledges the financial support from SPS, NISER (Bhubaneswar, India) planned project RIN4001. RB has been supported, in part, by the Polish National Science Centre (NCN) Sonata Bis grant 2019/34/E/ST3/00405 and the International Max Planck Research School for "Quantum Dynamics and Control".
## Appendix A Constraint on the form of \(Q^{\mu}\)
Contracting the second-order entropy current (18) with the fluid four-velocity, and using the fact that \(u_{\mu}S^{\mu\alpha\beta}_{(1)}=0\), we get
\[u_{\mu}s^{\mu}_{\mathrm{IS}}=u_{\mu}s^{\mu}_{\mathrm{NS}}+u_{\mu}Q^{\mu}. \tag{53}\]
Substituting the form of \(s^{\mu}_{\mathrm{NS}}\) (12) in the above equation, we have
\[u_{\mu}s^{\mu}_{\mathrm{IS}} =u_{\mu}\left[s^{\mu}+\beta_{\nu}T^{\mu\nu}_{(1)}+\mathcal{O}( \partial^{2})\right]+u_{\mu}Q^{\mu},\] \[=u_{\mu}s^{\mu}+u_{\mu}Q^{\mu}. \tag{54}\]
Utilizing the perfect-fluid energy-momentum tensor (5), and replacing the form of the entropy current \(s^{\mu}\) (11) we find,
\[u_{\mu}s^{\mu}_{\mathrm{IS}} =u_{\mu}\left(\beta_{\nu}T^{\mu\nu}_{(0)}+\beta^{\mu}p-\beta^{\mu }\omega_{\alpha\beta}S^{\alpha\beta}\right)+u_{\mu}Q^{\mu},\] \[=u_{\mu}\left[\beta_{\nu}(\varepsilon+p)u^{\mu}u^{\nu}-\beta_{ \nu}pg^{\mu\nu}+\beta^{\mu}p-\beta^{\mu}\omega_{\alpha\beta}S^{\alpha\beta} \right]+u_{\mu}Q^{\mu},\] \[=\beta\left[(\varepsilon+p)-\omega_{\alpha\beta}S^{\alpha\beta} \right]+u_{\mu}Q^{\mu}. \tag{55}\]
Finally, using the generalized first law of thermodynamics (9), we obtain
\[u_{\mu}s^{\mu}_{\rm IS}=s+u_{\mu}Q^{\mu}. \tag{100}\]
Employing the fact that entropy is maximum in equilibrium, we obtain the constraint on \(Q^{\mu}\), i.e.,
\[u_{\mu}Q^{\mu}\leq 0. \tag{101}\]
## Appendix B Decomposition of an arbitrary 3-rank tensor antisymmetric in last two indices
Let us consider an arbitrary three-rank tensor \(\phi^{\lambda\mu\nu}\) antisymmetric in last two indices. Employing the decomposition of its first index into the parts transverse and parallel to four-velocity, one has
\[\phi^{\lambda\mu\nu} = g^{\lambda}_{\ \alpha}\phi^{\alpha\mu\nu}=(u^{\lambda}u_{\alpha}+ \Delta^{\lambda}_{\ \alpha})\phi^{\alpha\mu\nu} \tag{102}\] \[= u^{\lambda}\gamma^{\mu\nu}+\Delta^{\lambda}_{\ \alpha}\phi^{\alpha\mu\nu}\] \[= u^{\lambda}\gamma^{\mu\nu}+\phi^{\langle\lambda\rangle\mu\nu}\]
Here we define antisymmetric tensor \(\gamma^{\mu\nu}\equiv u_{\alpha}\phi^{\alpha\mu\nu}\). This immediately implies that \(F^{\nu}\equiv u_{\mu}\gamma^{\mu\nu}\) satisfies \(F\cdot u=0\). In the next step, we proceed with the decomposition of \(\gamma^{\mu\nu}\)
\[\gamma^{\mu\nu} = g^{\mu}_{\ \rho}\gamma^{\rho\nu}=(u^{\mu}u_{\rho}+\Delta^{\mu}_{ \ \rho})\gamma^{\rho\nu}=u^{\mu}F^{\nu}+\gamma^{\langle\mu\rangle\nu} \tag{103}\] \[= u^{\mu}F^{\nu}+g^{\nu}_{\ \rho}\gamma^{\langle\mu\rangle\rho}=u^{ \mu}F^{\nu}+(u^{\nu}u_{\rho}+\Delta^{\nu}_{\ \rho})\gamma^{\langle\mu\rangle\rho}\] \[= u^{\mu}F^{\nu}+u^{\nu}u_{\rho}\gamma^{\langle\mu\rangle\rho}+ \gamma^{\langle\mu\rangle\langle\nu\rangle}\]
It can be easily shown that \(u^{\nu}u_{\rho}\gamma^{\langle\mu\rangle\rho}=-u^{\nu}F^{\mu}\). Therefore, \(\gamma^{\mu\nu}\) has the form,
\[\gamma^{\mu\nu}=u^{\mu}F^{\nu}-u^{\nu}F^{\mu}+\gamma^{\langle\mu\rangle \langle\nu\rangle}. \tag{104}\]
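The intermediate identity used above follows from the antisymmetry of \(\gamma^{\mu\nu}\) together with \(F\cdot u=0\):
\[u_{\rho}\gamma^{\langle\mu\rangle\rho}=\Delta^{\mu}_{\ \alpha}\,u_{\rho}\gamma^{\alpha\rho}=-\Delta^{\mu}_{\ \alpha}\,u_{\rho}\gamma^{\rho\alpha}=-\Delta^{\mu}_{\ \alpha}F^{\alpha}=-F^{\mu}.\]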
Now, let us consider the last term in Eq. (102),
\[\phi^{\langle\lambda\rangle\mu\nu}=g^{\mu}_{\ \rho}\phi^{\langle \lambda\rangle\rho\nu}=(u^{\mu}u_{\rho}+\Delta^{\mu}_{\ \rho})\phi^{\langle\lambda\rangle\rho\nu}=u^{\mu}u_{\rho}\phi^{\langle \lambda\rangle\rho\nu}+\phi^{\langle\lambda\rangle\langle\mu\rangle\nu} \tag{105}\]
Defining \(u_{\rho}\phi^{\langle\lambda\rangle\rho\nu}\equiv-\Sigma^{\lambda\nu}\) implies \(u_{\lambda}\Sigma^{\lambda\nu}=0\). Therefore,
\[\phi^{\langle\lambda\rangle\mu\nu} = \phi^{\langle\lambda\rangle\langle\mu\rangle\nu}-u^{\mu}\Sigma^{\lambda\nu} \tag{106}\] \[= g^{\nu}_{\ \alpha}\phi^{\langle\lambda\rangle\langle\mu\rangle\alpha}-u^{\mu}\Sigma^{\lambda\nu}\] \[= \phi^{\langle\lambda\rangle\langle\mu\rangle\langle\nu\rangle}+u^{\nu}u_{\alpha}\phi^{\langle\lambda\rangle\langle\mu\rangle\alpha}-u^{\mu}\Sigma^{\lambda\nu}\] \[= \phi^{\langle\lambda\rangle\langle\mu\rangle\langle\nu\rangle}+u^{\nu}\Sigma^{\lambda\mu}-u^{\mu}\Sigma^{\lambda\nu}.\]
Using Eqs. (104) and (106) in Eq. (102) we obtain,
\[\phi^{\lambda\mu\nu}=u^{\lambda}\left(u^{\mu}F^{\nu}-u^{\nu}F^{\mu}+\gamma^{ \langle\mu\rangle\langle\nu\rangle}\right)+u^{\nu}\Sigma^{\lambda\mu}-u^{\mu} \Sigma^{\lambda\nu}+\phi^{\langle\lambda\rangle\langle\mu\rangle\langle\nu \rangle}. \tag{107}\]
Here we introduce \(\mathcal{S}^{\mu\nu}\equiv u^{\mu}F^{\nu}-u^{\nu}F^{\mu}+\gamma^{\langle\mu\rangle\langle\nu\rangle}\). Noticing that \(\mathcal{S}^{\mu\nu}\) is an antisymmetric tensor that can also be decomposed as \(\mathcal{S}^{\mu\nu}\equiv u^{\mu}\kappa^{\nu}-u^{\nu}\kappa^{\mu}+\epsilon^{\mu\nu\alpha\beta}u_{\alpha}\omega_{\beta}\), with \(u\cdot\kappa=0\) and \(u\cdot\omega=0\), we identify \(F^{\nu}=\kappa^{\nu}\) and \(\gamma^{\langle\mu\rangle\langle\nu\rangle}\equiv\epsilon^{\mu\nu\alpha\beta}u_{\alpha}\omega_{\beta}\) [102]. Since \(\Sigma^{\mu\nu}\) is asymmetric (not antisymmetric!) and orthogonal to \(u^{\mu}\), it can be decomposed into symmetric (\(\Sigma^{\mu\nu}_{(s)}\)) and antisymmetric (\(\Sigma^{\mu\nu}_{(a)}\)) parts. The symmetric part can be further decomposed into a trace part (\(\Sigma\)) and a traceless part (\(\Sigma^{\langle\mu\nu\rangle}_{(s)}\)). Finally, we obtain the following expression,
\[\phi^{\lambda\mu\nu}=u^{\lambda}\mathcal{S}^{\mu\nu}+\left(u^{\nu}\Delta^{ \lambda\mu}-u^{\mu}\Delta^{\lambda\nu}\right)\Sigma+\left(u^{\nu}\Sigma^{ \langle\lambda\mu\rangle}_{(s)}-u^{\mu}\Sigma^{\langle\lambda\nu\rangle}_{(s)} \right)+\left(u^{\nu}\Sigma^{\lambda\mu}_{(a)}-u^{\mu}\Sigma^{\lambda\nu}_{(a) }\right)+\phi^{\langle\lambda\rangle\langle\mu\rangle\langle\nu\rangle}. \tag{108}\]
One may check that the number of degrees of freedom (DOF) matches for the quantities on both sides of the above equation. The tensor \(\phi^{\lambda\mu\nu}\) has in total 24 DOF. At the same time, \(\mathcal{S}^{\mu\nu}\) has 6 DOF, and \(\Sigma\) is a scalar, hence it has only one DOF. \(\Sigma^{\langle\mu\nu\rangle}_{(s)}\) is symmetric, traceless, and orthogonal to the fluid flow vector, hence it has 5 DOF, while \(\Sigma^{\mu\nu}_{(a)}\) is antisymmetric and transverse to the fluid flow, hence it has 3 DOF. Finally, \(\phi^{\langle\lambda\rangle\langle\mu\rangle\langle\nu\rangle}\) is antisymmetric in the last two indices and orthogonal to flow vector in all indices, hence it has only 9 DOF.
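Explicitly, the tally reads
\[\underbrace{6}_{\mathcal{S}^{\mu\nu}}+\underbrace{1}_{\Sigma}+\underbrace{5}_{\Sigma^{\langle\mu\nu\rangle}_{(s)}}+\underbrace{3}_{\Sigma^{\mu\nu}_{(a)}}+\underbrace{9}_{\phi^{\langle\lambda\rangle\langle\mu\rangle\langle\nu\rangle}}=24=4\times 6,\]
in agreement with the \(4\times 6\) independent components of a rank-3 tensor antisymmetric in its last two indices.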
## Appendix C Derivation of Eq. (21)
We start with the entropy current given in Eq. (18),
\[s^{\mu}_{\text{IS}}=\beta_{\nu}T^{\mu\nu}+p\beta^{\mu}-\beta\omega_{\alpha\beta}S^{\mu\alpha\beta}+Q^{\mu}\]
\[\Longrightarrow\partial_{\mu}s^{\mu}_{\text{IS}}=T^{\mu\nu}\partial_{\mu}\beta_{\nu}+\beta_{\nu}\partial_{\mu}T^{\mu\nu}+\partial_{\mu}(p\beta^{\mu})-S^{\mu\alpha\beta}\partial_{\mu}(\beta\omega_{\alpha\beta})-\beta\omega_{\alpha\beta}\partial_{\mu}S^{\mu\alpha\beta}+\partial_{\mu}Q^{\mu}.\]
## Appendix E Explicit expressions for \(\mathcal{D}\), \(\mathcal{A}^{\mu}\), \(\mathcal{B}^{\mu}\), \(\mathcal{C}^{\mu\nu}\), \(\mathcal{E}^{\mu\nu}\), \(\mathcal{F}\), \(\mathcal{G}^{\mu\nu}\), \(\mathcal{H}^{\mu\nu}\), and \(\mathcal{I}^{\alpha\mu\nu}\)
The derivation of the following scalars, vectors, and tensors starts by taking the partial derivative of \(Q^{\mu}\) in Eq. (20). Note that the partial derivatives of the parameters \(a_{i},\tilde{a}_{i},b_{i},\tilde{b}_{i}\), and \(c_{i}\) are not zero. The next step is to collect all terms containing a common dissipative current. In this process, one encounters terms that involve two different dissipative currents, for example, "\(\pi_{\mu\nu}h^{\nu}\nabla^{\mu}b_{2}\)". For such terms, we have introduced the constants \(l\) and \(\tilde{l}\) such that
\[\pi_{\mu\nu}h^{\nu}\nabla^{\mu}b_{2}=l_{h\pi}\pi_{\mu\nu}h^{\nu}\nabla^{\mu}b_ {2}+(1-l_{h\pi})\pi_{\mu\nu}h^{\nu}\nabla^{\mu}b_{2} \tag{101}\]
Following the above procedure we obtain,
\[\mathcal{D}= a_{1}\Pi\theta+\Pi Da_{1}+2a_{1}D\Pi+(1-l_{\Pi h})h^{\mu}\nabla_{\mu}b_{1}-b_{1}(1-\tilde{l}_{\Pi h})h^{\mu}Du_{\mu}+b_{1}\nabla_{\mu}h^{\mu}+l_{\Pi q}q^{\mu}\nabla_{\mu}b_{4}\] \[-\tilde{l}_{\Pi q}b_{4}q^{\mu}Du_{\mu}+b_{4}\nabla_{\mu}q^{\mu}+l_{\Theta\Pi}\Theta^{\alpha\mu\nu}\Delta_{\alpha\mu}\nabla_{\nu}c_{3}-\tilde{l}_{\Theta\Pi}c_{3}\Delta_{\alpha\mu}\Theta^{\alpha\mu\nu}Du_{\nu}+c_{3}\Delta_{\alpha\beta}\nabla_{\mu}\Theta^{\alpha\beta\mu}. \tag{102}\]
\[\mathcal{A}^{\mu}= a_{3}h^{\mu}\theta+h^{\mu}Da_{3}+2a_{3}Dh^{\mu}+l_{\Pi h}\Pi\nabla^{\mu}b_{1}+b_{1}\nabla^{\mu}\Pi-b_{1}\tilde{l}_{\Pi h}\Pi Du^{\mu}+l_{\pi h}\pi^{\lambda\mu}\nabla_{\lambda}b_{2}+b_{2}\nabla_{\lambda}\pi^{\lambda\mu}\] \[-b_{2}\tilde{l}_{\pi h}\pi^{\lambda\mu}Du_{\lambda}+l_{\phi h}\phi^{\lambda\mu}\nabla_{\lambda}b_{3}+b_{3}\nabla_{\lambda}\phi^{\lambda\mu}-b_{3}\tilde{l}_{\phi h}\phi^{\lambda\mu}Du_{\lambda}+l_{\Phi h}\Phi\nabla^{\mu}\tilde{b}_{1}+\tilde{b}_{1}\nabla^{\mu}\Phi-\tilde{b}_{1}\tilde{l}_{\Phi h}\Phi Du^{\mu}\] \[+l_{\tau_{s}h}\tau_{(s)}^{\lambda\mu}\nabla_{\lambda}\tilde{b}_{2}+\tilde{b}_{2}\nabla_{\lambda}\tau_{(s)}^{\lambda\mu}-\tilde{b}_{2}\tilde{l}_{\tau_{s}h}\tau_{(s)}^{\lambda\mu}Du_{\lambda}+l_{\tau_{a}h}\tau_{(a)}^{\lambda\mu}\nabla_{\lambda}\tilde{b}_{3}+\tilde{b}_{3}\nabla_{\lambda}\tau_{(a)}^{\lambda\mu}-\tilde{b}_{3}\tilde{l}_{\tau_{a}h}\tau_{(a)}^{\lambda\mu}Du_{\lambda}. \tag{103}\]
\[\mathcal{B}^{\mu}= a_{4}q^{\mu}\theta+q^{\mu}Da_{4}+2a_{4}Dq^{\mu}+(1-l_{\Pi q})\Pi\nabla^{\mu}b_{4}+b_{4}\nabla^{\mu}\Pi-b_{4}(1-\tilde{l}_{\Pi q})\Pi Du^{\mu}+(1-l_{\pi q})\pi^{\lambda\mu}\nabla_{\lambda}b_{5}\] \[+b_{5}\nabla_{\lambda}\pi^{\lambda\mu}-b_{5}(1-\tilde{l}_{\pi q})\pi^{\lambda\mu}Du_{\lambda}+l_{\phi q}\phi^{\lambda\mu}\nabla_{\lambda}b_{6}+b_{6}\nabla_{\lambda}\phi^{\lambda\mu}-b_{6}\tilde{l}_{\phi q}\phi^{\lambda\mu}Du_{\lambda}+l_{\Phi q}\Phi\nabla^{\mu}\tilde{b}_{4}+\tilde{b}_{4}\nabla^{\mu}\Phi\] \[-\tilde{b}_{4}\tilde{l}_{\Phi q}\Phi Du^{\mu}+l_{\tau_{s}q}\tau_{(s)}^{\lambda\mu}\nabla_{\lambda}\tilde{b}_{5}+\tilde{b}_{5}\nabla_{\lambda}\tau_{(s)}^{\lambda\mu}-\tilde{b}_{5}\tilde{l}_{\tau_{s}q}\tau_{(s)}^{\lambda\mu}Du_{\lambda}+l_{\tau_{a}q}\tau_{(a)}^{\lambda\mu}\nabla_{\lambda}\tilde{b}_{6}+\tilde{b}_{6}\nabla_{\lambda}\tau_{(a)}^{\lambda\mu}-\tilde{b}_{6}\tilde{l}_{\tau_{a}q}\tau_{(a)}^{\lambda\mu}Du_{\lambda}. \tag{104}\]
\[\mathcal{C}^{\mu\nu}= a_{2}\theta\pi^{\mu\nu}+\pi^{\mu\nu}Da_{2}+2a_{2}D\pi^{\mu\nu}+(1-l_{ \pi h})h^{(\nu}\nabla^{\mu)}b_{2}-b_{2}(1-\tilde{l}_{\pi h})h^{(\nu}Du^{\mu)}+ b_{2}\nabla^{(\mu}h^{\nu)}\] \[+l_{\pi q}q^{(\nu}\nabla^{\mu)}b_{5}-\tilde{l}_{\pi q}b_{5}q^{(\nu }Du^{\mu)}+b_{5}\nabla^{(\mu}q^{\nu)}+l_{\Theta\pi}\Theta^{(\mu\nu)\alpha} \nabla_{\alpha}c_{4}-\tilde{l}_{\Theta\pi}c_{4}\Theta^{(\mu\nu)\alpha}Du_{ \alpha}+c_{4}\nabla_{\alpha}\Theta^{(\mu\nu)\alpha}. \tag{105}\]
\[\mathcal{E}^{\mu\nu}= a_{5}\theta\phi^{\mu\nu}+\phi^{\mu\nu}Da_{5}+2a_{5}D\phi^{\mu\nu}+(1-l_{\phi h})h^{[\nu}\nabla^{\mu]}b_{3}-b_{3}(1-\tilde{l}_{\phi h})h^{[\nu}Du^{\mu]}+b_{3}\nabla^{[\mu}h^{\nu]}\] \[+(1-l_{\phi q})q^{[\nu}\nabla^{\mu]}b_{6}-b_{6}(1-\tilde{l}_{\phi q})q^{[\nu}Du^{\mu]}+b_{6}\nabla^{[\mu}q^{\nu]}+l_{\Theta\phi}\Theta^{\lambda\mu\nu}\nabla_{\lambda}c_{1}-\tilde{l}_{\Theta\phi}c_{1}\Theta^{\lambda\mu\nu}Du_{\lambda}\] \[+c_{1}\nabla_{\lambda}\Theta^{\lambda\mu\nu}+k_{\Theta\phi}\Theta^{[\mu\nu]\lambda}\nabla_{\lambda}c_{7}-\tilde{k}_{\Theta\phi}c_{7}\Theta^{[\mu\nu]\lambda}Du_{\lambda}+c_{7}\nabla_{\lambda}\Theta^{[\mu\nu]\lambda}. \tag{106}\]
\[\mathcal{F}= \tilde{a}_{1}\theta\Phi+\Phi D\tilde{a}_{1}+2\tilde{a}_{1}D\Phi+(1-l_{\Phi h})h^{\mu}\nabla_{\mu}\tilde{b}_{1}-(1-\tilde{l}_{\Phi h})\tilde{b}_{1}h^{\mu}Du_{\mu}+\tilde{b}_{1}\nabla_{\mu}h^{\mu}+(1-l_{\Phi q})q^{\mu}\nabla_{\mu}\tilde{b}_{4}\] \[-(1-\tilde{l}_{\Phi q})\tilde{b}_{4}q^{\mu}Du_{\mu}+\tilde{b}_{4}\nabla_{\mu}q^{\mu}+l_{\Theta\Phi}\Theta^{\alpha\mu\nu}\Delta_{\alpha\mu}\nabla_{\nu}c_{5}-\tilde{l}_{\Theta\Phi}c_{5}\Delta_{\alpha\mu}\Theta^{\alpha\mu\nu}Du_{\nu}+c_{5}\Delta_{\alpha\beta}\nabla_{\mu}\Theta^{\alpha\beta\mu}. \tag{107}\]
\[\mathcal{G}^{\mu\nu}= \tilde{a}_{2}\theta\tau_{(s)}^{\mu\nu}+\tau_{(s)}^{\mu\nu}D\tilde{a} _{2}+2\tilde{a}_{2}D\tau_{(s)}^{\mu\nu}+(1-l_{\tau_{s}h})h^{(\nu}\nabla^{\mu)} \tilde{b}_{2}-\tilde{b}_{2}(1-\tilde{l}_{\tau_{s}h})h^{(\nu}Du^{\mu)}+\tilde{b }_{2}\nabla^{(\mu}h^{\nu)}\] \[+(1-l_{\tau_{s}q})q^{(\nu}\nabla^{\mu)}\tilde{b}_{5}-(1-\tilde{l}_ {\tau_{s}q})\tilde{b}_{5}q^{(\nu}Du^{\mu)}+\tilde{b}_{5}\nabla^{(\mu}q^{\nu) }+l_{\Theta\tau |
2310.03959 | Towards Increasing the Robustness of Predictive Steering-Control
Autonomous Navigation Systems Against Dash Cam Image Angle Perturbations Due
to Pothole Encounters | Vehicle manufacturers are racing to create autonomous navigation and steering
control algorithms for their vehicles. These software are made to handle
various real-life scenarios such as obstacle avoidance and lane maneuvering.
There is some ongoing research to incorporate pothole avoidance into these
autonomous systems. However, there is very little research on the effect of
hitting a pothole on the autonomous navigation software that uses cameras to
make driving decisions. Perturbations in the camera angle when hitting a
pothole can cause errors in the predicted steering angle. In this paper, we
present a new model to compensate for such angle perturbations and reduce any
errors in steering control prediction algorithms. We evaluate our model on
perturbations of publicly available datasets and show our model can reduce the
errors in the estimated steering angle from perturbed images to 2.3%, making
autonomous steering control robust against the dash cam image angle
perturbations induced when one wheel of a car goes over a pothole. | Shivam Aarya | 2023-10-06T00:58:52Z | http://arxiv.org/abs/2310.03959v1 | Towards Increasing the Robustness of Predictive Steering-Control Autonomous Navigation Systems Against Dash Cam Image Angle Perturbations Due to Pothole Encounters*
###### Abstract
Vehicle manufacturers are racing to create autonomous navigation and steering control algorithms for their vehicles. These software systems are designed to handle various real-life scenarios such as obstacle avoidance and lane maneuvering. There is some ongoing research to incorporate pothole avoidance into these autonomous systems. However, there is very little research on the effect of hitting a pothole on the autonomous navigation software that uses cameras to make driving decisions. Perturbations in the camera angle when hitting a pothole can cause errors in the predicted steering angle. In this paper, we present a new model to compensate for such angle perturbations and reduce any errors in steering control prediction algorithms. We evaluate our model on perturbations of publicly available datasets and show our model can reduce the errors in the estimated steering angle from perturbed images to 2.3%, making autonomous steering control robust against the dash cam image angle perturbations induced when one wheel of a car goes over a pothole.
## I Introduction
Cars have become the main transportation method in many parts of the world. An average American drives 14,263 miles per year, amounting to 3.2 trillion miles driven annually, according to the Federal Highway Administration [1]. Increasing adoption of cars around the world also drives the increase in the number of road accidents. As per the Centers for Disease Control and Prevention (CDC), 1.35 million people are killed annually due to road accidents, making it the eighth leading cause of death. While analyses of road safety focus on driving under the influence, speeding, mobile phone use, and fatigued driving [2], they usually overlook the danger posed by deteriorating road conditions that lead to the formation of potholes.
Cars are increasingly being equipped with automated steering control, autopilot (i.e., autonomous braking and acceleration), and predictive safety maneuvers, towards full self-driving capabilities. By reducing the dependence on human drivers, they can reduce the leading causes of accidents that are all attributed to driver errors. However, these autonomous navigation systems have not yet been adapted to the problem of avoiding potholes on the road that human drivers usually handle. The automated systems will simply drive at full speed without adjusting the steering angle (the degree to which the steering wheel of the car is rotated) over a pothole as if it does not exist, which can be extremely dangerous if an inattentive driver fails to take control and correct the car's trajectory before the impact.
While most drivers consider them a minor nuisance that makes their drive less comfortable or a minor obstacle on the road to be avoided, potholes can actually have a dangerously significant impact on a vehicle. Direct contact with a pothole could result in an impact that causes injury, major damage to the vehicle, or even a result equivalent to that of a 35 mph vehicular crash [3]. This is not an uncommon problem, with $26 billion being spent by drivers in 2021 to pay for pothole repair, and with 1 in 10 drivers needing to repair their vehicle after hitting a pothole [4].
Therefore, it is becoming crucial to address the issue of pothole avoidance in these systems and to predict a car's response to driving over a pothole. While research is being done to develop methods to recognize [5] and avoid potholes [6], these capabilities are yet to appear in production vehicles. When an autonomous vehicle goes over a pothole, the images captured by the cameras get perturbed. These perturbations can impact the accuracy of steering control at the moment of a pothole encounter. There have been works to make autonomous steering control models robust to irrelevant objects (e.g., buildings, shrubs, etc.) in the scene not encountered in the training data so as to make pre-trained models robust to unseen irrelevant objects [7, 8]. However, there have been very few works to develop methods to make autonomous steering control robust to perturbations in images due to the vehicle's encounters with a pothole. It is important to consider the consequences of a pothole impact and to regulate its effect on the car's autonomous navigation systems to maintain a safe response to the impact and keep the driver as far as possible from harm.
In this paper, we train a denoising autoencoder to increase the robustness of the autonomous navigation systems against the image perturbations (any deviation made to the image to change it from the original image, e.g., rotation in dash cam image angle) that result from encounters with potholes. We train and test our proposed model via experiments on publicly available data sets collected in real life.
## II Related Work
Multiple researchers have attempted to solve the problem of corrective obstacle avoidance. In these methods, the avoidance logic can be easily extended to potholes in the road such that the autonomous navigation model will turn the
steering wheel accordingly to avoid the incoming pothole on the road. The researchers have proposed different approaches that can broadly be categorized into one of three categories: vibration-based, depth-based, and GPS-based approaches.
### _Vibration-Based Approach_
Some works use sensors such as speedometers and accelerometers to identify and classify potholes with vehicle encounters [9, 10, 11, 12]. They do this by mathematically modeling the jerk induced by traveling over a pothole on the car's suspension and then analyzing the sensor input by feeding it into a Bayes decision classifier to record the proper response from hitting the pothole [13, 14]. Some approaches have even employed various deep learning techniques to analyze the accelerometer sensor data [15, 16, 17]. These approaches are fairly accurate in identifying when the vehicle encounters a pothole, but due to the limited reaction time of a vehicle, there is very little the algorithm can do to actually avoid the pothole other than know when it is encountered in real-time [18, 15]. There is also very little the algorithm can do in terms of reacting to the information being provided at the moment due to limited processing time and sensor limitations. The reason for this is that the approach can only formulate a reaction to the pothole once it detects it with the vibrations in the car, at which point it is too late to react properly. While this may be a useful aspect of the pothole collision problem by accurately recording the resulting discomfort levels of hitting a pothole, it cannot solve the initial pothole avoidance problem in its entirety [19, 20, 21, 22].
### _Depth-Based Approach_
To address the limitations of the vibration-based pothole avoidance approaches, other researchers have proposed a depth-based pothole mapping system that can use recorded data from LiDAR sensors and the approximate distance of the surface to identify pavement anomalies [20, 23, 24, 25]. A minor variation of this method employs stereo vision to capture 3-D road surface data [26, 27, 28]. This approach is one of the most reliable at identifying information about the upcoming potholes ahead of time, but a LiDAR sensor is limited to only gathering information about the distance to the pavement in front of the car. For the car to perform reactive and corrective steering, it needs to be aware of its surroundings and make decisions based on inputs from its environment as well. This is the same issue seen in the vibration-based approach proposed by [29, 30].
### _GPS-Based Approach_
Finally, there have been numerous methods proposed to crowd-source the recording of potholes similar to collecting data for Google Maps. These approaches use sensors in the car to identify when it has passed over a pothole, then it records this data along with GPS coordinates and sends it to a central server [11, 21, 31]. This information can then be used to warn other cars that are fitted with the same system of upcoming potholes [32]. However, this means that the car can only react if another car has already gone over the pothole at full speed and recorded it, and if the pothole is patched or worsened, the car will not have updated information as it will simply avoid the area altogether. This may also lead to some privacy concerns with user GPS data being monitored and transmitted through a third-party server, and this is overall not a feasible or viable real-time reactive approach or final solution [33, 13, 34].
### _The Gap_
Each of these three approaches utilizes a different strategy to predict the correct reaction to a pothole encounter. [19] analyzes the impact of the pothole on the car mid-collision, while [29] uses computer vision to classify and identify the pothole prior to it encountering the car. [33] and [11] propose GPS-based approaches that can identify the pothole well in advance but cannot give information to the self-driving system until one car goes over the pothole and does not have real-time information about the road situation around it. While there is much development being made on these different strategies to handle the moments prior to the encounter, there is very little research about the post-impact effects on the autonomous navigation system.
Therefore, the purpose of this research is to develop a technique for increasing the robustness of steering control algorithms in response to image perturbations due to the impact of a car with a pothole.
## III Method
We take the first steps towards our research goal by developing models on real-life datasets and evaluating our model in simulation. Our work can pave the way to perform real-world experiments where cars with cameras can be used to test various algorithms as the car is driven over potholes.
### _Generating 3D Representation of Potholes_
The first step in analyzing the effects of a pothole collision is to establish what the potholes themselves look like. There is a wide variety of potholes on the roads and the range of possible potholes to encounter is very large. In order to represent the average pothole collision, we must first take a sample of various potholes on an average roadway. To do this, we utilize a publicly available dataset that incorporates novel algorithms for road disparity (or inverse depth) transformation and deploys it on a semantic segmentation network [23, 24, 25]. This means that the researchers developed a technique to use multiple stereo cameras positioned at varying angles on a moving car to capture 3D visualizations of damage in roadways along with RGB overlays (color images) of the same damaged areas.
This dataset includes 600 samples of potholes split into 180 samples in a testing group, 240 samples in a training group, and 180 samples in a validation group. This ratio of images split into multiple groups is commonly used in machine learning applications, but since this research only uses this dataset to represent a sample of all potholes in roadways,
we proceed by combining all samples into one group of 600 potholes. This group contains an RGB color image (e.g. Figure 1), a heatmap in the jet color scale (e.g., Figure 2), and a strictly black and white label image (e.g. Figure 3). The heatmap uses the jet color scale because it is the default output color scale to show a displacement map of the pothole (indicating areas of low depth with the color blue and areas of high depths with the color red) in the road for each of the 600 images [23, 24, 25].
Next, the pothole images need to be reconstructed into 3D representations of potholes that can then be statistically analyzed to extract various properties of the potholes. These measurements can then be used with measurements from accurate car diagrams to determine the effect of driving over the pothole on a car and its camera.
For this 3-D reconstruction, we use Blender, which is a free and open-source powerful 3D modeling software used by multiple different industries for different parts of the 3D pipeline, such as modeling, sculpting, visual effects, computer graphics, UV Unwrapping, virtual reality, and even creating fully animated films.
Since there are 600 different potholes, it would be a lengthy process to individually combine the three images into a 3D model for each of the samples. Conveniently, Blender introduced a Python API in version 2.5 [35] that allows the Python programming language to interact with the Blender software. We therefore created a Blender script in Python (Algorithm 1, sketched below) to take each of the 1,800 images (600 groups of 3 images) and iteratively process them into complete 3D files.
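A minimal sketch of what such a script might look like is given here, using the standard `bpy` API. The file paths, the `POTHOLES` list, and the object and material names are hypothetical placeholders rather than the exact script used in this work; Algorithm 1 below gives the pseudocode for the same procedure.

```
import bpy

# Hypothetical list of (rgb_path, heatmap_path, output_path) triples for the 600 potholes.
POTHOLES = [("rgb_0001.png", "heatmap_0001.png", "pothole_0001.fbx")]

for rgb_path, heatmap_path, out_path in POTHOLES:
    # Add a plane mesh and subdivide it so the displacement has enough vertices to act on.
    bpy.ops.mesh.primitive_plane_add(size=2.0)
    plane = bpy.context.active_object
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.subdivide(number_cuts=7)
    bpy.ops.object.mode_set(mode='OBJECT')

    # Use the depth heatmap as a displacement modifier.
    tex = bpy.data.textures.new("pothole_depth", type='IMAGE')
    tex.image = bpy.data.images.load(heatmap_path)
    disp = plane.modifiers.new("Displace", type='DISPLACE')
    disp.texture = tex

    # Create a material whose base color texture is the RGB scan and assign it to the plane.
    mat = bpy.data.materials.new("pothole_material")
    mat.use_nodes = True
    img_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
    img_node.image = bpy.data.images.load(rgb_path)
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    mat.node_tree.links.new(img_node.outputs['Color'], bsdf.inputs['Base Color'])
    plane.data.materials.append(mat)

    # Shade smooth and export the 3D point-space data as FBX.
    bpy.ops.object.shade_smooth()
    bpy.ops.export_scene.fbx(filepath=out_path)

    # Clear the workspace before processing the next pothole.
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete()
```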
```
input: RGB, heatmap, and label images for each pothole in the pothole-600 dataset
output: a dataset consisting of a 3-D model for every pothole in the pothole-600 dataset
for each RGB, heatmap, label image in dataset do
    Add plane mesh
    Subdivide mesh by 7 divisions
    Add heatmap as displacement modifier to plane
    Create new material shader and add RGB image as texture
    Apply material shader to model
    Shade smooth the mesh and export as fbx (for 3D point-space data)
    Delete all generated assets to clear workspace for processing next pothole
endfor
```
**Algorithm 1** Blender pothole generation
### _Statistical Characterization of Potholes_
Now that we have a collection of 600 physical models of potholes, the next step is to obtain a statistical representation of those potholes. For this purpose, we developed another Python program to process all of the 3D objects created by the Blender program. This Python program imports each 3D object and then saves a matrix of gradients for every point on the model, essentially creating a dataset of slopes (rate of change in depth of the pothole) for every pixel of the model. Then, the program discretizes this matrix into a \(10\times10\) set of equally-sized squares, giving us 100 different small portions of the pothole. We then take the average gradient for each of the 100 squares (taking the direction of each of the slopes into account) and save this as the new matrix representing the model of the pothole. The reason we do this is to reduce the amount of data that is needed to represent the pothole, which makes it computationally efficient.
After the program repeats the pothole modeling process for each of the 600 models, it saves the \(10\times10\) matrix of average gradients of each pothole into one big dataset containing 60,000 values representing each of the potholes. Doing so turns the entire set of 3D pothole models into a low-level representative dataset of numerical values, on which we can then perform a statistical analysis to obtain average statistics and measures of variability and central tendency in the sample of potholes.
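A compact numpy sketch of this characterization step is shown below, assuming each pothole has already been converted to a regular grid of depth values. The array shapes, grid resolution, and function names are illustrative assumptions rather than the exact program used here, and the use of the gradient magnitude (instead of the direction-aware average described above) is a simplification.

```
import numpy as np

def characterize_pothole(depth_map, blocks=10):
    """Reduce one pothole's depth grid to a blocks x blocks matrix of average gradients."""
    gy, gx = np.gradient(depth_map)                   # per-pixel slope of the depth along y and x
    magnitude = np.hypot(gx, gy)                      # gradient magnitude for every pixel
    h, w = magnitude.shape
    h_trim, w_trim = h - h % blocks, w - w % blocks   # crop so the grid divides evenly
    tiles = magnitude[:h_trim, :w_trim].reshape(
        blocks, h_trim // blocks, blocks, w_trim // blocks)
    return tiles.mean(axis=(1, 3))                    # one average gradient per square

def build_dataset(depth_maps):
    """Stack the 600 pothole matrices into a 600 x 100 dataset and summarize depths (in mm)."""
    features = np.stack([characterize_pothole(d).ravel() for d in depth_maps])
    depths_mm = np.array([d.max() - d.min() for d in depth_maps])
    stats = {"mean": depths_mm.mean(), "std": depths_mm.std(), "max": depths_mm.max()}
    return features, stats
```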
| Statistic | Value (mm) |
| --- | --- |
| mean | 61.724417 |
| std | 10.208508 |
| min | 47.450000 |
| 25% | 55.812500 |
| 50% | 59.490000 |
| 75% | 63.862500 |
| max | 108.500000 |

TABLE I: Statistical output from Algorithm 2 on 600 images
Fig. 4: 3D Object created by the program in Algorithm 1 for a sample pothole
Fig. 1: RGB scan of top-down view for a sample pothole
Fig. 2: Jet colorscale heatmap of depth for a sample pothole
### _Estimating Camera Angle Perturbation_
Now that we have a statistical representation of potholes, our next step is to estimate the change in the camera angle when the car hits the pothole. For this purpose, we analyze the physical construction of a typical vehicle and treat it as a rigid body, then calculate the effect of hitting the average pothole on the image input from a dash cam mounted facing the front of the car that can be used for autonomous navigation. In order to do this, some basic assumptions about the dimensions of the average car must be made.
In 2022, the best-selling passenger car worldwide was the Toyota Corolla, with 1.12 million units sold [36]. Therefore, we use the Toyota Corolla to represent our average, everyday car driving on the roadways.
In Figure 5, \(a\) represents the width of the car from the middle of each wheel. This is estimated using the baseline assumption of the Toyota Corolla, which has a width of 1,780 mm (1.78 m). Let \(b\) represent the amount that the wheel dips down due to a pothole impact. For the purposes of this research, we assume that the car is a rigid body. Therefore, we can simply use the sample distribution of pothole depths given by our statistical analysis of the pothole models and directly plug them into \(b\) to get the offset of the car. After that information is plugged in, we need only to perform some trigonometry to obtain our distribution of angles \(\theta_{2}\) that represents the average dash cam perturbation in terms of the angle by which the image is shifted.
\[\theta_{1} =\arctan\frac{b}{a},\theta_{2}=\arctan\frac{2b}{a}\] \[\theta_{2} =\arctan\frac{123.44}{1780},\text{if }a=1780mm,b\approx 61.72mm\] \[\theta_{2} \approx 0.0692\times\frac{180}{\pi}=3.967^{\circ}\] (Derivation 1)
Derivation 1 outlines the steps for converting pothole depths to camera angle perturbations. Next, we perform the steps in Derivation 1 on the distribution of pothole angle depths outlined in Table I to obtain a distribution of angle perturbations to dashcam images.
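The depth-to-angle conversion in Derivation 1 can be scripted directly; a small sketch is given below. The track width and the depth statistics are taken from the assumptions above and Table I, while sampling depths from a normal distribution is our own simplification (the actual conversion is applied to the empirical distribution of the 600 potholes).

```
import numpy as np

TRACK_WIDTH_MM = 1780.0   # assumed Toyota Corolla width between wheel centers (a)

def perturbation_angle_deg(depth_mm):
    """Derivation 1: camera roll angle (theta_2) when one wheel drops by depth_mm (b)."""
    return np.degrees(np.arctan(2.0 * depth_mm / TRACK_WIDTH_MM))

# Illustrative sampling around the Table I statistics (mean 61.72 mm, std 10.21 mm).
rng = np.random.default_rng(0)
depths_mm = rng.normal(61.72, 10.21, size=10_000)
angles_deg = perturbation_angle_deg(depths_mm)
print(f"mean camera perturbation ~ {angles_deg.mean():.2f} degrees")   # close to the 3.97 degrees above
```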
### _Predicting the Steering Angle Correction_
Finally, we develop a steering-angle prediction model that can provide the change in steering angle needed to become resilient against angle perturbations of the camera when the car goes over a pothole.
To represent the images for training a prediction model, we adopt the AutoJoin AutoEncoder [38]. The basic concept of an autoencoder is to reduce input images to a low-dimensional representation (essentially a blurry image with much less information and detail) and then train the machine learning model to recreate the original image from that low-dimensional representation. Since the model can now recreate an image from one containing less data, this technique is commonly used in denoisers to add detail to an image, but in the case of AutoJoin, it is implemented to increase the robustness of an existing autonomous navigation system against noise in dashcam footage.
AutoJoin trains a denoising autoencoder (DAE) that can predict the steering angle from images that are perturbed due to changes in color, saturation, and brightness. They use a joint loss function that includes errors in the predicted steering angle and the reconstructed image from the perturbed images. We instead focus on angular perturbations in images induced by pothole encounters and hence our loss function only considers the errors in the predicted steering angle.
For this modeling, we use a dataset of dashcam images provided by Audi [39] and Honda [40], which has about 850,000 training, testing, and validation images. We then apply the distribution of angle perturbations to the training images in this dataset to generate the perturbed training images. For this research, we use the NVIDIA autonomous navigation model (a pre-trained neural network created by NVIDIA to take an input dashcam image and return a predicted steering angle) for steering angle prediction.
Fig. 5: Diagram representing a typical pothole collision with a car [37]
For our autoencoder, the pre-trained model is fed the original dashcam image as the input and the predicted steering angle is regarded as the ground truth or actual correct steering angle in normal conditions. Then, a perturbed dashcam image (as if the car is currently going over a pothole) is fed into the model, and the difference in the predicted output steering angle for the two images (the ground truth steering angle and the perturbed image steering angle) is calculated.
The autoencoder is trained to reduce the noise in the perturbed image and introduce bias in the model such that the difference between the actual steering angle and the steering angle predicted by the pre-trained model on the perturbed dashcam image is minimized until the model gets as close as possible to the original output even on a perturbed image. Once this happens, the self-driving algorithm has essentially become robust against the image perturbation we have introduced, and will not be nearly as severely affected by the change in dashcam angle introduced when a car is driven over a pothole.
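A minimal PyTorch sketch of this training setup is shown below, assuming a frozen pre-trained steering network `steering_net` (standing in for the NVIDIA model) and a small convolutional denoising autoencoder. The architecture, module names, and hyperparameters are placeholders rather than the exact AutoJoin-based implementation used here.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAutoencoder(nn.Module):
    """Placeholder DAE that maps a perturbed dash cam frame back towards the clean one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(dae, steering_net, optimizer, clean_img, perturbed_img):
    """One optimization step: only the DAE is updated, the steering network stays frozen."""
    steering_net.eval()
    with torch.no_grad():
        target_angle = steering_net(clean_img)       # ground-truth angle from the clean frame
    pred_angle = steering_net(dae(perturbed_img))    # angle predicted after denoising
    loss = F.mse_loss(pred_angle, target_angle)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                 # optimizer is built over dae.parameters()
    return loss.item()
```

Because the optimizer is constructed only over the autoencoder's parameters, the pre-trained steering model itself is never modified, which is consistent with the point made later that no major restructuring of the preexisting self-driving model is required.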
## IV Experiments
We trained and tested our autoencoder on about 500,000 images for 50 epochs (meaning that it goes over each image 50 times when relearning the correct model output) when the model's performance began to stabilize. For our experiments, we used a desktop computer with 32 gigabytes of RAM and an NVIDIA GeForce RTX 3090, which has Tensor cores optimized for processing the large matrices associated with this model training process.
In order to evaluate the performance of the model on the perturbations, we use the Mean Squared Error (MSE) between the steering angle our model predicts from the perturbed image and the steering angle determined from the original unperturbed image.
We analyze the change in the MSE after each epoch. As shown in Figure 6, our model is able to reduce the validation error in the estimated angle to stabilize at approximately 2.3% after 50 epochs.
## V Future Works
As the experiments show (in Section IV), our proposed model can be used to make autonomous steering control more resistant to pothole-induced image perturbations. There are many limitations to this work that open up new opportunities to future research.
There were multiple assumptions made in this research that may limit the potential implications of our findings. The most major assumptions were made during the creation of the image perturbation distribution, such as when the car was assumed to be a Toyota Corolla and the car was treated as a rigid body with no suspension. This likely skewed our data to have harsher angles than there would be in real cars, but this was not enough to pose a risk of our model becoming overly resilient. In the real world, the car could be one of a very wide selection of self-driving vehicles and it will likely have a suspension of varying levels of effectiveness. This test is also based almost entirely on simulation, so the technique needs to be implemented in a physical car to test the true impact of the robust model in real-world scenarios. If this were to happen, the car wheelbase and the impact of suspension should be taken into account and incorporated into the experiment to obtain the most accurate results.
The initial pothole sample itself is also a limitation: it is not a completely representative sample of all potholes, but only those that the ZED stereo camera was able to capture. There may be other deformations in the road, such as broken speed bumps or major dips, that have a similar image perturbation effect on the dash cam footage but were not incorporated in the training of our robust model. While this may skew our results to be more or less severe depending on the frequency of such alternative road deformations on the road being tested, the overall effect on the camera is similar, so the robust model should still be able to handle the perturbation despite not being trained and optimized for that scenario.
We only presented the Mean Squared Error (MSE) observed in the proposed model. Future work can compare it with non-trivial baseline methods as well as other alternative approaches and present Average Mean Accuracy Improvement (AMAI) [8].
Ultimately, any model developed for improving autonomous steering control needs to be implemented in existing autonomous navigation systems and tested in real life. Fortunately, the model proposed here can be incorporated into existing autonomous vehicles, without the need for any major restructuring of the preexisting self-driving model.
## VI Conclusion
This research proposed a new model for creating robustness in an autonomous navigation algorithm to be resilient to perturbations in camera angles at the time of pothole encounters. Improving it further and incorporating it in production vehicles can improve the accuracy of steering control. Even after pothole avoidance becomes a standard feature, it may still not be possible to avoid many potholes due to multiple colocated potholes or not enough margin to maneuver around the pothole in heavy traffic. Therefore, compensating for angle perturbation when encountering potholes will continue to be useful in autonomous vehicles in the foreseeable future.
Fig. 6: Reduction in the Mean Squared Error (MSE) in the estimated steering angle with increasing epochs |